Nov 24 08:47:38 localhost kernel: Linux version 5.14.0-639.el9.x86_64 (mockbuild@x86-05.stream.rdu2.redhat.com) (gcc (GCC) 11.5.0 20240719 (Red Hat 11.5.0-14), GNU ld version 2.35.2-67.el9) #1 SMP PREEMPT_DYNAMIC Sat Nov 15 10:30:41 UTC 2025
Nov 24 08:47:38 localhost kernel: The list of certified hardware and cloud instances for Red Hat Enterprise Linux 9 can be viewed at the Red Hat Ecosystem Catalog, https://catalog.redhat.com.
Nov 24 08:47:38 localhost kernel: Command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-639.el9.x86_64 root=UUID=47e3724e-7a1b-439a-9543-b98c9a290709 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Nov 24 08:47:38 localhost kernel: BIOS-provided physical RAM map:
Nov 24 08:47:38 localhost kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Nov 24 08:47:38 localhost kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Nov 24 08:47:38 localhost kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Nov 24 08:47:38 localhost kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bffdafff] usable
Nov 24 08:47:38 localhost kernel: BIOS-e820: [mem 0x00000000bffdb000-0x00000000bfffffff] reserved
Nov 24 08:47:38 localhost kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Nov 24 08:47:38 localhost kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Nov 24 08:47:38 localhost kernel: BIOS-e820: [mem 0x0000000100000000-0x000000023fffffff] usable
Nov 24 08:47:38 localhost kernel: NX (Execute Disable) protection: active
Nov 24 08:47:38 localhost kernel: APIC: Static calls initialized
Nov 24 08:47:38 localhost kernel: SMBIOS 2.8 present.
Nov 24 08:47:38 localhost kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.15.0-1 04/01/2014
Nov 24 08:47:38 localhost kernel: Hypervisor detected: KVM
Nov 24 08:47:38 localhost kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Nov 24 08:47:38 localhost kernel: kvm-clock: using sched offset of 6361643521 cycles
Nov 24 08:47:38 localhost kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Nov 24 08:47:38 localhost kernel: tsc: Detected 2799.998 MHz processor
Nov 24 08:47:38 localhost kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Nov 24 08:47:38 localhost kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Nov 24 08:47:38 localhost kernel: last_pfn = 0x240000 max_arch_pfn = 0x400000000
Nov 24 08:47:38 localhost kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Nov 24 08:47:38 localhost kernel: x86/PAT: Configuration [0-7]: WB  WC  UC- UC  WB  WP  UC- WT  
Nov 24 08:47:38 localhost kernel: last_pfn = 0xbffdb max_arch_pfn = 0x400000000
Nov 24 08:47:38 localhost kernel: found SMP MP-table at [mem 0x000f5ae0-0x000f5aef]
Nov 24 08:47:38 localhost kernel: Using GB pages for direct mapping
Nov 24 08:47:38 localhost kernel: RAMDISK: [mem 0x2d83a000-0x32c14fff]
Nov 24 08:47:38 localhost kernel: ACPI: Early table checksum verification disabled
Nov 24 08:47:38 localhost kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS )
Nov 24 08:47:38 localhost kernel: ACPI: RSDT 0x00000000BFFE16BD 000030 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Nov 24 08:47:38 localhost kernel: ACPI: FACP 0x00000000BFFE1571 000074 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Nov 24 08:47:38 localhost kernel: ACPI: DSDT 0x00000000BFFDFC80 0018F1 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Nov 24 08:47:38 localhost kernel: ACPI: FACS 0x00000000BFFDFC40 000040
Nov 24 08:47:38 localhost kernel: ACPI: APIC 0x00000000BFFE15E5 0000B0 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Nov 24 08:47:38 localhost kernel: ACPI: WAET 0x00000000BFFE1695 000028 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Nov 24 08:47:38 localhost kernel: ACPI: Reserving FACP table memory at [mem 0xbffe1571-0xbffe15e4]
Nov 24 08:47:38 localhost kernel: ACPI: Reserving DSDT table memory at [mem 0xbffdfc80-0xbffe1570]
Nov 24 08:47:38 localhost kernel: ACPI: Reserving FACS table memory at [mem 0xbffdfc40-0xbffdfc7f]
Nov 24 08:47:38 localhost kernel: ACPI: Reserving APIC table memory at [mem 0xbffe15e5-0xbffe1694]
Nov 24 08:47:38 localhost kernel: ACPI: Reserving WAET table memory at [mem 0xbffe1695-0xbffe16bc]
Nov 24 08:47:38 localhost kernel: No NUMA configuration found
Nov 24 08:47:38 localhost kernel: Faking a node at [mem 0x0000000000000000-0x000000023fffffff]
Nov 24 08:47:38 localhost kernel: NODE_DATA(0) allocated [mem 0x23ffd5000-0x23fffffff]
Nov 24 08:47:38 localhost kernel: crashkernel reserved: 0x00000000af000000 - 0x00000000bf000000 (256 MB)
Nov 24 08:47:38 localhost kernel: Zone ranges:
Nov 24 08:47:38 localhost kernel:   DMA      [mem 0x0000000000001000-0x0000000000ffffff]
Nov 24 08:47:38 localhost kernel:   DMA32    [mem 0x0000000001000000-0x00000000ffffffff]
Nov 24 08:47:38 localhost kernel:   Normal   [mem 0x0000000100000000-0x000000023fffffff]
Nov 24 08:47:38 localhost kernel:   Device   empty
Nov 24 08:47:38 localhost kernel: Movable zone start for each node
Nov 24 08:47:38 localhost kernel: Early memory node ranges
Nov 24 08:47:38 localhost kernel:   node   0: [mem 0x0000000000001000-0x000000000009efff]
Nov 24 08:47:38 localhost kernel:   node   0: [mem 0x0000000000100000-0x00000000bffdafff]
Nov 24 08:47:38 localhost kernel:   node   0: [mem 0x0000000100000000-0x000000023fffffff]
Nov 24 08:47:38 localhost kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000023fffffff]
Nov 24 08:47:38 localhost kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Nov 24 08:47:38 localhost kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Nov 24 08:47:38 localhost kernel: On node 0, zone Normal: 37 pages in unavailable ranges
Nov 24 08:47:38 localhost kernel: ACPI: PM-Timer IO Port: 0x608
Nov 24 08:47:38 localhost kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Nov 24 08:47:38 localhost kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Nov 24 08:47:38 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Nov 24 08:47:38 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Nov 24 08:47:38 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Nov 24 08:47:38 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Nov 24 08:47:38 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Nov 24 08:47:38 localhost kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Nov 24 08:47:38 localhost kernel: TSC deadline timer available
Nov 24 08:47:38 localhost kernel: CPU topo: Max. logical packages:   8
Nov 24 08:47:38 localhost kernel: CPU topo: Max. logical dies:       8
Nov 24 08:47:38 localhost kernel: CPU topo: Max. dies per package:   1
Nov 24 08:47:38 localhost kernel: CPU topo: Max. threads per core:   1
Nov 24 08:47:38 localhost kernel: CPU topo: Num. cores per package:     1
Nov 24 08:47:38 localhost kernel: CPU topo: Num. threads per package:   1
Nov 24 08:47:38 localhost kernel: CPU topo: Allowing 8 present CPUs plus 0 hotplug CPUs
Nov 24 08:47:38 localhost kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Nov 24 08:47:38 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x00000000-0x00000fff]
Nov 24 08:47:38 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x0009f000-0x0009ffff]
Nov 24 08:47:38 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x000a0000-0x000effff]
Nov 24 08:47:38 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x000f0000-0x000fffff]
Nov 24 08:47:38 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xbffdb000-0xbfffffff]
Nov 24 08:47:38 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xc0000000-0xfeffbfff]
Nov 24 08:47:38 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xfeffc000-0xfeffffff]
Nov 24 08:47:38 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xff000000-0xfffbffff]
Nov 24 08:47:38 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xfffc0000-0xffffffff]
Nov 24 08:47:38 localhost kernel: [mem 0xc0000000-0xfeffbfff] available for PCI devices
Nov 24 08:47:38 localhost kernel: Booting paravirtualized kernel on KVM
Nov 24 08:47:38 localhost kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Nov 24 08:47:38 localhost kernel: setup_percpu: NR_CPUS:8192 nr_cpumask_bits:8 nr_cpu_ids:8 nr_node_ids:1
Nov 24 08:47:38 localhost kernel: percpu: Embedded 64 pages/cpu s225280 r8192 d28672 u262144
Nov 24 08:47:38 localhost kernel: pcpu-alloc: s225280 r8192 d28672 u262144 alloc=1*2097152
Nov 24 08:47:38 localhost kernel: pcpu-alloc: [0] 0 1 2 3 4 5 6 7 
Nov 24 08:47:38 localhost kernel: kvm-guest: PV spinlocks disabled, no host support
Nov 24 08:47:38 localhost kernel: Kernel command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-639.el9.x86_64 root=UUID=47e3724e-7a1b-439a-9543-b98c9a290709 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Nov 24 08:47:38 localhost kernel: Unknown kernel command line parameters "BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-639.el9.x86_64", will be passed to user space.
Nov 24 08:47:38 localhost kernel: random: crng init done
Nov 24 08:47:38 localhost kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Nov 24 08:47:38 localhost kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Nov 24 08:47:38 localhost kernel: Fallback order for Node 0: 0 
Nov 24 08:47:38 localhost kernel: Built 1 zonelists, mobility grouping on.  Total pages: 2064091
Nov 24 08:47:38 localhost kernel: Policy zone: Normal
Nov 24 08:47:38 localhost kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Nov 24 08:47:38 localhost kernel: software IO TLB: area num 8.
Nov 24 08:47:38 localhost kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=8, Nodes=1
Nov 24 08:47:38 localhost kernel: ftrace: allocating 49298 entries in 193 pages
Nov 24 08:47:38 localhost kernel: ftrace: allocated 193 pages with 3 groups
Nov 24 08:47:38 localhost kernel: Dynamic Preempt: voluntary
Nov 24 08:47:38 localhost kernel: rcu: Preemptible hierarchical RCU implementation.
Nov 24 08:47:38 localhost kernel: rcu:         RCU event tracing is enabled.
Nov 24 08:47:38 localhost kernel: rcu:         RCU restricting CPUs from NR_CPUS=8192 to nr_cpu_ids=8.
Nov 24 08:47:38 localhost kernel:         Trampoline variant of Tasks RCU enabled.
Nov 24 08:47:38 localhost kernel:         Rude variant of Tasks RCU enabled.
Nov 24 08:47:38 localhost kernel:         Tracing variant of Tasks RCU enabled.
Nov 24 08:47:38 localhost kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Nov 24 08:47:38 localhost kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=8
Nov 24 08:47:38 localhost kernel: RCU Tasks: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Nov 24 08:47:38 localhost kernel: RCU Tasks Rude: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Nov 24 08:47:38 localhost kernel: RCU Tasks Trace: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Nov 24 08:47:38 localhost kernel: NR_IRQS: 524544, nr_irqs: 488, preallocated irqs: 16
Nov 24 08:47:38 localhost kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Nov 24 08:47:38 localhost kernel: kfence: initialized - using 2097152 bytes for 255 objects at 0x(____ptrval____)-0x(____ptrval____)
Nov 24 08:47:38 localhost kernel: Console: colour VGA+ 80x25
Nov 24 08:47:38 localhost kernel: printk: console [ttyS0] enabled
Nov 24 08:47:38 localhost kernel: ACPI: Core revision 20230331
Nov 24 08:47:38 localhost kernel: APIC: Switch to symmetric I/O mode setup
Nov 24 08:47:38 localhost kernel: x2apic enabled
Nov 24 08:47:38 localhost kernel: APIC: Switched APIC routing to: physical x2apic
Nov 24 08:47:38 localhost kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Nov 24 08:47:38 localhost kernel: Calibrating delay loop (skipped) preset value.. 5599.99 BogoMIPS (lpj=2799998)
Nov 24 08:47:38 localhost kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Nov 24 08:47:38 localhost kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Nov 24 08:47:38 localhost kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Nov 24 08:47:38 localhost kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Nov 24 08:47:38 localhost kernel: Spectre V2 : Mitigation: Retpolines
Nov 24 08:47:38 localhost kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Nov 24 08:47:38 localhost kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Nov 24 08:47:38 localhost kernel: RETBleed: Mitigation: untrained return thunk
Nov 24 08:47:38 localhost kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Nov 24 08:47:38 localhost kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Nov 24 08:47:38 localhost kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Nov 24 08:47:38 localhost kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Nov 24 08:47:38 localhost kernel: x86/bugs: return thunk changed
Nov 24 08:47:38 localhost kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Nov 24 08:47:38 localhost kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Nov 24 08:47:38 localhost kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Nov 24 08:47:38 localhost kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Nov 24 08:47:38 localhost kernel: x86/fpu: xstate_offset[2]:  576, xstate_sizes[2]:  256
Nov 24 08:47:38 localhost kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Nov 24 08:47:38 localhost kernel: Freeing SMP alternatives memory: 40K
Nov 24 08:47:38 localhost kernel: pid_max: default: 32768 minimum: 301
Nov 24 08:47:38 localhost kernel: LSM: initializing lsm=lockdown,capability,landlock,yama,integrity,selinux,bpf
Nov 24 08:47:38 localhost kernel: landlock: Up and running.
Nov 24 08:47:38 localhost kernel: Yama: becoming mindful.
Nov 24 08:47:38 localhost kernel: SELinux:  Initializing.
Nov 24 08:47:38 localhost kernel: LSM support for eBPF active
Nov 24 08:47:38 localhost kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Nov 24 08:47:38 localhost kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Nov 24 08:47:38 localhost kernel: smpboot: CPU0: AMD EPYC-Rome Processor (family: 0x17, model: 0x31, stepping: 0x0)
Nov 24 08:47:38 localhost kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Nov 24 08:47:38 localhost kernel: ... version:                0
Nov 24 08:47:38 localhost kernel: ... bit width:              48
Nov 24 08:47:38 localhost kernel: ... generic registers:      6
Nov 24 08:47:38 localhost kernel: ... value mask:             0000ffffffffffff
Nov 24 08:47:38 localhost kernel: ... max period:             00007fffffffffff
Nov 24 08:47:38 localhost kernel: ... fixed-purpose events:   0
Nov 24 08:47:38 localhost kernel: ... event mask:             000000000000003f
Nov 24 08:47:38 localhost kernel: signal: max sigframe size: 1776
Nov 24 08:47:38 localhost kernel: rcu: Hierarchical SRCU implementation.
Nov 24 08:47:38 localhost kernel: rcu:         Max phase no-delay instances is 400.
Nov 24 08:47:38 localhost kernel: smp: Bringing up secondary CPUs ...
Nov 24 08:47:38 localhost kernel: smpboot: x86: Booting SMP configuration:
Nov 24 08:47:38 localhost kernel: .... node  #0, CPUs:      #1 #2 #3 #4 #5 #6 #7
Nov 24 08:47:38 localhost kernel: smp: Brought up 1 node, 8 CPUs
Nov 24 08:47:38 localhost kernel: smpboot: Total of 8 processors activated (44799.96 BogoMIPS)
Nov 24 08:47:38 localhost kernel: node 0 deferred pages initialised in 9ms
Nov 24 08:47:38 localhost kernel: Memory: 7765704K/8388068K available (16384K kernel code, 5786K rwdata, 13900K rodata, 4188K init, 7176K bss, 616268K reserved, 0K cma-reserved)
Nov 24 08:47:38 localhost kernel: devtmpfs: initialized
Nov 24 08:47:38 localhost kernel: x86/mm: Memory block size: 128MB
Nov 24 08:47:38 localhost kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Nov 24 08:47:38 localhost kernel: futex hash table entries: 2048 (order: 5, 131072 bytes, linear)
Nov 24 08:47:38 localhost kernel: pinctrl core: initialized pinctrl subsystem
Nov 24 08:47:38 localhost kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Nov 24 08:47:38 localhost kernel: DMA: preallocated 1024 KiB GFP_KERNEL pool for atomic allocations
Nov 24 08:47:38 localhost kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Nov 24 08:47:38 localhost kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Nov 24 08:47:38 localhost kernel: audit: initializing netlink subsys (disabled)
Nov 24 08:47:38 localhost kernel: audit: type=2000 audit(1763974056.447:1): state=initialized audit_enabled=0 res=1
Nov 24 08:47:38 localhost kernel: thermal_sys: Registered thermal governor 'fair_share'
Nov 24 08:47:38 localhost kernel: thermal_sys: Registered thermal governor 'step_wise'
Nov 24 08:47:38 localhost kernel: thermal_sys: Registered thermal governor 'user_space'
Nov 24 08:47:38 localhost kernel: cpuidle: using governor menu
Nov 24 08:47:38 localhost kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Nov 24 08:47:38 localhost kernel: PCI: Using configuration type 1 for base access
Nov 24 08:47:38 localhost kernel: PCI: Using configuration type 1 for extended access
Nov 24 08:47:38 localhost kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Nov 24 08:47:38 localhost kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Nov 24 08:47:38 localhost kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Nov 24 08:47:38 localhost kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Nov 24 08:47:38 localhost kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Nov 24 08:47:38 localhost kernel: Demotion targets for Node 0: null
Nov 24 08:47:38 localhost kernel: cryptd: max_cpu_qlen set to 1000
Nov 24 08:47:38 localhost kernel: ACPI: Added _OSI(Module Device)
Nov 24 08:47:38 localhost kernel: ACPI: Added _OSI(Processor Device)
Nov 24 08:47:38 localhost kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Nov 24 08:47:38 localhost kernel: ACPI: Added _OSI(Processor Aggregator Device)
Nov 24 08:47:38 localhost kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Nov 24 08:47:38 localhost kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Nov 24 08:47:38 localhost kernel: ACPI: Interpreter enabled
Nov 24 08:47:38 localhost kernel: ACPI: PM: (supports S0 S3 S4 S5)
Nov 24 08:47:38 localhost kernel: ACPI: Using IOAPIC for interrupt routing
Nov 24 08:47:38 localhost kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Nov 24 08:47:38 localhost kernel: PCI: Using E820 reservations for host bridge windows
Nov 24 08:47:38 localhost kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Nov 24 08:47:38 localhost kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Nov 24 08:47:38 localhost kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI EDR HPX-Type3]
Nov 24 08:47:38 localhost kernel: acpiphp: Slot [3] registered
Nov 24 08:47:38 localhost kernel: acpiphp: Slot [4] registered
Nov 24 08:47:38 localhost kernel: acpiphp: Slot [5] registered
Nov 24 08:47:38 localhost kernel: acpiphp: Slot [6] registered
Nov 24 08:47:38 localhost kernel: acpiphp: Slot [7] registered
Nov 24 08:47:38 localhost kernel: acpiphp: Slot [8] registered
Nov 24 08:47:38 localhost kernel: acpiphp: Slot [9] registered
Nov 24 08:47:38 localhost kernel: acpiphp: Slot [10] registered
Nov 24 08:47:38 localhost kernel: acpiphp: Slot [11] registered
Nov 24 08:47:38 localhost kernel: acpiphp: Slot [12] registered
Nov 24 08:47:38 localhost kernel: acpiphp: Slot [13] registered
Nov 24 08:47:38 localhost kernel: acpiphp: Slot [14] registered
Nov 24 08:47:38 localhost kernel: acpiphp: Slot [15] registered
Nov 24 08:47:38 localhost kernel: acpiphp: Slot [16] registered
Nov 24 08:47:38 localhost kernel: acpiphp: Slot [17] registered
Nov 24 08:47:38 localhost kernel: acpiphp: Slot [18] registered
Nov 24 08:47:38 localhost kernel: acpiphp: Slot [19] registered
Nov 24 08:47:38 localhost kernel: acpiphp: Slot [20] registered
Nov 24 08:47:38 localhost kernel: acpiphp: Slot [21] registered
Nov 24 08:47:38 localhost kernel: acpiphp: Slot [22] registered
Nov 24 08:47:38 localhost kernel: acpiphp: Slot [23] registered
Nov 24 08:47:38 localhost kernel: acpiphp: Slot [24] registered
Nov 24 08:47:38 localhost kernel: acpiphp: Slot [25] registered
Nov 24 08:47:38 localhost kernel: acpiphp: Slot [26] registered
Nov 24 08:47:38 localhost kernel: acpiphp: Slot [27] registered
Nov 24 08:47:38 localhost kernel: acpiphp: Slot [28] registered
Nov 24 08:47:38 localhost kernel: acpiphp: Slot [29] registered
Nov 24 08:47:38 localhost kernel: acpiphp: Slot [30] registered
Nov 24 08:47:38 localhost kernel: acpiphp: Slot [31] registered
Nov 24 08:47:38 localhost kernel: PCI host bridge to bus 0000:00
Nov 24 08:47:38 localhost kernel: pci_bus 0000:00: root bus resource [io  0x0000-0x0cf7 window]
Nov 24 08:47:38 localhost kernel: pci_bus 0000:00: root bus resource [io  0x0d00-0xffff window]
Nov 24 08:47:38 localhost kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Nov 24 08:47:38 localhost kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Nov 24 08:47:38 localhost kernel: pci_bus 0000:00: root bus resource [mem 0x240000000-0x2bfffffff window]
Nov 24 08:47:38 localhost kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Nov 24 08:47:38 localhost kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint
Nov 24 08:47:38 localhost kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 conventional PCI endpoint
Nov 24 08:47:38 localhost kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 conventional PCI endpoint
Nov 24 08:47:38 localhost kernel: pci 0000:00:01.1: BAR 4 [io  0xc140-0xc14f]
Nov 24 08:47:38 localhost kernel: pci 0000:00:01.1: BAR 0 [io  0x01f0-0x01f7]: legacy IDE quirk
Nov 24 08:47:38 localhost kernel: pci 0000:00:01.1: BAR 1 [io  0x03f6]: legacy IDE quirk
Nov 24 08:47:38 localhost kernel: pci 0000:00:01.1: BAR 2 [io  0x0170-0x0177]: legacy IDE quirk
Nov 24 08:47:38 localhost kernel: pci 0000:00:01.1: BAR 3 [io  0x0376]: legacy IDE quirk
Nov 24 08:47:38 localhost kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300 conventional PCI endpoint
Nov 24 08:47:38 localhost kernel: pci 0000:00:01.2: BAR 4 [io  0xc100-0xc11f]
Nov 24 08:47:38 localhost kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 conventional PCI endpoint
Nov 24 08:47:38 localhost kernel: pci 0000:00:01.3: quirk: [io  0x0600-0x063f] claimed by PIIX4 ACPI
Nov 24 08:47:38 localhost kernel: pci 0000:00:01.3: quirk: [io  0x0700-0x070f] claimed by PIIX4 SMB
Nov 24 08:47:38 localhost kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 conventional PCI endpoint
Nov 24 08:47:38 localhost kernel: pci 0000:00:02.0: BAR 0 [mem 0xfe000000-0xfe7fffff pref]
Nov 24 08:47:38 localhost kernel: pci 0000:00:02.0: BAR 2 [mem 0xfe800000-0xfe803fff 64bit pref]
Nov 24 08:47:38 localhost kernel: pci 0000:00:02.0: BAR 4 [mem 0xfeb90000-0xfeb90fff]
Nov 24 08:47:38 localhost kernel: pci 0000:00:02.0: ROM [mem 0xfeb80000-0xfeb8ffff pref]
Nov 24 08:47:38 localhost kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Nov 24 08:47:38 localhost kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Nov 24 08:47:38 localhost kernel: pci 0000:00:03.0: BAR 0 [io  0xc080-0xc0bf]
Nov 24 08:47:38 localhost kernel: pci 0000:00:03.0: BAR 1 [mem 0xfeb91000-0xfeb91fff]
Nov 24 08:47:38 localhost kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe804000-0xfe807fff 64bit pref]
Nov 24 08:47:38 localhost kernel: pci 0000:00:03.0: ROM [mem 0xfeb00000-0xfeb7ffff pref]
Nov 24 08:47:38 localhost kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Nov 24 08:47:38 localhost kernel: pci 0000:00:04.0: BAR 0 [io  0xc000-0xc07f]
Nov 24 08:47:38 localhost kernel: pci 0000:00:04.0: BAR 1 [mem 0xfeb92000-0xfeb92fff]
Nov 24 08:47:38 localhost kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe808000-0xfe80bfff 64bit pref]
Nov 24 08:47:38 localhost kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00 conventional PCI endpoint
Nov 24 08:47:38 localhost kernel: pci 0000:00:05.0: BAR 0 [io  0xc0c0-0xc0ff]
Nov 24 08:47:38 localhost kernel: pci 0000:00:05.0: BAR 4 [mem 0xfe80c000-0xfe80ffff 64bit pref]
Nov 24 08:47:38 localhost kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Nov 24 08:47:38 localhost kernel: pci 0000:00:06.0: BAR 0 [io  0xc120-0xc13f]
Nov 24 08:47:38 localhost kernel: pci 0000:00:06.0: BAR 4 [mem 0xfe810000-0xfe813fff 64bit pref]
Nov 24 08:47:38 localhost kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Nov 24 08:47:38 localhost kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Nov 24 08:47:38 localhost kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Nov 24 08:47:38 localhost kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Nov 24 08:47:38 localhost kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Nov 24 08:47:38 localhost kernel: iommu: Default domain type: Translated
Nov 24 08:47:38 localhost kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Nov 24 08:47:38 localhost kernel: SCSI subsystem initialized
Nov 24 08:47:38 localhost kernel: ACPI: bus type USB registered
Nov 24 08:47:38 localhost kernel: usbcore: registered new interface driver usbfs
Nov 24 08:47:38 localhost kernel: usbcore: registered new interface driver hub
Nov 24 08:47:38 localhost kernel: usbcore: registered new device driver usb
Nov 24 08:47:38 localhost kernel: pps_core: LinuxPPS API ver. 1 registered
Nov 24 08:47:38 localhost kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
Nov 24 08:47:38 localhost kernel: PTP clock support registered
Nov 24 08:47:38 localhost kernel: EDAC MC: Ver: 3.0.0
Nov 24 08:47:38 localhost kernel: NetLabel: Initializing
Nov 24 08:47:38 localhost kernel: NetLabel:  domain hash size = 128
Nov 24 08:47:38 localhost kernel: NetLabel:  protocols = UNLABELED CIPSOv4 CALIPSO
Nov 24 08:47:38 localhost kernel: NetLabel:  unlabeled traffic allowed by default
Nov 24 08:47:38 localhost kernel: PCI: Using ACPI for IRQ routing
Nov 24 08:47:38 localhost kernel: PCI: pci_cache_line_size set to 64 bytes
Nov 24 08:47:38 localhost kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Nov 24 08:47:38 localhost kernel: e820: reserve RAM buffer [mem 0xbffdb000-0xbfffffff]
Nov 24 08:47:38 localhost kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Nov 24 08:47:38 localhost kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Nov 24 08:47:38 localhost kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Nov 24 08:47:38 localhost kernel: vgaarb: loaded
Nov 24 08:47:38 localhost kernel: clocksource: Switched to clocksource kvm-clock
Nov 24 08:47:38 localhost kernel: VFS: Disk quotas dquot_6.6.0
Nov 24 08:47:38 localhost kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Nov 24 08:47:38 localhost kernel: pnp: PnP ACPI init
Nov 24 08:47:38 localhost kernel: pnp 00:03: [dma 2]
Nov 24 08:47:38 localhost kernel: pnp: PnP ACPI: found 5 devices
Nov 24 08:47:38 localhost kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Nov 24 08:47:38 localhost kernel: NET: Registered PF_INET protocol family
Nov 24 08:47:38 localhost kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Nov 24 08:47:38 localhost kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Nov 24 08:47:38 localhost kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Nov 24 08:47:38 localhost kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Nov 24 08:47:38 localhost kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear)
Nov 24 08:47:38 localhost kernel: TCP: Hash tables configured (established 65536 bind 65536)
Nov 24 08:47:38 localhost kernel: MPTCP token hash table entries: 8192 (order: 5, 196608 bytes, linear)
Nov 24 08:47:38 localhost kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Nov 24 08:47:38 localhost kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Nov 24 08:47:38 localhost kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Nov 24 08:47:38 localhost kernel: NET: Registered PF_XDP protocol family
Nov 24 08:47:38 localhost kernel: pci_bus 0000:00: resource 4 [io  0x0000-0x0cf7 window]
Nov 24 08:47:38 localhost kernel: pci_bus 0000:00: resource 5 [io  0x0d00-0xffff window]
Nov 24 08:47:38 localhost kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Nov 24 08:47:38 localhost kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfffff window]
Nov 24 08:47:38 localhost kernel: pci_bus 0000:00: resource 8 [mem 0x240000000-0x2bfffffff window]
Nov 24 08:47:38 localhost kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Nov 24 08:47:38 localhost kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Nov 24 08:47:38 localhost kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Nov 24 08:47:38 localhost kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x160 took 74599 usecs
Nov 24 08:47:38 localhost kernel: PCI: CLS 0 bytes, default 64
Nov 24 08:47:38 localhost kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Nov 24 08:47:38 localhost kernel: software IO TLB: mapped [mem 0x00000000ab000000-0x00000000af000000] (64MB)
Nov 24 08:47:38 localhost kernel: ACPI: bus type thunderbolt registered
Nov 24 08:47:38 localhost kernel: Trying to unpack rootfs image as initramfs...
Nov 24 08:47:38 localhost kernel: Initialise system trusted keyrings
Nov 24 08:47:38 localhost kernel: Key type blacklist registered
Nov 24 08:47:38 localhost kernel: workingset: timestamp_bits=36 max_order=21 bucket_order=0
Nov 24 08:47:38 localhost kernel: zbud: loaded
Nov 24 08:47:38 localhost kernel: integrity: Platform Keyring initialized
Nov 24 08:47:38 localhost kernel: integrity: Machine keyring initialized
Nov 24 08:47:38 localhost kernel: Freeing initrd memory: 85868K
Nov 24 08:47:38 localhost kernel: NET: Registered PF_ALG protocol family
Nov 24 08:47:38 localhost kernel: xor: automatically using best checksumming function   avx       
Nov 24 08:47:38 localhost kernel: Key type asymmetric registered
Nov 24 08:47:38 localhost kernel: Asymmetric key parser 'x509' registered
Nov 24 08:47:38 localhost kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 246)
Nov 24 08:47:38 localhost kernel: io scheduler mq-deadline registered
Nov 24 08:47:38 localhost kernel: io scheduler kyber registered
Nov 24 08:47:38 localhost kernel: io scheduler bfq registered
Nov 24 08:47:38 localhost kernel: atomic64_test: passed for x86-64 platform with CX8 and with SSE
Nov 24 08:47:38 localhost kernel: shpchp: Standard Hot Plug PCI Controller Driver version: 0.4
Nov 24 08:47:38 localhost kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input0
Nov 24 08:47:38 localhost kernel: ACPI: button: Power Button [PWRF]
Nov 24 08:47:38 localhost kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Nov 24 08:47:38 localhost kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Nov 24 08:47:38 localhost kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Nov 24 08:47:38 localhost kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Nov 24 08:47:38 localhost kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Nov 24 08:47:38 localhost kernel: Non-volatile memory driver v1.3
Nov 24 08:47:38 localhost kernel: rdac: device handler registered
Nov 24 08:47:38 localhost kernel: hp_sw: device handler registered
Nov 24 08:47:38 localhost kernel: emc: device handler registered
Nov 24 08:47:38 localhost kernel: alua: device handler registered
Nov 24 08:47:38 localhost kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller
Nov 24 08:47:38 localhost kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
Nov 24 08:47:38 localhost kernel: uhci_hcd 0000:00:01.2: detected 2 ports
Nov 24 08:47:38 localhost kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c100
Nov 24 08:47:38 localhost kernel: usb usb1: New USB device found, idVendor=1d6b, idProduct=0001, bcdDevice= 5.14
Nov 24 08:47:38 localhost kernel: usb usb1: New USB device strings: Mfr=3, Product=2, SerialNumber=1
Nov 24 08:47:38 localhost kernel: usb usb1: Product: UHCI Host Controller
Nov 24 08:47:38 localhost kernel: usb usb1: Manufacturer: Linux 5.14.0-639.el9.x86_64 uhci_hcd
Nov 24 08:47:38 localhost kernel: usb usb1: SerialNumber: 0000:00:01.2
Nov 24 08:47:38 localhost kernel: hub 1-0:1.0: USB hub found
Nov 24 08:47:38 localhost kernel: hub 1-0:1.0: 2 ports detected
Nov 24 08:47:38 localhost kernel: usbcore: registered new interface driver usbserial_generic
Nov 24 08:47:38 localhost kernel: usbserial: USB Serial support registered for generic
Nov 24 08:47:38 localhost kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Nov 24 08:47:38 localhost kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Nov 24 08:47:38 localhost kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Nov 24 08:47:38 localhost kernel: mousedev: PS/2 mouse device common for all mice
Nov 24 08:47:38 localhost kernel: rtc_cmos 00:04: RTC can wake from S4
Nov 24 08:47:38 localhost kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
Nov 24 08:47:38 localhost kernel: rtc_cmos 00:04: registered as rtc0
Nov 24 08:47:38 localhost kernel: rtc_cmos 00:04: setting system clock to 2025-11-24T08:47:37 UTC (1763974057)
Nov 24 08:47:38 localhost kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Nov 24 08:47:38 localhost kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Nov 24 08:47:38 localhost kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input4
Nov 24 08:47:38 localhost kernel: hid: raw HID events driver (C) Jiri Kosina
Nov 24 08:47:38 localhost kernel: usbcore: registered new interface driver usbhid
Nov 24 08:47:38 localhost kernel: usbhid: USB HID core driver
Nov 24 08:47:38 localhost kernel: drop_monitor: Initializing network drop monitor service
Nov 24 08:47:38 localhost kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input3
Nov 24 08:47:38 localhost kernel: Initializing XFRM netlink socket
Nov 24 08:47:38 localhost kernel: NET: Registered PF_INET6 protocol family
Nov 24 08:47:38 localhost kernel: Segment Routing with IPv6
Nov 24 08:47:38 localhost kernel: NET: Registered PF_PACKET protocol family
Nov 24 08:47:38 localhost kernel: mpls_gso: MPLS GSO support
Nov 24 08:47:38 localhost kernel: IPI shorthand broadcast: enabled
Nov 24 08:47:38 localhost kernel: AVX2 version of gcm_enc/dec engaged.
Nov 24 08:47:38 localhost kernel: AES CTR mode by8 optimization enabled
Nov 24 08:47:38 localhost kernel: sched_clock: Marking stable (1159002040, 149605137)->(1418836053, -110228876)
Nov 24 08:47:38 localhost kernel: registered taskstats version 1
Nov 24 08:47:38 localhost kernel: Loading compiled-in X.509 certificates
Nov 24 08:47:38 localhost kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: f7751431c703da8a75244ce96aad68601cf1c188'
Nov 24 08:47:38 localhost kernel: Loaded X.509 cert 'Red Hat Enterprise Linux Driver Update Program (key 3): bf57f3e87362bc7229d9f465321773dfd1f77a80'
Nov 24 08:47:38 localhost kernel: Loaded X.509 cert 'Red Hat Enterprise Linux kpatch signing key: 4d38fd864ebe18c5f0b72e3852e2014c3a676fc8'
Nov 24 08:47:38 localhost kernel: Loaded X.509 cert 'RH-IMA-CA: Red Hat IMA CA: fb31825dd0e073685b264e3038963673f753959a'
Nov 24 08:47:38 localhost kernel: Loaded X.509 cert 'Nvidia GPU OOT signing 001: 55e1cef88193e60419f0b0ec379c49f77545acf0'
Nov 24 08:47:38 localhost kernel: Demotion targets for Node 0: null
Nov 24 08:47:38 localhost kernel: page_owner is disabled
Nov 24 08:47:38 localhost kernel: Key type .fscrypt registered
Nov 24 08:47:38 localhost kernel: Key type fscrypt-provisioning registered
Nov 24 08:47:38 localhost kernel: Key type big_key registered
Nov 24 08:47:38 localhost kernel: Key type encrypted registered
Nov 24 08:47:38 localhost kernel: ima: No TPM chip found, activating TPM-bypass!
Nov 24 08:47:38 localhost kernel: Loading compiled-in module X.509 certificates
Nov 24 08:47:38 localhost kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: f7751431c703da8a75244ce96aad68601cf1c188'
Nov 24 08:47:38 localhost kernel: ima: Allocated hash algorithm: sha256
Nov 24 08:47:38 localhost kernel: ima: No architecture policies found
Nov 24 08:47:38 localhost kernel: evm: Initialising EVM extended attributes:
Nov 24 08:47:38 localhost kernel: evm: security.selinux
Nov 24 08:47:38 localhost kernel: evm: security.SMACK64 (disabled)
Nov 24 08:47:38 localhost kernel: evm: security.SMACK64EXEC (disabled)
Nov 24 08:47:38 localhost kernel: evm: security.SMACK64TRANSMUTE (disabled)
Nov 24 08:47:38 localhost kernel: evm: security.SMACK64MMAP (disabled)
Nov 24 08:47:38 localhost kernel: evm: security.apparmor (disabled)
Nov 24 08:47:38 localhost kernel: evm: security.ima
Nov 24 08:47:38 localhost kernel: evm: security.capability
Nov 24 08:47:38 localhost kernel: evm: HMAC attrs: 0x1
Nov 24 08:47:38 localhost kernel: usb 1-1: new full-speed USB device number 2 using uhci_hcd
Nov 24 08:47:38 localhost kernel: Running certificate verification RSA selftest
Nov 24 08:47:38 localhost kernel: Loaded X.509 cert 'Certificate verification self-testing key: f58703bb33ce1b73ee02eccdee5b8817518fe3db'
Nov 24 08:47:38 localhost kernel: Running certificate verification ECDSA selftest
Nov 24 08:47:38 localhost kernel: Loaded X.509 cert 'Certificate verification ECDSA self-testing key: 2900bcea1deb7bc8479a84a23d758efdfdd2b2d3'
Nov 24 08:47:38 localhost kernel: clk: Disabling unused clocks
Nov 24 08:47:38 localhost kernel: Freeing unused decrypted memory: 2028K
Nov 24 08:47:38 localhost kernel: Freeing unused kernel image (initmem) memory: 4188K
Nov 24 08:47:38 localhost kernel: Write protecting the kernel read-only data: 30720k
Nov 24 08:47:38 localhost kernel: Freeing unused kernel image (rodata/data gap) memory: 436K
Nov 24 08:47:38 localhost kernel: x86/mm: Checked W+X mappings: passed, no W+X pages found.
Nov 24 08:47:38 localhost kernel: Run /init as init process
Nov 24 08:47:38 localhost kernel:   with arguments:
Nov 24 08:47:38 localhost kernel:     /init
Nov 24 08:47:38 localhost kernel:   with environment:
Nov 24 08:47:38 localhost kernel:     HOME=/
Nov 24 08:47:38 localhost kernel:     TERM=linux
Nov 24 08:47:38 localhost kernel:     BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-639.el9.x86_64
Nov 24 08:47:38 localhost systemd[1]: systemd 252-59.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Nov 24 08:47:38 localhost systemd[1]: Detected virtualization kvm.
Nov 24 08:47:38 localhost systemd[1]: Detected architecture x86-64.
Nov 24 08:47:38 localhost systemd[1]: Running in initrd.
Nov 24 08:47:38 localhost systemd[1]: No hostname configured, using default hostname.
Nov 24 08:47:38 localhost systemd[1]: Hostname set to <localhost>.
Nov 24 08:47:38 localhost systemd[1]: Initializing machine ID from VM UUID.
Nov 24 08:47:38 localhost kernel: usb 1-1: New USB device found, idVendor=0627, idProduct=0001, bcdDevice= 0.00
Nov 24 08:47:38 localhost kernel: usb 1-1: New USB device strings: Mfr=1, Product=3, SerialNumber=10
Nov 24 08:47:38 localhost kernel: usb 1-1: Product: QEMU USB Tablet
Nov 24 08:47:38 localhost kernel: usb 1-1: Manufacturer: QEMU
Nov 24 08:47:38 localhost kernel: usb 1-1: SerialNumber: 28754-0000:00:01.2-1
Nov 24 08:47:38 localhost kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:01.2/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input5
Nov 24 08:47:38 localhost kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:00:01.2-1/input0
Nov 24 08:47:38 localhost systemd[1]: Queued start job for default target Initrd Default Target.
Nov 24 08:47:38 localhost systemd[1]: Started Dispatch Password Requests to Console Directory Watch.
Nov 24 08:47:38 localhost systemd[1]: Reached target Local Encrypted Volumes.
Nov 24 08:47:38 localhost systemd[1]: Reached target Initrd /usr File System.
Nov 24 08:47:38 localhost systemd[1]: Reached target Local File Systems.
Nov 24 08:47:38 localhost systemd[1]: Reached target Path Units.
Nov 24 08:47:38 localhost systemd[1]: Reached target Slice Units.
Nov 24 08:47:38 localhost systemd[1]: Reached target Swaps.
Nov 24 08:47:38 localhost systemd[1]: Reached target Timer Units.
Nov 24 08:47:38 localhost systemd[1]: Listening on D-Bus System Message Bus Socket.
Nov 24 08:47:38 localhost systemd[1]: Listening on Journal Socket (/dev/log).
Nov 24 08:47:38 localhost systemd[1]: Listening on Journal Socket.
Nov 24 08:47:38 localhost systemd[1]: Listening on udev Control Socket.
Nov 24 08:47:38 localhost systemd[1]: Listening on udev Kernel Socket.
Nov 24 08:47:38 localhost systemd[1]: Reached target Socket Units.
Nov 24 08:47:38 localhost systemd[1]: Starting Create List of Static Device Nodes...
Nov 24 08:47:38 localhost systemd[1]: Starting Journal Service...
Nov 24 08:47:38 localhost systemd[1]: Load Kernel Modules was skipped because no trigger condition checks were met.
Nov 24 08:47:38 localhost systemd[1]: Starting Apply Kernel Variables...
Nov 24 08:47:38 localhost systemd[1]: Starting Create System Users...
Nov 24 08:47:38 localhost systemd[1]: Starting Setup Virtual Console...
Nov 24 08:47:38 localhost systemd[1]: Finished Create List of Static Device Nodes.
Nov 24 08:47:38 localhost systemd[1]: Finished Apply Kernel Variables.
Nov 24 08:47:38 localhost systemd[1]: Finished Create System Users.
Nov 24 08:47:38 localhost systemd[1]: Starting Create Static Device Nodes in /dev...
Nov 24 08:47:38 localhost systemd-journald[307]: Journal started
Nov 24 08:47:38 localhost systemd-journald[307]: Runtime Journal (/run/log/journal/719139db46ba4050a77b5fa732a73807) is 8.0M, max 153.6M, 145.6M free.
Nov 24 08:47:38 localhost systemd-sysusers[311]: Creating group 'users' with GID 100.
Nov 24 08:47:38 localhost systemd-sysusers[311]: Creating group 'dbus' with GID 81.
Nov 24 08:47:38 localhost systemd-sysusers[311]: Creating user 'dbus' (System Message Bus) with UID 81 and GID 81.
Nov 24 08:47:38 localhost systemd[1]: Started Journal Service.
Nov 24 08:47:38 localhost systemd[1]: Starting Create Volatile Files and Directories...
Nov 24 08:47:38 localhost systemd[1]: Finished Create Static Device Nodes in /dev.
Nov 24 08:47:38 localhost systemd[1]: Finished Create Volatile Files and Directories.
Nov 24 08:47:38 localhost systemd[1]: Finished Setup Virtual Console.
Nov 24 08:47:38 localhost systemd[1]: dracut ask for additional cmdline parameters was skipped because no trigger condition checks were met.
Nov 24 08:47:38 localhost systemd[1]: Starting dracut cmdline hook...
Nov 24 08:47:38 localhost dracut-cmdline[325]: dracut-9 dracut-057-102.git20250818.el9
Nov 24 08:47:38 localhost dracut-cmdline[325]: Using kernel command line parameters:    BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-639.el9.x86_64 root=UUID=47e3724e-7a1b-439a-9543-b98c9a290709 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Nov 24 08:47:38 localhost systemd[1]: Finished dracut cmdline hook.
Nov 24 08:47:38 localhost systemd[1]: Starting dracut pre-udev hook...
Nov 24 08:47:38 localhost kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Nov 24 08:47:38 localhost kernel: device-mapper: uevent: version 1.0.3
Nov 24 08:47:38 localhost kernel: device-mapper: ioctl: 4.50.0-ioctl (2025-04-28) initialised: dm-devel@lists.linux.dev
Nov 24 08:47:38 localhost kernel: RPC: Registered named UNIX socket transport module.
Nov 24 08:47:38 localhost kernel: RPC: Registered udp transport module.
Nov 24 08:47:38 localhost kernel: RPC: Registered tcp transport module.
Nov 24 08:47:38 localhost kernel: RPC: Registered tcp-with-tls transport module.
Nov 24 08:47:38 localhost kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Nov 24 08:47:38 localhost rpc.statd[441]: Version 2.5.4 starting
Nov 24 08:47:38 localhost rpc.statd[441]: Initializing NSM state
Nov 24 08:47:38 localhost rpc.idmapd[446]: Setting log level to 0
Nov 24 08:47:38 localhost systemd[1]: Finished dracut pre-udev hook.
Nov 24 08:47:38 localhost systemd[1]: Starting Rule-based Manager for Device Events and Files...
Nov 24 08:47:38 localhost systemd-udevd[459]: Using default interface naming scheme 'rhel-9.0'.
Nov 24 08:47:38 localhost systemd[1]: Started Rule-based Manager for Device Events and Files.
Nov 24 08:47:38 localhost systemd[1]: Starting dracut pre-trigger hook...
Nov 24 08:47:38 localhost systemd[1]: Finished dracut pre-trigger hook.
Nov 24 08:47:38 localhost systemd[1]: Starting Coldplug All udev Devices...
Nov 24 08:47:38 localhost systemd[1]: Created slice Slice /system/modprobe.
Nov 24 08:47:38 localhost systemd[1]: Starting Load Kernel Module configfs...
Nov 24 08:47:38 localhost systemd[1]: Finished Coldplug All udev Devices.
Nov 24 08:47:38 localhost systemd[1]: modprobe@configfs.service: Deactivated successfully.
Nov 24 08:47:38 localhost systemd[1]: Finished Load Kernel Module configfs.
Nov 24 08:47:38 localhost systemd[1]: nm-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Nov 24 08:47:38 localhost systemd[1]: Reached target Network.
Nov 24 08:47:38 localhost systemd[1]: nm-wait-online-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Nov 24 08:47:38 localhost systemd[1]: Starting dracut initqueue hook...
Nov 24 08:47:38 localhost kernel: virtio_blk virtio2: 8/0/0 default/read/poll queues
Nov 24 08:47:38 localhost kernel: virtio_blk virtio2: [vda] 167772160 512-byte logical blocks (85.9 GB/80.0 GiB)
Nov 24 08:47:38 localhost kernel:  vda: vda1
Nov 24 08:47:38 localhost kernel: libata version 3.00 loaded.
Nov 24 08:47:38 localhost kernel: ata_piix 0000:00:01.1: version 2.13
Nov 24 08:47:38 localhost kernel: scsi host0: ata_piix
Nov 24 08:47:38 localhost kernel: scsi host1: ata_piix
Nov 24 08:47:38 localhost kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc140 irq 14 lpm-pol 0
Nov 24 08:47:38 localhost kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc148 irq 15 lpm-pol 0
Nov 24 08:47:38 localhost systemd[1]: Found device /dev/disk/by-uuid/47e3724e-7a1b-439a-9543-b98c9a290709.
Nov 24 08:47:38 localhost systemd[1]: Reached target Initrd Root Device.
Nov 24 08:47:39 localhost kernel: ata1: found unknown device (class 0)
Nov 24 08:47:39 localhost kernel: ata1.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Nov 24 08:47:39 localhost kernel: scsi 0:0:0:0: CD-ROM            QEMU     QEMU DVD-ROM     2.5+ PQ: 0 ANSI: 5
Nov 24 08:47:39 localhost systemd-udevd[488]: Network interface NamePolicy= disabled on kernel command line.
Nov 24 08:47:39 localhost kernel: scsi 0:0:0:0: Attached scsi generic sg0 type 5
Nov 24 08:47:39 localhost kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Nov 24 08:47:39 localhost kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Nov 24 08:47:39 localhost systemd[1]: Mounting Kernel Configuration File System...
Nov 24 08:47:39 localhost systemd[1]: Mounted Kernel Configuration File System.
Nov 24 08:47:39 localhost systemd[1]: Reached target System Initialization.
Nov 24 08:47:39 localhost systemd[1]: Reached target Basic System.
Nov 24 08:47:39 localhost kernel: sr 0:0:0:0: Attached scsi CD-ROM sr0
Nov 24 08:47:39 localhost systemd[1]: Finished dracut initqueue hook.
Nov 24 08:47:39 localhost systemd[1]: Reached target Preparation for Remote File Systems.
Nov 24 08:47:39 localhost systemd[1]: Reached target Remote Encrypted Volumes.
Nov 24 08:47:39 localhost systemd[1]: Reached target Remote File Systems.
Nov 24 08:47:39 localhost systemd[1]: Starting dracut pre-mount hook...
Nov 24 08:47:39 localhost systemd[1]: Finished dracut pre-mount hook.
Nov 24 08:47:39 localhost systemd[1]: Starting File System Check on /dev/disk/by-uuid/47e3724e-7a1b-439a-9543-b98c9a290709...
Nov 24 08:47:39 localhost systemd-fsck[557]: /usr/sbin/fsck.xfs: XFS file system.
Nov 24 08:47:39 localhost systemd[1]: Finished File System Check on /dev/disk/by-uuid/47e3724e-7a1b-439a-9543-b98c9a290709.
Nov 24 08:47:39 localhost systemd[1]: Mounting /sysroot...
Nov 24 08:47:39 localhost kernel: SGI XFS with ACLs, security attributes, scrub, quota, no debug enabled
Nov 24 08:47:39 localhost kernel: XFS (vda1): Mounting V5 Filesystem 47e3724e-7a1b-439a-9543-b98c9a290709
Nov 24 08:47:39 localhost kernel: XFS (vda1): Ending clean mount
Nov 24 08:47:39 localhost systemd[1]: Mounted /sysroot.
Nov 24 08:47:39 localhost systemd[1]: Reached target Initrd Root File System.
Nov 24 08:47:39 localhost systemd[1]: Starting Mountpoints Configured in the Real Root...
Nov 24 08:47:39 localhost systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Nov 24 08:47:39 localhost systemd[1]: Finished Mountpoints Configured in the Real Root.
Nov 24 08:47:39 localhost systemd[1]: Reached target Initrd File Systems.
Nov 24 08:47:39 localhost systemd[1]: Reached target Initrd Default Target.
Nov 24 08:47:39 localhost systemd[1]: Starting dracut mount hook...
Nov 24 08:47:39 localhost systemd[1]: Finished dracut mount hook.
Nov 24 08:47:39 localhost systemd[1]: Starting dracut pre-pivot and cleanup hook...
Nov 24 08:47:39 localhost rpc.idmapd[446]: exiting on signal 15
Nov 24 08:47:39 localhost systemd[1]: var-lib-nfs-rpc_pipefs.mount: Deactivated successfully.
Nov 24 08:47:39 localhost systemd[1]: Finished dracut pre-pivot and cleanup hook.
Nov 24 08:47:39 localhost systemd[1]: Starting Cleaning Up and Shutting Down Daemons...
Nov 24 08:47:39 localhost systemd[1]: Stopped target Network.
Nov 24 08:47:39 localhost systemd[1]: Stopped target Remote Encrypted Volumes.
Nov 24 08:47:39 localhost systemd[1]: Stopped target Timer Units.
Nov 24 08:47:39 localhost systemd[1]: dbus.socket: Deactivated successfully.
Nov 24 08:47:39 localhost systemd[1]: Closed D-Bus System Message Bus Socket.
Nov 24 08:47:39 localhost systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Nov 24 08:47:39 localhost systemd[1]: Stopped dracut pre-pivot and cleanup hook.
Nov 24 08:47:39 localhost systemd[1]: Stopped target Initrd Default Target.
Nov 24 08:47:39 localhost systemd[1]: Stopped target Basic System.
Nov 24 08:47:39 localhost systemd[1]: Stopped target Initrd Root Device.
Nov 24 08:47:39 localhost systemd[1]: Stopped target Initrd /usr File System.
Nov 24 08:47:39 localhost systemd[1]: Stopped target Path Units.
Nov 24 08:47:39 localhost systemd[1]: Stopped target Remote File Systems.
Nov 24 08:47:39 localhost systemd[1]: Stopped target Preparation for Remote File Systems.
Nov 24 08:47:39 localhost systemd[1]: Stopped target Slice Units.
Nov 24 08:47:39 localhost systemd[1]: Stopped target Socket Units.
Nov 24 08:47:39 localhost systemd[1]: Stopped target System Initialization.
Nov 24 08:47:39 localhost systemd[1]: Stopped target Local File Systems.
Nov 24 08:47:39 localhost systemd[1]: Stopped target Swaps.
Nov 24 08:47:39 localhost systemd[1]: dracut-mount.service: Deactivated successfully.
Nov 24 08:47:39 localhost systemd[1]: Stopped dracut mount hook.
Nov 24 08:47:39 localhost systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Nov 24 08:47:39 localhost systemd[1]: Stopped dracut pre-mount hook.
Nov 24 08:47:39 localhost systemd[1]: Stopped target Local Encrypted Volumes.
Nov 24 08:47:39 localhost systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Nov 24 08:47:39 localhost systemd[1]: Stopped Dispatch Password Requests to Console Directory Watch.
Nov 24 08:47:39 localhost systemd[1]: dracut-initqueue.service: Deactivated successfully.
Nov 24 08:47:39 localhost systemd[1]: Stopped dracut initqueue hook.
Nov 24 08:47:39 localhost systemd[1]: systemd-sysctl.service: Deactivated successfully.
Nov 24 08:47:39 localhost systemd[1]: Stopped Apply Kernel Variables.
Nov 24 08:47:39 localhost systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Nov 24 08:47:39 localhost systemd[1]: Stopped Create Volatile Files and Directories.
Nov 24 08:47:39 localhost systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Nov 24 08:47:39 localhost systemd[1]: Stopped Coldplug All udev Devices.
Nov 24 08:47:39 localhost systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Nov 24 08:47:39 localhost systemd[1]: Stopped dracut pre-trigger hook.
Nov 24 08:47:39 localhost systemd[1]: Stopping Rule-based Manager for Device Events and Files...
Nov 24 08:47:39 localhost systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 24 08:47:39 localhost systemd[1]: Stopped Setup Virtual Console.
Nov 24 08:47:39 localhost systemd[1]: initrd-cleanup.service: Deactivated successfully.
Nov 24 08:47:39 localhost systemd[1]: Finished Cleaning Up and Shutting Down Daemons.
Nov 24 08:47:39 localhost systemd[1]: systemd-udevd.service: Deactivated successfully.
Nov 24 08:47:39 localhost systemd[1]: Stopped Rule-based Manager for Device Events and Files.
Nov 24 08:47:39 localhost systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Nov 24 08:47:39 localhost systemd[1]: Closed udev Control Socket.
Nov 24 08:47:39 localhost systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Nov 24 08:47:39 localhost systemd[1]: Closed udev Kernel Socket.
Nov 24 08:47:39 localhost systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Nov 24 08:47:39 localhost systemd[1]: Stopped dracut pre-udev hook.
Nov 24 08:47:39 localhost systemd[1]: dracut-cmdline.service: Deactivated successfully.
Nov 24 08:47:39 localhost systemd[1]: Stopped dracut cmdline hook.
Nov 24 08:47:39 localhost systemd[1]: Starting Cleanup udev Database...
Nov 24 08:47:39 localhost systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Nov 24 08:47:39 localhost systemd[1]: Stopped Create Static Device Nodes in /dev.
Nov 24 08:47:39 localhost systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Nov 24 08:47:39 localhost systemd[1]: Stopped Create List of Static Device Nodes.
Nov 24 08:47:39 localhost systemd[1]: systemd-sysusers.service: Deactivated successfully.
Nov 24 08:47:39 localhost systemd[1]: Stopped Create System Users.
Nov 24 08:47:39 localhost systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Nov 24 08:47:39 localhost systemd[1]: Finished Cleanup udev Database.
Nov 24 08:47:39 localhost systemd[1]: Reached target Switch Root.
Nov 24 08:47:39 localhost systemd[1]: Starting Switch Root...
Nov 24 08:47:40 localhost systemd[1]: Switching root.
Nov 24 08:47:40 localhost systemd-journald[307]: Journal stopped
Nov 24 08:47:40 localhost systemd-journald[307]: Received SIGTERM from PID 1 (systemd).
Nov 24 08:47:40 localhost kernel: audit: type=1404 audit(1763974060.191:2): enforcing=1 old_enforcing=0 auid=4294967295 ses=4294967295 enabled=1 old-enabled=1 lsm=selinux res=1
Nov 24 08:47:40 localhost kernel: SELinux:  policy capability network_peer_controls=1
Nov 24 08:47:40 localhost kernel: SELinux:  policy capability open_perms=1
Nov 24 08:47:40 localhost kernel: SELinux:  policy capability extended_socket_class=1
Nov 24 08:47:40 localhost kernel: SELinux:  policy capability always_check_network=0
Nov 24 08:47:40 localhost kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 24 08:47:40 localhost kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 24 08:47:40 localhost kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 24 08:47:40 localhost kernel: audit: type=1403 audit(1763974060.376:3): auid=4294967295 ses=4294967295 lsm=selinux res=1
Nov 24 08:47:40 localhost systemd[1]: Successfully loaded SELinux policy in 190.680ms.
Nov 24 08:47:40 localhost systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 39.903ms.
Nov 24 08:47:40 localhost systemd[1]: systemd 252-59.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Nov 24 08:47:40 localhost systemd[1]: Detected virtualization kvm.
Nov 24 08:47:40 localhost systemd[1]: Detected architecture x86-64.
Nov 24 08:47:40 localhost systemd-rc-local-generator[638]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 08:47:40 localhost systemd[1]: initrd-switch-root.service: Deactivated successfully.
Nov 24 08:47:40 localhost systemd[1]: Stopped Switch Root.
Nov 24 08:47:40 localhost systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Nov 24 08:47:40 localhost systemd[1]: Created slice Slice /system/getty.
Nov 24 08:47:40 localhost systemd[1]: Created slice Slice /system/serial-getty.
Nov 24 08:47:40 localhost systemd[1]: Created slice Slice /system/sshd-keygen.
Nov 24 08:47:40 localhost systemd[1]: Created slice User and Session Slice.
Nov 24 08:47:40 localhost systemd[1]: Started Dispatch Password Requests to Console Directory Watch.
Nov 24 08:47:40 localhost systemd[1]: Started Forward Password Requests to Wall Directory Watch.
Nov 24 08:47:40 localhost systemd[1]: Set up automount Arbitrary Executable File Formats File System Automount Point.
Nov 24 08:47:40 localhost systemd[1]: Reached target Local Encrypted Volumes.
Nov 24 08:47:40 localhost systemd[1]: Stopped target Switch Root.
Nov 24 08:47:40 localhost systemd[1]: Stopped target Initrd File Systems.
Nov 24 08:47:40 localhost systemd[1]: Stopped target Initrd Root File System.
Nov 24 08:47:40 localhost systemd[1]: Reached target Local Integrity Protected Volumes.
Nov 24 08:47:40 localhost systemd[1]: Reached target Path Units.
Nov 24 08:47:40 localhost systemd[1]: Reached target rpc_pipefs.target.
Nov 24 08:47:40 localhost systemd[1]: Reached target Slice Units.
Nov 24 08:47:40 localhost systemd[1]: Reached target Swaps.
Nov 24 08:47:40 localhost systemd[1]: Reached target Local Verity Protected Volumes.
Nov 24 08:47:40 localhost systemd[1]: Listening on RPCbind Server Activation Socket.
Nov 24 08:47:40 localhost systemd[1]: Reached target RPC Port Mapper.
Nov 24 08:47:40 localhost systemd[1]: Listening on Process Core Dump Socket.
Nov 24 08:47:40 localhost systemd[1]: Listening on initctl Compatibility Named Pipe.
Nov 24 08:47:40 localhost systemd[1]: Listening on udev Control Socket.
Nov 24 08:47:40 localhost systemd[1]: Listening on udev Kernel Socket.
Nov 24 08:47:40 localhost systemd[1]: Mounting Huge Pages File System...
Nov 24 08:47:40 localhost systemd[1]: Mounting POSIX Message Queue File System...
Nov 24 08:47:40 localhost systemd[1]: Mounting Kernel Debug File System...
Nov 24 08:47:40 localhost systemd[1]: Mounting Kernel Trace File System...
Nov 24 08:47:40 localhost systemd[1]: Kernel Module supporting RPCSEC_GSS was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Nov 24 08:47:40 localhost systemd[1]: Starting Create List of Static Device Nodes...
Nov 24 08:47:40 localhost systemd[1]: Starting Load Kernel Module configfs...
Nov 24 08:47:40 localhost systemd[1]: Starting Load Kernel Module drm...
Nov 24 08:47:40 localhost systemd[1]: Starting Load Kernel Module efi_pstore...
Nov 24 08:47:40 localhost systemd[1]: Starting Load Kernel Module fuse...
Nov 24 08:47:40 localhost systemd[1]: Starting Read and set NIS domainname from /etc/sysconfig/network...
Nov 24 08:47:40 localhost systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Nov 24 08:47:40 localhost systemd[1]: Stopped File System Check on Root Device.
Nov 24 08:47:40 localhost systemd[1]: Stopped Journal Service.
Nov 24 08:47:40 localhost systemd[1]: Starting Journal Service...
Nov 24 08:47:40 localhost systemd[1]: Load Kernel Modules was skipped because no trigger condition checks were met.
Nov 24 08:47:40 localhost systemd[1]: Starting Generate network units from Kernel command line...
Nov 24 08:47:40 localhost systemd[1]: TPM2 PCR Machine ID Measurement was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Nov 24 08:47:40 localhost systemd[1]: Starting Remount Root and Kernel File Systems...
Nov 24 08:47:40 localhost systemd[1]: Repartition Root Disk was skipped because no trigger condition checks were met.
Nov 24 08:47:40 localhost systemd[1]: Starting Apply Kernel Variables...
Nov 24 08:47:40 localhost systemd[1]: Starting Coldplug All udev Devices...
Nov 24 08:47:40 localhost kernel: fuse: init (API version 7.37)
Nov 24 08:47:40 localhost systemd[1]: Mounted Huge Pages File System.
Nov 24 08:47:40 localhost systemd[1]: Mounted POSIX Message Queue File System.
Nov 24 08:47:40 localhost systemd-journald[679]: Journal started
Nov 24 08:47:40 localhost systemd-journald[679]: Runtime Journal (/run/log/journal/fee38d0f94bf6f4b17ec77ba536bd6ab) is 8.0M, max 153.6M, 145.6M free.
Nov 24 08:47:40 localhost systemd[1]: Queued start job for default target Multi-User System.
Nov 24 08:47:40 localhost systemd[1]: systemd-journald.service: Deactivated successfully.
Nov 24 08:47:40 localhost systemd[1]: Started Journal Service.
Nov 24 08:47:40 localhost kernel: xfs filesystem being remounted at / supports timestamps until 2038 (0x7fffffff)
Nov 24 08:47:40 localhost systemd[1]: Mounted Kernel Debug File System.
Nov 24 08:47:40 localhost systemd[1]: Mounted Kernel Trace File System.
Nov 24 08:47:40 localhost systemd[1]: Finished Create List of Static Device Nodes.
Nov 24 08:47:40 localhost kernel: ACPI: bus type drm_connector registered
Nov 24 08:47:40 localhost systemd[1]: modprobe@configfs.service: Deactivated successfully.
Nov 24 08:47:40 localhost systemd[1]: Finished Load Kernel Module configfs.
Nov 24 08:47:40 localhost systemd[1]: modprobe@drm.service: Deactivated successfully.
Nov 24 08:47:40 localhost systemd[1]: Finished Load Kernel Module drm.
Nov 24 08:47:40 localhost systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Nov 24 08:47:40 localhost systemd[1]: Finished Load Kernel Module efi_pstore.
Nov 24 08:47:40 localhost systemd[1]: modprobe@fuse.service: Deactivated successfully.
Nov 24 08:47:40 localhost systemd[1]: Finished Load Kernel Module fuse.
Nov 24 08:47:40 localhost systemd[1]: Finished Read and set NIS domainname from /etc/sysconfig/network.
Nov 24 08:47:40 localhost systemd[1]: Finished Generate network units from Kernel command line.
Nov 24 08:47:40 localhost systemd[1]: Finished Remount Root and Kernel File Systems.
Nov 24 08:47:40 localhost systemd[1]: Finished Apply Kernel Variables.
Nov 24 08:47:41 localhost systemd[1]: Mounting FUSE Control File System...
Nov 24 08:47:41 localhost systemd[1]: First Boot Wizard was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Nov 24 08:47:41 localhost systemd[1]: Starting Rebuild Hardware Database...
Nov 24 08:47:41 localhost systemd[1]: Starting Flush Journal to Persistent Storage...
Nov 24 08:47:41 localhost systemd[1]: Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Nov 24 08:47:41 localhost systemd[1]: Starting Load/Save OS Random Seed...
Nov 24 08:47:41 localhost systemd[1]: Starting Create System Users...
Nov 24 08:47:41 localhost systemd[1]: Mounted FUSE Control File System.
Nov 24 08:47:41 localhost systemd-journald[679]: Runtime Journal (/run/log/journal/fee38d0f94bf6f4b17ec77ba536bd6ab) is 8.0M, max 153.6M, 145.6M free.
Nov 24 08:47:41 localhost systemd-journald[679]: Received client request to flush runtime journal.
Nov 24 08:47:41 localhost systemd[1]: Finished Flush Journal to Persistent Storage.
Nov 24 08:47:41 localhost systemd[1]: Finished Load/Save OS Random Seed.
Nov 24 08:47:41 localhost systemd[1]: First Boot Complete was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Nov 24 08:47:41 localhost systemd[1]: Finished Coldplug All udev Devices.
Nov 24 08:47:41 localhost systemd[1]: Finished Create System Users.
Nov 24 08:47:41 localhost systemd[1]: Starting Create Static Device Nodes in /dev...
Nov 24 08:47:41 localhost systemd[1]: Finished Create Static Device Nodes in /dev.
Nov 24 08:47:41 localhost systemd[1]: Reached target Preparation for Local File Systems.
Nov 24 08:47:41 localhost systemd[1]: Reached target Local File Systems.
Nov 24 08:47:41 localhost systemd[1]: Starting Rebuild Dynamic Linker Cache...
Nov 24 08:47:41 localhost systemd[1]: Mark the need to relabel after reboot was skipped because of an unmet condition check (ConditionSecurity=!selinux).
Nov 24 08:47:41 localhost systemd[1]: Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 24 08:47:41 localhost systemd[1]: Update Boot Loader Random Seed was skipped because no trigger condition checks were met.
Nov 24 08:47:41 localhost systemd[1]: Starting Automatic Boot Loader Update...
Nov 24 08:47:41 localhost systemd[1]: Commit a transient machine-id on disk was skipped because of an unmet condition check (ConditionPathIsMountPoint=/etc/machine-id).
Nov 24 08:47:41 localhost systemd[1]: Starting Create Volatile Files and Directories...
Nov 24 08:47:41 localhost bootctl[697]: Couldn't find EFI system partition, skipping.
Nov 24 08:47:41 localhost systemd[1]: Finished Automatic Boot Loader Update.
Nov 24 08:47:41 localhost systemd[1]: Finished Create Volatile Files and Directories.
Nov 24 08:47:41 localhost systemd[1]: Starting Security Auditing Service...
Nov 24 08:47:41 localhost systemd[1]: Starting RPC Bind...
Nov 24 08:47:41 localhost systemd[1]: Starting Rebuild Journal Catalog...
Nov 24 08:47:41 localhost auditd[703]: audit dispatcher initialized with q_depth=2000 and 1 active plugins
Nov 24 08:47:41 localhost auditd[703]: Init complete, auditd 3.1.5 listening for events (startup state enable)
Nov 24 08:47:41 localhost systemd[1]: Finished Rebuild Journal Catalog.
Nov 24 08:47:41 localhost systemd[1]: Started RPC Bind.
Nov 24 08:47:41 localhost augenrules[708]: /sbin/augenrules: No change
Nov 24 08:47:41 localhost augenrules[723]: No rules
Nov 24 08:47:41 localhost augenrules[723]: enabled 1
Nov 24 08:47:41 localhost augenrules[723]: failure 1
Nov 24 08:47:41 localhost augenrules[723]: pid 703
Nov 24 08:47:41 localhost augenrules[723]: rate_limit 0
Nov 24 08:47:41 localhost augenrules[723]: backlog_limit 8192
Nov 24 08:47:41 localhost augenrules[723]: lost 0
Nov 24 08:47:41 localhost augenrules[723]: backlog 0
Nov 24 08:47:41 localhost augenrules[723]: backlog_wait_time 60000
Nov 24 08:47:41 localhost augenrules[723]: backlog_wait_time_actual 0
Nov 24 08:47:41 localhost systemd[1]: Started Security Auditing Service.
Nov 24 08:47:41 localhost systemd[1]: Starting Record System Boot/Shutdown in UTMP...
Nov 24 08:47:41 localhost systemd[1]: Finished Record System Boot/Shutdown in UTMP.
Nov 24 08:47:41 localhost systemd[1]: Finished Rebuild Hardware Database.
Nov 24 08:47:41 localhost systemd[1]: Starting Rule-based Manager for Device Events and Files...
Nov 24 08:47:41 localhost systemd-udevd[731]: Using default interface naming scheme 'rhel-9.0'.
Nov 24 08:47:41 localhost systemd[1]: Started Rule-based Manager for Device Events and Files.
Nov 24 08:47:41 localhost systemd[1]: Starting Load Kernel Module configfs...
Nov 24 08:47:41 localhost systemd[1]: modprobe@configfs.service: Deactivated successfully.
Nov 24 08:47:41 localhost systemd[1]: Finished Load Kernel Module configfs.
Nov 24 08:47:41 localhost systemd[1]: Finished Rebuild Dynamic Linker Cache.
Nov 24 08:47:41 localhost systemd[1]: Starting Update is Completed...
Nov 24 08:47:41 localhost systemd[1]: Condition check resulted in /dev/ttyS0 being skipped.
Nov 24 08:47:41 localhost systemd[1]: Finished Update is Completed.
Nov 24 08:47:41 localhost systemd-udevd[732]: Network interface NamePolicy= disabled on kernel command line.
Nov 24 08:47:41 localhost kernel: input: PC Speaker as /devices/platform/pcspkr/input/input6
Nov 24 08:47:41 localhost kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
Nov 24 08:47:41 localhost kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Nov 24 08:47:41 localhost kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Nov 24 08:47:41 localhost kernel: [drm] pci: virtio-vga detected at 0000:00:02.0
Nov 24 08:47:41 localhost kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console
Nov 24 08:47:41 localhost kernel: Console: switching to colour dummy device 80x25
Nov 24 08:47:41 localhost kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Nov 24 08:47:41 localhost kernel: [drm] features: -context_init
Nov 24 08:47:41 localhost kernel: [drm] number of scanouts: 1
Nov 24 08:47:41 localhost kernel: [drm] number of cap sets: 0
Nov 24 08:47:41 localhost kernel: [drm] Initialized virtio_gpu 0.1.0 for 0000:00:02.0 on minor 0
Nov 24 08:47:41 localhost kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
Nov 24 08:47:41 localhost kernel: Console: switching to colour frame buffer device 128x48
Nov 24 08:47:41 localhost kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Nov 24 08:47:41 localhost kernel: kvm_amd: TSC scaling supported
Nov 24 08:47:41 localhost kernel: kvm_amd: Nested Virtualization enabled
Nov 24 08:47:41 localhost kernel: kvm_amd: Nested Paging enabled
Nov 24 08:47:41 localhost kernel: kvm_amd: LBR virtualization supported
Nov 24 08:47:41 localhost systemd[1]: Reached target System Initialization.
Nov 24 08:47:41 localhost systemd[1]: Started dnf makecache --timer.
Nov 24 08:47:41 localhost systemd[1]: Started Daily rotation of log files.
Nov 24 08:47:41 localhost systemd[1]: Started Daily Cleanup of Temporary Directories.
Nov 24 08:47:41 localhost systemd[1]: Reached target Timer Units.
Nov 24 08:47:41 localhost systemd[1]: Listening on D-Bus System Message Bus Socket.
Nov 24 08:47:41 localhost systemd[1]: Listening on SSSD Kerberos Cache Manager responder socket.
Nov 24 08:47:41 localhost systemd[1]: Reached target Socket Units.
Nov 24 08:47:41 localhost systemd[1]: Starting D-Bus System Message Bus...
Nov 24 08:47:42 localhost systemd[1]: TPM2 PCR Barrier (Initialization) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Nov 24 08:47:42 localhost systemd[1]: Started D-Bus System Message Bus.
Nov 24 08:47:42 localhost dbus-broker-lau[791]: Ready
Nov 24 08:47:42 localhost systemd[1]: Reached target Basic System.
Nov 24 08:47:42 localhost systemd[1]: Starting NTP client/server...
Nov 24 08:47:42 localhost systemd[1]: Starting Cloud-init: Local Stage (pre-network)...
Nov 24 08:47:42 localhost systemd[1]: Starting Restore /run/initramfs on shutdown...
Nov 24 08:47:42 localhost systemd[1]: Starting IPv4 firewall with iptables...
Nov 24 08:47:42 localhost systemd[1]: Started irqbalance daemon.
Nov 24 08:47:42 localhost systemd[1]: Load CPU microcode update was skipped because of an unmet condition check (ConditionPathExists=/sys/devices/system/cpu/microcode/reload).
Nov 24 08:47:42 localhost systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Nov 24 08:47:42 localhost systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Nov 24 08:47:42 localhost systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Nov 24 08:47:42 localhost systemd[1]: Reached target sshd-keygen.target.
Nov 24 08:47:42 localhost systemd[1]: System Security Services Daemon was skipped because no trigger condition checks were met.
Nov 24 08:47:42 localhost systemd[1]: Reached target User and Group Name Lookups.
Nov 24 08:47:42 localhost systemd[1]: Starting User Login Management...
Nov 24 08:47:42 localhost systemd[1]: Finished Restore /run/initramfs on shutdown.
Nov 24 08:47:42 localhost chronyd[831]: chronyd version 4.8 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +NTS +SECHASH +IPV6 +DEBUG)
Nov 24 08:47:42 localhost chronyd[831]: Loaded 0 symmetric keys
Nov 24 08:47:42 localhost chronyd[831]: Using right/UTC timezone to obtain leap second data
Nov 24 08:47:42 localhost chronyd[831]: Loaded seccomp filter (level 2)
Nov 24 08:47:42 localhost systemd[1]: Started NTP client/server.
Nov 24 08:47:42 localhost kernel: Warning: Deprecated Driver is detected: nft_compat will not be maintained in a future major release and may be disabled
Nov 24 08:47:42 localhost kernel: Warning: Deprecated Driver is detected: nft_compat_module_init will not be maintained in a future major release and may be disabled
Nov 24 08:47:42 localhost systemd-logind[823]: New seat seat0.
Nov 24 08:47:42 localhost systemd-logind[823]: Watching system buttons on /dev/input/event0 (Power Button)
Nov 24 08:47:42 localhost systemd-logind[823]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Nov 24 08:47:42 localhost systemd[1]: Started User Login Management.
Nov 24 08:47:42 localhost iptables.init[817]: iptables: Applying firewall rules: [  OK  ]
Nov 24 08:47:42 localhost systemd[1]: Finished IPv4 firewall with iptables.
Nov 24 08:47:42 localhost cloud-init[840]: Cloud-init v. 24.4-7.el9 running 'init-local' at Mon, 24 Nov 2025 08:47:42 +0000. Up 6.36 seconds.
Nov 24 08:47:42 localhost kernel: ISO 9660 Extensions: Microsoft Joliet Level 3
Nov 24 08:47:42 localhost kernel: ISO 9660 Extensions: RRIP_1991A
Nov 24 08:47:42 localhost systemd[1]: run-cloud\x2dinit-tmp-tmpb89p5b3b.mount: Deactivated successfully.
Nov 24 08:47:43 localhost systemd[1]: Starting Hostname Service...
Nov 24 08:47:43 localhost systemd[1]: Started Hostname Service.
Nov 24 08:47:43 np0005533252.novalocal systemd-hostnamed[854]: Hostname set to <np0005533252.novalocal> (static)
Nov 24 08:47:43 np0005533252.novalocal systemd[1]: Finished Cloud-init: Local Stage (pre-network).
Nov 24 08:47:43 np0005533252.novalocal systemd[1]: Reached target Preparation for Network.
Nov 24 08:47:43 np0005533252.novalocal systemd[1]: Starting Network Manager...
Nov 24 08:47:43 np0005533252.novalocal NetworkManager[858]: <info>  [1763974063.2673] NetworkManager (version 1.54.1-1.el9) is starting... (boot:e3886539-ea72-4427-b33b-0060f8fadd32)
Nov 24 08:47:43 np0005533252.novalocal NetworkManager[858]: <info>  [1763974063.2677] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Nov 24 08:47:43 np0005533252.novalocal NetworkManager[858]: <info>  [1763974063.2808] manager[0x55c85cd4c080]: monitoring kernel firmware directory '/lib/firmware'.
Nov 24 08:47:43 np0005533252.novalocal NetworkManager[858]: <info>  [1763974063.2846] hostname: hostname: using hostnamed
Nov 24 08:47:43 np0005533252.novalocal NetworkManager[858]: <info>  [1763974063.2847] hostname: static hostname changed from (none) to "np0005533252.novalocal"
Nov 24 08:47:43 np0005533252.novalocal NetworkManager[858]: <info>  [1763974063.2850] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Nov 24 08:47:43 np0005533252.novalocal NetworkManager[858]: <info>  [1763974063.2950] manager[0x55c85cd4c080]: rfkill: Wi-Fi hardware radio set enabled
Nov 24 08:47:43 np0005533252.novalocal NetworkManager[858]: <info>  [1763974063.2950] manager[0x55c85cd4c080]: rfkill: WWAN hardware radio set enabled
Nov 24 08:47:43 np0005533252.novalocal systemd[1]: Listening on Load/Save RF Kill Switch Status /dev/rfkill Watch.
Nov 24 08:47:43 np0005533252.novalocal NetworkManager[858]: <info>  [1763974063.3031] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-team.so)
Nov 24 08:47:43 np0005533252.novalocal NetworkManager[858]: <info>  [1763974063.3031] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Nov 24 08:47:43 np0005533252.novalocal NetworkManager[858]: <info>  [1763974063.3031] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Nov 24 08:47:43 np0005533252.novalocal NetworkManager[858]: <info>  [1763974063.3032] manager: Networking is enabled by state file
Nov 24 08:47:43 np0005533252.novalocal NetworkManager[858]: <info>  [1763974063.3033] settings: Loaded settings plugin: keyfile (internal)
Nov 24 08:47:43 np0005533252.novalocal NetworkManager[858]: <info>  [1763974063.3067] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Nov 24 08:47:43 np0005533252.novalocal NetworkManager[858]: <info>  [1763974063.3087] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Nov 24 08:47:43 np0005533252.novalocal NetworkManager[858]: <info>  [1763974063.3112] dhcp: init: Using DHCP client 'internal'
Nov 24 08:47:43 np0005533252.novalocal NetworkManager[858]: <info>  [1763974063.3115] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Nov 24 08:47:43 np0005533252.novalocal NetworkManager[858]: <info>  [1763974063.3127] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 24 08:47:43 np0005533252.novalocal NetworkManager[858]: <info>  [1763974063.3138] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Nov 24 08:47:43 np0005533252.novalocal NetworkManager[858]: <info>  [1763974063.3145] device (lo): Activation: starting connection 'lo' (3dc9a73f-5008-4d54-b1f5-ae0263930821)
Nov 24 08:47:43 np0005533252.novalocal systemd[1]: Starting Network Manager Script Dispatcher Service...
Nov 24 08:47:43 np0005533252.novalocal NetworkManager[858]: <info>  [1763974063.3152] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Nov 24 08:47:43 np0005533252.novalocal NetworkManager[858]: <info>  [1763974063.3155] device (eth0): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 24 08:47:43 np0005533252.novalocal systemd[1]: Started Network Manager.
Nov 24 08:47:43 np0005533252.novalocal NetworkManager[858]: <info>  [1763974063.3180] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Nov 24 08:47:43 np0005533252.novalocal NetworkManager[858]: <info>  [1763974063.3185] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Nov 24 08:47:43 np0005533252.novalocal NetworkManager[858]: <info>  [1763974063.3186] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Nov 24 08:47:43 np0005533252.novalocal NetworkManager[858]: <info>  [1763974063.3187] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Nov 24 08:47:43 np0005533252.novalocal NetworkManager[858]: <info>  [1763974063.3188] device (eth0): carrier: link connected
Nov 24 08:47:43 np0005533252.novalocal NetworkManager[858]: <info>  [1763974063.3191] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Nov 24 08:47:43 np0005533252.novalocal systemd[1]: Reached target Network.
Nov 24 08:47:43 np0005533252.novalocal NetworkManager[858]: <info>  [1763974063.3197] device (eth0): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Nov 24 08:47:43 np0005533252.novalocal NetworkManager[858]: <info>  [1763974063.3205] policy: auto-activating connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Nov 24 08:47:43 np0005533252.novalocal NetworkManager[858]: <info>  [1763974063.3208] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Nov 24 08:47:43 np0005533252.novalocal NetworkManager[858]: <info>  [1763974063.3209] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 24 08:47:43 np0005533252.novalocal NetworkManager[858]: <info>  [1763974063.3211] manager: NetworkManager state is now CONNECTING
Nov 24 08:47:43 np0005533252.novalocal NetworkManager[858]: <info>  [1763974063.3211] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 24 08:47:43 np0005533252.novalocal NetworkManager[858]: <info>  [1763974063.3217] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 24 08:47:43 np0005533252.novalocal NetworkManager[858]: <info>  [1763974063.3219] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Nov 24 08:47:43 np0005533252.novalocal systemd[1]: Starting Network Manager Wait Online...
Nov 24 08:47:43 np0005533252.novalocal systemd[1]: Starting GSSAPI Proxy Daemon...
Nov 24 08:47:43 np0005533252.novalocal systemd[1]: Started Network Manager Script Dispatcher Service.
Nov 24 08:47:43 np0005533252.novalocal NetworkManager[858]: <info>  [1763974063.3331] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Nov 24 08:47:43 np0005533252.novalocal NetworkManager[858]: <info>  [1763974063.3334] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Nov 24 08:47:43 np0005533252.novalocal NetworkManager[858]: <info>  [1763974063.3339] device (lo): Activation: successful, device activated.
Nov 24 08:47:43 np0005533252.novalocal systemd[1]: Started GSSAPI Proxy Daemon.
Nov 24 08:47:43 np0005533252.novalocal systemd[1]: RPC security service for NFS client and server was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Nov 24 08:47:43 np0005533252.novalocal systemd[1]: Reached target NFS client services.
Nov 24 08:47:43 np0005533252.novalocal systemd[1]: Reached target Preparation for Remote File Systems.
Nov 24 08:47:43 np0005533252.novalocal systemd[1]: Reached target Remote File Systems.
Nov 24 08:47:43 np0005533252.novalocal systemd[1]: TPM2 PCR Barrier (User) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Nov 24 08:47:45 np0005533252.novalocal NetworkManager[858]: <info>  [1763974065.3089] dhcp4 (eth0): state changed new lease, address=38.129.56.228
Nov 24 08:47:45 np0005533252.novalocal NetworkManager[858]: <info>  [1763974065.3100] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Nov 24 08:47:45 np0005533252.novalocal NetworkManager[858]: <info>  [1763974065.3131] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 24 08:47:45 np0005533252.novalocal NetworkManager[858]: <info>  [1763974065.3160] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 24 08:47:45 np0005533252.novalocal NetworkManager[858]: <info>  [1763974065.3162] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 24 08:47:45 np0005533252.novalocal NetworkManager[858]: <info>  [1763974065.3166] manager: NetworkManager state is now CONNECTED_SITE
Nov 24 08:47:45 np0005533252.novalocal NetworkManager[858]: <info>  [1763974065.3176] device (eth0): Activation: successful, device activated.
Nov 24 08:47:45 np0005533252.novalocal NetworkManager[858]: <info>  [1763974065.3182] manager: NetworkManager state is now CONNECTED_GLOBAL
Nov 24 08:47:45 np0005533252.novalocal NetworkManager[858]: <info>  [1763974065.3184] manager: startup complete
Nov 24 08:47:45 np0005533252.novalocal systemd[1]: Finished Network Manager Wait Online.
Nov 24 08:47:45 np0005533252.novalocal systemd[1]: Starting Cloud-init: Network Stage...
Nov 24 08:47:45 np0005533252.novalocal cloud-init[921]: Cloud-init v. 24.4-7.el9 running 'init' at Mon, 24 Nov 2025 08:47:45 +0000. Up 9.20 seconds.
Nov 24 08:47:45 np0005533252.novalocal cloud-init[921]: ci-info: +++++++++++++++++++++++++++++++++++++++Net device info+++++++++++++++++++++++++++++++++++++++
Nov 24 08:47:45 np0005533252.novalocal cloud-init[921]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Nov 24 08:47:45 np0005533252.novalocal cloud-init[921]: ci-info: | Device |  Up  |           Address            |      Mask     | Scope  |     Hw-Address    |
Nov 24 08:47:45 np0005533252.novalocal cloud-init[921]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Nov 24 08:47:45 np0005533252.novalocal cloud-init[921]: ci-info: |  eth0  | True |        38.129.56.228         | 255.255.255.0 | global | fa:16:3e:c1:ba:0c |
Nov 24 08:47:45 np0005533252.novalocal cloud-init[921]: ci-info: |  eth0  | True | fe80::f816:3eff:fec1:ba0c/64 |       .       |  link  | fa:16:3e:c1:ba:0c |
Nov 24 08:47:45 np0005533252.novalocal cloud-init[921]: ci-info: |   lo   | True |          127.0.0.1           |   255.0.0.0   |  host  |         .         |
Nov 24 08:47:45 np0005533252.novalocal cloud-init[921]: ci-info: |   lo   | True |           ::1/128            |       .       |  host  |         .         |
Nov 24 08:47:45 np0005533252.novalocal cloud-init[921]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Nov 24 08:47:45 np0005533252.novalocal cloud-init[921]: ci-info: ++++++++++++++++++++++++++++++++Route IPv4 info++++++++++++++++++++++++++++++++
Nov 24 08:47:45 np0005533252.novalocal cloud-init[921]: ci-info: +-------+-----------------+-------------+-----------------+-----------+-------+
Nov 24 08:47:45 np0005533252.novalocal cloud-init[921]: ci-info: | Route |   Destination   |   Gateway   |     Genmask     | Interface | Flags |
Nov 24 08:47:45 np0005533252.novalocal cloud-init[921]: ci-info: +-------+-----------------+-------------+-----------------+-----------+-------+
Nov 24 08:47:45 np0005533252.novalocal cloud-init[921]: ci-info: |   0   |     0.0.0.0     | 38.129.56.1 |     0.0.0.0     |    eth0   |   UG  |
Nov 24 08:47:45 np0005533252.novalocal cloud-init[921]: ci-info: |   1   |   38.129.56.0   |   0.0.0.0   |  255.255.255.0  |    eth0   |   U   |
Nov 24 08:47:45 np0005533252.novalocal cloud-init[921]: ci-info: |   2   | 169.254.169.254 | 38.129.56.5 | 255.255.255.255 |    eth0   |  UGH  |
Nov 24 08:47:45 np0005533252.novalocal cloud-init[921]: ci-info: +-------+-----------------+-------------+-----------------+-----------+-------+
Nov 24 08:47:45 np0005533252.novalocal cloud-init[921]: ci-info: +++++++++++++++++++Route IPv6 info+++++++++++++++++++
Nov 24 08:47:45 np0005533252.novalocal cloud-init[921]: ci-info: +-------+-------------+---------+-----------+-------+
Nov 24 08:47:45 np0005533252.novalocal cloud-init[921]: ci-info: | Route | Destination | Gateway | Interface | Flags |
Nov 24 08:47:45 np0005533252.novalocal cloud-init[921]: ci-info: +-------+-------------+---------+-----------+-------+
Nov 24 08:47:45 np0005533252.novalocal cloud-init[921]: ci-info: |   1   |  fe80::/64  |    ::   |    eth0   |   U   |
Nov 24 08:47:45 np0005533252.novalocal cloud-init[921]: ci-info: |   3   |    local    |    ::   |    eth0   |   U   |
Nov 24 08:47:45 np0005533252.novalocal cloud-init[921]: ci-info: |   4   |  multicast  |    ::   |    eth0   |   U   |
Nov 24 08:47:45 np0005533252.novalocal cloud-init[921]: ci-info: +-------+-------------+---------+-----------+-------+
Nov 24 08:47:46 np0005533252.novalocal useradd[988]: new group: name=cloud-user, GID=1001
Nov 24 08:47:46 np0005533252.novalocal useradd[988]: new user: name=cloud-user, UID=1001, GID=1001, home=/home/cloud-user, shell=/bin/bash, from=none
Nov 24 08:47:46 np0005533252.novalocal useradd[988]: add 'cloud-user' to group 'adm'
Nov 24 08:47:46 np0005533252.novalocal useradd[988]: add 'cloud-user' to group 'systemd-journal'
Nov 24 08:47:46 np0005533252.novalocal useradd[988]: add 'cloud-user' to shadow group 'adm'
Nov 24 08:47:46 np0005533252.novalocal useradd[988]: add 'cloud-user' to shadow group 'systemd-journal'
Nov 24 08:47:46 np0005533252.novalocal cloud-init[921]: Generating public/private rsa key pair.
Nov 24 08:47:46 np0005533252.novalocal cloud-init[921]: Your identification has been saved in /etc/ssh/ssh_host_rsa_key
Nov 24 08:47:46 np0005533252.novalocal cloud-init[921]: Your public key has been saved in /etc/ssh/ssh_host_rsa_key.pub
Nov 24 08:47:46 np0005533252.novalocal cloud-init[921]: The key fingerprint is:
Nov 24 08:47:46 np0005533252.novalocal cloud-init[921]: SHA256:zpyeWu5GMpvaw/fpqkzvhP9uZpI0QYDmP63axQQnNXw root@np0005533252.novalocal
Nov 24 08:47:46 np0005533252.novalocal cloud-init[921]: The key's randomart image is:
Nov 24 08:47:46 np0005533252.novalocal cloud-init[921]: +---[RSA 3072]----+
Nov 24 08:47:46 np0005533252.novalocal cloud-init[921]: |     ..oo        |
Nov 24 08:47:46 np0005533252.novalocal cloud-init[921]: |    o  .o.E      |
Nov 24 08:47:46 np0005533252.novalocal cloud-init[921]: |   o  o...       |
Nov 24 08:47:46 np0005533252.novalocal cloud-init[921]: |    .  +.        |
Nov 24 08:47:46 np0005533252.novalocal cloud-init[921]: |     . .S.       |
Nov 24 08:47:46 np0005533252.novalocal cloud-init[921]: |      =B=.       |
Nov 24 08:47:46 np0005533252.novalocal cloud-init[921]: |     .oXXo       |
Nov 24 08:47:46 np0005533252.novalocal cloud-init[921]: |     =*O=.+.     |
Nov 24 08:47:46 np0005533252.novalocal cloud-init[921]: |    oo*OX@*      |
Nov 24 08:47:46 np0005533252.novalocal cloud-init[921]: +----[SHA256]-----+
Nov 24 08:47:46 np0005533252.novalocal cloud-init[921]: Generating public/private ecdsa key pair.
Nov 24 08:47:46 np0005533252.novalocal cloud-init[921]: Your identification has been saved in /etc/ssh/ssh_host_ecdsa_key
Nov 24 08:47:46 np0005533252.novalocal cloud-init[921]: Your public key has been saved in /etc/ssh/ssh_host_ecdsa_key.pub
Nov 24 08:47:46 np0005533252.novalocal cloud-init[921]: The key fingerprint is:
Nov 24 08:47:46 np0005533252.novalocal cloud-init[921]: SHA256:ZR0jNIt3UA2D6cs/W7BzPpLQPrPPUSlek7X4asUlSkQ root@np0005533252.novalocal
Nov 24 08:47:46 np0005533252.novalocal cloud-init[921]: The key's randomart image is:
Nov 24 08:47:46 np0005533252.novalocal cloud-init[921]: +---[ECDSA 256]---+
Nov 24 08:47:46 np0005533252.novalocal cloud-init[921]: |         .==Eo   |
Nov 24 08:47:46 np0005533252.novalocal cloud-init[921]: |         .o*.+.  |
Nov 24 08:47:46 np0005533252.novalocal cloud-init[921]: |        ..=.o   .|
Nov 24 08:47:46 np0005533252.novalocal cloud-init[921]: |         +... o *|
Nov 24 08:47:46 np0005533252.novalocal cloud-init[921]: |        S. +.+.Bo|
Nov 24 08:47:46 np0005533252.novalocal cloud-init[921]: |          + +o++.|
Nov 24 08:47:46 np0005533252.novalocal cloud-init[921]: |           +oo=. |
Nov 24 08:47:46 np0005533252.novalocal cloud-init[921]: |            OBo. |
Nov 24 08:47:46 np0005533252.novalocal cloud-init[921]: |            +X+. |
Nov 24 08:47:46 np0005533252.novalocal cloud-init[921]: +----[SHA256]-----+
Nov 24 08:47:46 np0005533252.novalocal cloud-init[921]: Generating public/private ed25519 key pair.
Nov 24 08:47:46 np0005533252.novalocal cloud-init[921]: Your identification has been saved in /etc/ssh/ssh_host_ed25519_key
Nov 24 08:47:46 np0005533252.novalocal cloud-init[921]: Your public key has been saved in /etc/ssh/ssh_host_ed25519_key.pub
Nov 24 08:47:46 np0005533252.novalocal cloud-init[921]: The key fingerprint is:
Nov 24 08:47:46 np0005533252.novalocal cloud-init[921]: SHA256:s86N8E5lA+Hy3QoUOilffzXzSRUjOJBPKV3J7GrT9k0 root@np0005533252.novalocal
Nov 24 08:47:46 np0005533252.novalocal cloud-init[921]: The key's randomart image is:
Nov 24 08:47:46 np0005533252.novalocal cloud-init[921]: +--[ED25519 256]--+
Nov 24 08:47:46 np0005533252.novalocal cloud-init[921]: |        o.+ *o.oo|
Nov 24 08:47:46 np0005533252.novalocal cloud-init[921]: |       + = * +. o|
Nov 24 08:47:46 np0005533252.novalocal cloud-init[921]: |    . = = + o +. |
Nov 24 08:47:46 np0005533252.novalocal cloud-init[921]: |     o * + o o.+.|
Nov 24 08:47:46 np0005533252.novalocal cloud-init[921]: |      . S * =  ..|
Nov 24 08:47:46 np0005533252.novalocal cloud-init[921]: |         * B o  E|
Nov 24 08:47:46 np0005533252.novalocal cloud-init[921]: |      . o o o ...|
Nov 24 08:47:46 np0005533252.novalocal cloud-init[921]: |       * o     ..|
Nov 24 08:47:46 np0005533252.novalocal cloud-init[921]: |       .* .      |
Nov 24 08:47:46 np0005533252.novalocal cloud-init[921]: +----[SHA256]-----+
Nov 24 08:47:46 np0005533252.novalocal systemd[1]: Finished Cloud-init: Network Stage.
Nov 24 08:47:46 np0005533252.novalocal systemd[1]: Reached target Cloud-config availability.
Nov 24 08:47:46 np0005533252.novalocal systemd[1]: Reached target Network is Online.
Nov 24 08:47:46 np0005533252.novalocal systemd[1]: Starting Cloud-init: Config Stage...
Nov 24 08:47:46 np0005533252.novalocal systemd[1]: Starting Crash recovery kernel arming...
Nov 24 08:47:46 np0005533252.novalocal systemd[1]: Starting Notify NFS peers of a restart...
Nov 24 08:47:46 np0005533252.novalocal systemd[1]: Starting System Logging Service...
Nov 24 08:47:46 np0005533252.novalocal sm-notify[1004]: Version 2.5.4 starting
Nov 24 08:47:46 np0005533252.novalocal systemd[1]: Starting OpenSSH server daemon...
Nov 24 08:47:46 np0005533252.novalocal systemd[1]: Starting Permit User Sessions...
Nov 24 08:47:46 np0005533252.novalocal systemd[1]: Started Notify NFS peers of a restart.
Nov 24 08:47:46 np0005533252.novalocal sshd[1006]: Server listening on 0.0.0.0 port 22.
Nov 24 08:47:46 np0005533252.novalocal sshd[1006]: Server listening on :: port 22.
Nov 24 08:47:46 np0005533252.novalocal systemd[1]: Started OpenSSH server daemon.
Nov 24 08:47:46 np0005533252.novalocal systemd[1]: Finished Permit User Sessions.
Nov 24 08:47:46 np0005533252.novalocal systemd[1]: Started Command Scheduler.
Nov 24 08:47:46 np0005533252.novalocal systemd[1]: Started Getty on tty1.
Nov 24 08:47:46 np0005533252.novalocal systemd[1]: Started Serial Getty on ttyS0.
Nov 24 08:47:46 np0005533252.novalocal systemd[1]: Reached target Login Prompts.
Nov 24 08:47:46 np0005533252.novalocal crond[1009]: (CRON) STARTUP (1.5.7)
Nov 24 08:47:46 np0005533252.novalocal crond[1009]: (CRON) INFO (Syslog will be used instead of sendmail.)
Nov 24 08:47:46 np0005533252.novalocal crond[1009]: (CRON) INFO (RANDOM_DELAY will be scaled with factor 55% if used.)
Nov 24 08:47:46 np0005533252.novalocal crond[1009]: (CRON) INFO (running with inotify support)
Nov 24 08:47:47 np0005533252.novalocal rsyslogd[1005]: [origin software="rsyslogd" swVersion="8.2506.0-2.el9" x-pid="1005" x-info="https://www.rsyslog.com"] start
Nov 24 08:47:47 np0005533252.novalocal rsyslogd[1005]: imjournal: No statefile exists, /var/lib/rsyslog/imjournal.state will be created (ignore if this is first run): No such file or directory [v8.2506.0-2.el9 try https://www.rsyslog.com/e/2040 ]
Nov 24 08:47:47 np0005533252.novalocal systemd[1]: Started System Logging Service.
Nov 24 08:47:47 np0005533252.novalocal systemd[1]: Reached target Multi-User System.
Nov 24 08:47:47 np0005533252.novalocal systemd[1]: Starting Record Runlevel Change in UTMP...
Nov 24 08:47:47 np0005533252.novalocal systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Nov 24 08:47:47 np0005533252.novalocal systemd[1]: Finished Record Runlevel Change in UTMP.
Nov 24 08:47:47 np0005533252.novalocal rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 24 08:47:47 np0005533252.novalocal kdumpctl[1018]: kdump: No kdump initial ramdisk found.
Nov 24 08:47:47 np0005533252.novalocal kdumpctl[1018]: kdump: Rebuilding /boot/initramfs-5.14.0-639.el9.x86_64kdump.img
Nov 24 08:47:47 np0005533252.novalocal cloud-init[1127]: Cloud-init v. 24.4-7.el9 running 'modules:config' at Mon, 24 Nov 2025 08:47:47 +0000. Up 10.83 seconds.
Nov 24 08:47:47 np0005533252.novalocal systemd[1]: Finished Cloud-init: Config Stage.
Nov 24 08:47:47 np0005533252.novalocal systemd[1]: Starting Cloud-init: Final Stage...
Nov 24 08:47:47 np0005533252.novalocal sshd-session[1173]: Unable to negotiate with 38.102.83.114 port 59792: no matching host key type found. Their offer: ssh-ed25519,ssh-ed25519-cert-v01@openssh.com [preauth]
Nov 24 08:47:47 np0005533252.novalocal sshd-session[1188]: Connection reset by 38.102.83.114 port 59808 [preauth]
Nov 24 08:47:47 np0005533252.novalocal sshd-session[1198]: Unable to negotiate with 38.102.83.114 port 59824: no matching host key type found. Their offer: ecdsa-sha2-nistp384,ecdsa-sha2-nistp384-cert-v01@openssh.com [preauth]
Nov 24 08:47:47 np0005533252.novalocal sshd-session[1211]: Unable to negotiate with 38.102.83.114 port 59836: no matching host key type found. Their offer: ecdsa-sha2-nistp521,ecdsa-sha2-nistp521-cert-v01@openssh.com [preauth]
Nov 24 08:47:47 np0005533252.novalocal sshd-session[1154]: Connection closed by 38.102.83.114 port 59784 [preauth]
Nov 24 08:47:47 np0005533252.novalocal sshd-session[1223]: Connection reset by 38.102.83.114 port 59844 [preauth]
Nov 24 08:47:47 np0005533252.novalocal sshd-session[1256]: Unable to negotiate with 38.102.83.114 port 59872: no matching host key type found. Their offer: ssh-rsa,ssh-rsa-cert-v01@openssh.com [preauth]
Nov 24 08:47:47 np0005533252.novalocal sshd-session[1266]: Unable to negotiate with 38.102.83.114 port 59888: no matching host key type found. Their offer: ssh-dss,ssh-dss-cert-v01@openssh.com [preauth]
Nov 24 08:47:47 np0005533252.novalocal dracut[1283]: dracut-057-102.git20250818.el9
Nov 24 08:47:47 np0005533252.novalocal sshd-session[1247]: Connection closed by 38.102.83.114 port 59860 [preauth]
Nov 24 08:47:47 np0005533252.novalocal cloud-init[1301]: Cloud-init v. 24.4-7.el9 running 'modules:final' at Mon, 24 Nov 2025 08:47:47 +0000. Up 11.23 seconds.
Nov 24 08:47:47 np0005533252.novalocal cloud-init[1312]: #############################################################
Nov 24 08:47:47 np0005533252.novalocal cloud-init[1314]: -----BEGIN SSH HOST KEY FINGERPRINTS-----
Nov 24 08:47:47 np0005533252.novalocal dracut[1285]: Executing: /usr/bin/dracut --quiet --hostonly --hostonly-cmdline --hostonly-i18n --hostonly-mode strict --hostonly-nics  --mount "/dev/disk/by-uuid/47e3724e-7a1b-439a-9543-b98c9a290709 /sysroot xfs rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,noquota" --squash-compressor zstd --no-hostonly-default-device --add-confdir /lib/kdump/dracut.conf.d -f /boot/initramfs-5.14.0-639.el9.x86_64kdump.img 5.14.0-639.el9.x86_64
Nov 24 08:47:47 np0005533252.novalocal cloud-init[1322]: 256 SHA256:ZR0jNIt3UA2D6cs/W7BzPpLQPrPPUSlek7X4asUlSkQ root@np0005533252.novalocal (ECDSA)
Nov 24 08:47:47 np0005533252.novalocal cloud-init[1327]: 256 SHA256:s86N8E5lA+Hy3QoUOilffzXzSRUjOJBPKV3J7GrT9k0 root@np0005533252.novalocal (ED25519)
Nov 24 08:47:47 np0005533252.novalocal cloud-init[1332]: 3072 SHA256:zpyeWu5GMpvaw/fpqkzvhP9uZpI0QYDmP63axQQnNXw root@np0005533252.novalocal (RSA)
Nov 24 08:47:47 np0005533252.novalocal cloud-init[1334]: -----END SSH HOST KEY FINGERPRINTS-----
Nov 24 08:47:47 np0005533252.novalocal cloud-init[1336]: #############################################################
Nov 24 08:47:47 np0005533252.novalocal cloud-init[1301]: Cloud-init v. 24.4-7.el9 finished at Mon, 24 Nov 2025 08:47:47 +0000. Datasource DataSourceConfigDrive [net,ver=2][source=/dev/sr0].  Up 11.39 seconds
Nov 24 08:47:47 np0005533252.novalocal systemd[1]: Finished Cloud-init: Final Stage.
Nov 24 08:47:47 np0005533252.novalocal systemd[1]: Reached target Cloud-init target.
Nov 24 08:47:48 np0005533252.novalocal dracut[1285]: dracut module 'systemd-networkd' will not be installed, because command 'networkctl' could not be found!
Nov 24 08:47:48 np0005533252.novalocal dracut[1285]: dracut module 'systemd-networkd' will not be installed, because command '/usr/lib/systemd/systemd-networkd' could not be found!
Nov 24 08:47:48 np0005533252.novalocal dracut[1285]: dracut module 'systemd-networkd' will not be installed, because command '/usr/lib/systemd/systemd-networkd-wait-online' could not be found!
Nov 24 08:47:48 np0005533252.novalocal dracut[1285]: dracut module 'systemd-resolved' will not be installed, because command 'resolvectl' could not be found!
Nov 24 08:47:48 np0005533252.novalocal dracut[1285]: dracut module 'systemd-resolved' will not be installed, because command '/usr/lib/systemd/systemd-resolved' could not be found!
Nov 24 08:47:48 np0005533252.novalocal dracut[1285]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-timesyncd' could not be found!
Nov 24 08:47:48 np0005533252.novalocal dracut[1285]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-time-wait-sync' could not be found!
Nov 24 08:47:48 np0005533252.novalocal dracut[1285]: dracut module 'busybox' will not be installed, because command 'busybox' could not be found!
Nov 24 08:47:48 np0005533252.novalocal dracut[1285]: dracut module 'dbus-daemon' will not be installed, because command 'dbus-daemon' could not be found!
Nov 24 08:47:48 np0005533252.novalocal dracut[1285]: dracut module 'rngd' will not be installed, because command 'rngd' could not be found!
Nov 24 08:47:48 np0005533252.novalocal dracut[1285]: dracut module 'connman' will not be installed, because command 'connmand' could not be found!
Nov 24 08:47:48 np0005533252.novalocal dracut[1285]: dracut module 'connman' will not be installed, because command 'connmanctl' could not be found!
Nov 24 08:47:48 np0005533252.novalocal dracut[1285]: dracut module 'connman' will not be installed, because command 'connmand-wait-online' could not be found!
Nov 24 08:47:48 np0005533252.novalocal dracut[1285]: dracut module 'network-wicked' will not be installed, because command 'wicked' could not be found!
Nov 24 08:47:48 np0005533252.novalocal dracut[1285]: Module 'ifcfg' will not be installed, because it's in the list to be omitted!
Nov 24 08:47:48 np0005533252.novalocal dracut[1285]: Module 'plymouth' will not be installed, because it's in the list to be omitted!
Nov 24 08:47:48 np0005533252.novalocal dracut[1285]: 62bluetooth: Could not find any command of '/usr/lib/bluetooth/bluetoothd /usr/libexec/bluetooth/bluetoothd'!
Nov 24 08:47:48 np0005533252.novalocal dracut[1285]: dracut module 'lvmmerge' will not be installed, because command 'lvm' could not be found!
Nov 24 08:47:48 np0005533252.novalocal dracut[1285]: dracut module 'lvmthinpool-monitor' will not be installed, because command 'lvm' could not be found!
Nov 24 08:47:48 np0005533252.novalocal dracut[1285]: dracut module 'btrfs' will not be installed, because command 'btrfs' could not be found!
Nov 24 08:47:48 np0005533252.novalocal dracut[1285]: dracut module 'dmraid' will not be installed, because command 'dmraid' could not be found!
Nov 24 08:47:48 np0005533252.novalocal dracut[1285]: dracut module 'lvm' will not be installed, because command 'lvm' could not be found!
Nov 24 08:47:48 np0005533252.novalocal dracut[1285]: dracut module 'mdraid' will not be installed, because command 'mdadm' could not be found!
Nov 24 08:47:48 np0005533252.novalocal dracut[1285]: dracut module 'pcsc' will not be installed, because command 'pcscd' could not be found!
Nov 24 08:47:48 np0005533252.novalocal dracut[1285]: dracut module 'tpm2-tss' will not be installed, because command 'tpm2' could not be found!
Nov 24 08:47:48 np0005533252.novalocal dracut[1285]: dracut module 'cifs' will not be installed, because command 'mount.cifs' could not be found!
Nov 24 08:47:48 np0005533252.novalocal dracut[1285]: dracut module 'iscsi' will not be installed, because command 'iscsi-iname' could not be found!
Nov 24 08:47:48 np0005533252.novalocal dracut[1285]: dracut module 'iscsi' will not be installed, because command 'iscsiadm' could not be found!
Nov 24 08:47:48 np0005533252.novalocal dracut[1285]: dracut module 'iscsi' will not be installed, because command 'iscsid' could not be found!
Nov 24 08:47:48 np0005533252.novalocal dracut[1285]: dracut module 'nvmf' will not be installed, because command 'nvme' could not be found!
Nov 24 08:47:48 np0005533252.novalocal dracut[1285]: Module 'resume' will not be installed, because it's in the list to be omitted!
Nov 24 08:47:48 np0005533252.novalocal dracut[1285]: dracut module 'biosdevname' will not be installed, because command 'biosdevname' could not be found!
Nov 24 08:47:48 np0005533252.novalocal dracut[1285]: Module 'earlykdump' will not be installed, because it's in the list to be omitted!
Nov 24 08:47:48 np0005533252.novalocal dracut[1285]: dracut module 'memstrack' will not be installed, because command 'memstrack' could not be found!
Nov 24 08:47:48 np0005533252.novalocal dracut[1285]: memstrack is not available
Nov 24 08:47:48 np0005533252.novalocal dracut[1285]: If you need to use rd.memdebug>=4, please install memstrack and procps-ng
Nov 24 08:47:48 np0005533252.novalocal dracut[1285]: dracut module 'systemd-resolved' will not be installed, because command 'resolvectl' could not be found!
Nov 24 08:47:48 np0005533252.novalocal dracut[1285]: dracut module 'systemd-resolved' will not be installed, because command '/usr/lib/systemd/systemd-resolved' could not be found!
Nov 24 08:47:48 np0005533252.novalocal dracut[1285]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-timesyncd' could not be found!
Nov 24 08:47:48 np0005533252.novalocal dracut[1285]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-time-wait-sync' could not be found!
Nov 24 08:47:48 np0005533252.novalocal dracut[1285]: dracut module 'busybox' will not be installed, because command 'busybox' could not be found!
Nov 24 08:47:48 np0005533252.novalocal dracut[1285]: dracut module 'dbus-daemon' will not be installed, because command 'dbus-daemon' could not be found!
Nov 24 08:47:48 np0005533252.novalocal dracut[1285]: dracut module 'rngd' will not be installed, because command 'rngd' could not be found!
Nov 24 08:47:48 np0005533252.novalocal dracut[1285]: dracut module 'connman' will not be installed, because command 'connmand' could not be found!
Nov 24 08:47:48 np0005533252.novalocal dracut[1285]: dracut module 'connman' will not be installed, because command 'connmanctl' could not be found!
Nov 24 08:47:48 np0005533252.novalocal dracut[1285]: dracut module 'connman' will not be installed, because command 'connmand-wait-online' could not be found!
Nov 24 08:47:48 np0005533252.novalocal dracut[1285]: dracut module 'network-wicked' will not be installed, because command 'wicked' could not be found!
Nov 24 08:47:48 np0005533252.novalocal dracut[1285]: 62bluetooth: Could not find any command of '/usr/lib/bluetooth/bluetoothd /usr/libexec/bluetooth/bluetoothd'!
Nov 24 08:47:48 np0005533252.novalocal dracut[1285]: dracut module 'lvmmerge' will not be installed, because command 'lvm' could not be found!
Nov 24 08:47:48 np0005533252.novalocal dracut[1285]: dracut module 'lvmthinpool-monitor' will not be installed, because command 'lvm' could not be found!
Nov 24 08:47:48 np0005533252.novalocal dracut[1285]: dracut module 'btrfs' will not be installed, because command 'btrfs' could not be found!
Nov 24 08:47:48 np0005533252.novalocal dracut[1285]: dracut module 'dmraid' will not be installed, because command 'dmraid' could not be found!
Nov 24 08:47:48 np0005533252.novalocal dracut[1285]: dracut module 'lvm' will not be installed, because command 'lvm' could not be found!
Nov 24 08:47:48 np0005533252.novalocal dracut[1285]: dracut module 'mdraid' will not be installed, because command 'mdadm' could not be found!
Nov 24 08:47:48 np0005533252.novalocal dracut[1285]: dracut module 'pcsc' will not be installed, because command 'pcscd' could not be found!
Nov 24 08:47:48 np0005533252.novalocal dracut[1285]: dracut module 'tpm2-tss' will not be installed, because command 'tpm2' could not be found!
Nov 24 08:47:48 np0005533252.novalocal dracut[1285]: dracut module 'cifs' will not be installed, because command 'mount.cifs' could not be found!
Nov 24 08:47:48 np0005533252.novalocal dracut[1285]: dracut module 'iscsi' will not be installed, because command 'iscsi-iname' could not be found!
Nov 24 08:47:48 np0005533252.novalocal dracut[1285]: dracut module 'iscsi' will not be installed, because command 'iscsiadm' could not be found!
Nov 24 08:47:48 np0005533252.novalocal dracut[1285]: dracut module 'iscsi' will not be installed, because command 'iscsid' could not be found!
Nov 24 08:47:49 np0005533252.novalocal dracut[1285]: dracut module 'nvmf' will not be installed, because command 'nvme' could not be found!
Nov 24 08:47:49 np0005533252.novalocal dracut[1285]: dracut module 'memstrack' will not be installed, because command 'memstrack' could not be found!
Nov 24 08:47:49 np0005533252.novalocal dracut[1285]: memstrack is not available
Nov 24 08:47:49 np0005533252.novalocal dracut[1285]: If you need to use rd.memdebug>=4, please install memstrack and procps-ng
Nov 24 08:47:49 np0005533252.novalocal dracut[1285]: *** Including module: systemd ***
Nov 24 08:47:49 np0005533252.novalocal dracut[1285]: *** Including module: fips ***
Nov 24 08:47:49 np0005533252.novalocal dracut[1285]: *** Including module: systemd-initrd ***
Nov 24 08:47:49 np0005533252.novalocal dracut[1285]: *** Including module: i18n ***
Nov 24 08:47:49 np0005533252.novalocal dracut[1285]: *** Including module: drm ***
Nov 24 08:47:50 np0005533252.novalocal dracut[1285]: *** Including module: prefixdevname ***
Nov 24 08:47:50 np0005533252.novalocal dracut[1285]: *** Including module: kernel-modules ***
Nov 24 08:47:50 np0005533252.novalocal kernel: block vda: the capability attribute has been deprecated.
Nov 24 08:47:50 np0005533252.novalocal chronyd[831]: Selected source 216.197.156.83 (2.centos.pool.ntp.org)
Nov 24 08:47:50 np0005533252.novalocal chronyd[831]: System clock TAI offset set to 37 seconds
Nov 24 08:47:50 np0005533252.novalocal dracut[1285]: *** Including module: kernel-modules-extra ***
Nov 24 08:47:50 np0005533252.novalocal dracut[1285]:   kernel-modules-extra: configuration source "/run/depmod.d" does not exist
Nov 24 08:47:50 np0005533252.novalocal dracut[1285]:   kernel-modules-extra: configuration source "/lib/depmod.d" does not exist
Nov 24 08:47:50 np0005533252.novalocal dracut[1285]:   kernel-modules-extra: parsing configuration file "/etc/depmod.d/dist.conf"
Nov 24 08:47:50 np0005533252.novalocal dracut[1285]:   kernel-modules-extra: /etc/depmod.d/dist.conf: added "updates extra built-in weak-updates" to the list of search directories
Nov 24 08:47:50 np0005533252.novalocal dracut[1285]: *** Including module: qemu ***
Nov 24 08:47:50 np0005533252.novalocal dracut[1285]: *** Including module: fstab-sys ***
Nov 24 08:47:50 np0005533252.novalocal dracut[1285]: *** Including module: rootfs-block ***
Nov 24 08:47:50 np0005533252.novalocal dracut[1285]: *** Including module: terminfo ***
Nov 24 08:47:50 np0005533252.novalocal dracut[1285]: *** Including module: udev-rules ***
Nov 24 08:47:51 np0005533252.novalocal dracut[1285]: Skipping udev rule: 91-permissions.rules
Nov 24 08:47:51 np0005533252.novalocal dracut[1285]: Skipping udev rule: 80-drivers-modprobe.rules
Nov 24 08:47:51 np0005533252.novalocal dracut[1285]: *** Including module: virtiofs ***
Nov 24 08:47:51 np0005533252.novalocal dracut[1285]: *** Including module: dracut-systemd ***
Nov 24 08:47:51 np0005533252.novalocal dracut[1285]: *** Including module: usrmount ***
Nov 24 08:47:51 np0005533252.novalocal dracut[1285]: *** Including module: base ***
Nov 24 08:47:51 np0005533252.novalocal chronyd[831]: Selected source 23.133.168.246 (2.centos.pool.ntp.org)
Nov 24 08:47:51 np0005533252.novalocal dracut[1285]: *** Including module: fs-lib ***
Nov 24 08:47:51 np0005533252.novalocal dracut[1285]: *** Including module: kdumpbase ***
Nov 24 08:47:52 np0005533252.novalocal dracut[1285]: *** Including module: microcode_ctl-fw_dir_override ***
Nov 24 08:47:52 np0005533252.novalocal dracut[1285]:   microcode_ctl module: mangling fw_dir
Nov 24 08:47:52 np0005533252.novalocal dracut[1285]:     microcode_ctl: reset fw_dir to "/lib/firmware/updates /lib/firmware"
Nov 24 08:47:52 np0005533252.novalocal dracut[1285]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel"...
Nov 24 08:47:52 np0005533252.novalocal dracut[1285]:     microcode_ctl: configuration "intel" is ignored
Nov 24 08:47:52 np0005533252.novalocal dracut[1285]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-2d-07"...
Nov 24 08:47:52 np0005533252.novalocal dracut[1285]:     microcode_ctl: configuration "intel-06-2d-07" is ignored
Nov 24 08:47:52 np0005533252.novalocal dracut[1285]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-4e-03"...
Nov 24 08:47:52 np0005533252.novalocal dracut[1285]:     microcode_ctl: configuration "intel-06-4e-03" is ignored
Nov 24 08:47:52 np0005533252.novalocal dracut[1285]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-4f-01"...
Nov 24 08:47:52 np0005533252.novalocal dracut[1285]:     microcode_ctl: configuration "intel-06-4f-01" is ignored
Nov 24 08:47:52 np0005533252.novalocal dracut[1285]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-55-04"...
Nov 24 08:47:52 np0005533252.novalocal dracut[1285]:     microcode_ctl: configuration "intel-06-55-04" is ignored
Nov 24 08:47:52 np0005533252.novalocal dracut[1285]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-5e-03"...
Nov 24 08:47:52 np0005533252.novalocal dracut[1285]:     microcode_ctl: configuration "intel-06-5e-03" is ignored
Nov 24 08:47:52 np0005533252.novalocal dracut[1285]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8c-01"...
Nov 24 08:47:52 np0005533252.novalocal dracut[1285]:     microcode_ctl: configuration "intel-06-8c-01" is ignored
Nov 24 08:47:52 np0005533252.novalocal dracut[1285]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8e-9e-0x-0xca"...
Nov 24 08:47:52 np0005533252.novalocal dracut[1285]:     microcode_ctl: configuration "intel-06-8e-9e-0x-0xca" is ignored
Nov 24 08:47:52 np0005533252.novalocal dracut[1285]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8e-9e-0x-dell"...
Nov 24 08:47:52 np0005533252.novalocal irqbalance[818]: Cannot change IRQ 25 affinity: Operation not permitted
Nov 24 08:47:52 np0005533252.novalocal irqbalance[818]: IRQ 25 affinity is now unmanaged
Nov 24 08:47:52 np0005533252.novalocal irqbalance[818]: Cannot change IRQ 31 affinity: Operation not permitted
Nov 24 08:47:52 np0005533252.novalocal irqbalance[818]: IRQ 31 affinity is now unmanaged
Nov 24 08:47:52 np0005533252.novalocal irqbalance[818]: Cannot change IRQ 28 affinity: Operation not permitted
Nov 24 08:47:52 np0005533252.novalocal irqbalance[818]: IRQ 28 affinity is now unmanaged
Nov 24 08:47:52 np0005533252.novalocal irqbalance[818]: Cannot change IRQ 32 affinity: Operation not permitted
Nov 24 08:47:52 np0005533252.novalocal irqbalance[818]: IRQ 32 affinity is now unmanaged
Nov 24 08:47:52 np0005533252.novalocal irqbalance[818]: Cannot change IRQ 30 affinity: Operation not permitted
Nov 24 08:47:52 np0005533252.novalocal irqbalance[818]: IRQ 30 affinity is now unmanaged
Nov 24 08:47:52 np0005533252.novalocal irqbalance[818]: Cannot change IRQ 29 affinity: Operation not permitted
Nov 24 08:47:52 np0005533252.novalocal irqbalance[818]: IRQ 29 affinity is now unmanaged
Nov 24 08:47:52 np0005533252.novalocal dracut[1285]:     microcode_ctl: configuration "intel-06-8e-9e-0x-dell" is ignored
Nov 24 08:47:52 np0005533252.novalocal dracut[1285]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8f-08"...
Nov 24 08:47:52 np0005533252.novalocal dracut[1285]:     microcode_ctl: configuration "intel-06-8f-08" is ignored
Nov 24 08:47:52 np0005533252.novalocal dracut[1285]:     microcode_ctl: final fw_dir: "/lib/firmware/updates /lib/firmware"
Nov 24 08:47:52 np0005533252.novalocal dracut[1285]: *** Including module: openssl ***
Nov 24 08:47:52 np0005533252.novalocal dracut[1285]: *** Including module: shutdown ***
Nov 24 08:47:52 np0005533252.novalocal dracut[1285]: *** Including module: squash ***
Nov 24 08:47:52 np0005533252.novalocal dracut[1285]: *** Including modules done ***
Nov 24 08:47:52 np0005533252.novalocal dracut[1285]: *** Installing kernel module dependencies ***
Nov 24 08:47:53 np0005533252.novalocal dracut[1285]: *** Installing kernel module dependencies done ***
Nov 24 08:47:53 np0005533252.novalocal dracut[1285]: *** Resolving executable dependencies ***
Nov 24 08:47:54 np0005533252.novalocal dracut[1285]: *** Resolving executable dependencies done ***
Nov 24 08:47:54 np0005533252.novalocal dracut[1285]: *** Generating early-microcode cpio image ***
Nov 24 08:47:54 np0005533252.novalocal dracut[1285]: *** Store current command line parameters ***
Nov 24 08:47:54 np0005533252.novalocal dracut[1285]: Stored kernel commandline:
Nov 24 08:47:54 np0005533252.novalocal dracut[1285]: No dracut internal kernel commandline stored in the initramfs
Nov 24 08:47:54 np0005533252.novalocal dracut[1285]: *** Install squash loader ***
Nov 24 08:47:55 np0005533252.novalocal systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Nov 24 08:47:55 np0005533252.novalocal dracut[1285]: *** Squashing the files inside the initramfs ***
Nov 24 08:47:56 np0005533252.novalocal dracut[1285]: *** Squashing the files inside the initramfs done ***
Nov 24 08:47:56 np0005533252.novalocal dracut[1285]: *** Creating image file '/boot/initramfs-5.14.0-639.el9.x86_64kdump.img' ***
Nov 24 08:47:56 np0005533252.novalocal dracut[1285]: *** Hardlinking files ***
Nov 24 08:47:56 np0005533252.novalocal dracut[1285]: Mode:           real
Nov 24 08:47:56 np0005533252.novalocal dracut[1285]: Files:          50
Nov 24 08:47:56 np0005533252.novalocal dracut[1285]: Linked:         0 files
Nov 24 08:47:56 np0005533252.novalocal dracut[1285]: Compared:       0 xattrs
Nov 24 08:47:56 np0005533252.novalocal dracut[1285]: Compared:       0 files
Nov 24 08:47:56 np0005533252.novalocal dracut[1285]: Saved:          0 B
Nov 24 08:47:56 np0005533252.novalocal dracut[1285]: Duration:       0.000412 seconds
Nov 24 08:47:56 np0005533252.novalocal dracut[1285]: *** Hardlinking files done ***
Nov 24 08:47:57 np0005533252.novalocal dracut[1285]: *** Creating initramfs image file '/boot/initramfs-5.14.0-639.el9.x86_64kdump.img' done ***
Nov 24 08:47:57 np0005533252.novalocal kdumpctl[1018]: kdump: kexec: loaded kdump kernel
Nov 24 08:47:57 np0005533252.novalocal kdumpctl[1018]: kdump: Starting kdump: [OK]
Nov 24 08:47:57 np0005533252.novalocal systemd[1]: Finished Crash recovery kernel arming.
Nov 24 08:47:57 np0005533252.novalocal systemd[1]: Startup finished in 1.495s (kernel) + 2.291s (initrd) + 17.479s (userspace) = 21.266s.
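
The kdumpctl messages above report that a capture kernel was loaded via kexec. On a running system this can be double-checked from sysfs; a small sketch, assuming the standard /sys/kernel/kexec_crash_loaded interface, which reads "1" once a crash kernel is armed:

    # Sketch: confirm the kdump capture kernel is loaded, as kdumpctl reported.
    # /sys/kernel/kexec_crash_loaded is the stock kernel interface for this.
    with open("/sys/kernel/kexec_crash_loaded") as f:
        print("kdump armed" if f.read().strip() == "1" else "kdump not armed")
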
Nov 24 08:48:06 np0005533252.novalocal sshd-session[4295]: Accepted publickey for zuul from 38.102.83.114 port 43630 ssh2: RSA SHA256:zhs3MiW0JhxzckYcMHQES8SMYHj1iGcomnyzmbiwor8
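
The "SHA256:..." value in the Accepted publickey line is an OpenSSH-style fingerprint: the unpadded base64 of the SHA-256 digest of the raw public-key blob, i.e. the same base64 field that appears in the authorized_key entries recorded later in this log. A sketch of the computation, using the ed25519 key added for raukadah@gmail.com further below (so it will not reproduce the RSA fingerprint shown here):

    # Sketch: compute an OpenSSH SHA256 fingerprint from an authorized_keys blob.
    import base64, hashlib

    KEY_B64 = "AAAAC3NzaC1lZDI1NTE5AAAAIFCbgz8gdERiJlk2IKOtkjQxEXejrio6ZYMJAVJYpOIp"
    digest = hashlib.sha256(base64.b64decode(KEY_B64)).digest()
    print("SHA256:" + base64.b64encode(digest).decode().rstrip("="))
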
Nov 24 08:48:06 np0005533252.novalocal systemd[1]: Created slice User Slice of UID 1000.
Nov 24 08:48:06 np0005533252.novalocal systemd[1]: Starting User Runtime Directory /run/user/1000...
Nov 24 08:48:06 np0005533252.novalocal systemd-logind[823]: New session 1 of user zuul.
Nov 24 08:48:06 np0005533252.novalocal systemd[1]: Finished User Runtime Directory /run/user/1000.
Nov 24 08:48:06 np0005533252.novalocal systemd[1]: Starting User Manager for UID 1000...
Nov 24 08:48:06 np0005533252.novalocal systemd[4299]: pam_unix(systemd-user:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 24 08:48:07 np0005533252.novalocal systemd[4299]: Queued start job for default target Main User Target.
Nov 24 08:48:07 np0005533252.novalocal systemd[4299]: Created slice User Application Slice.
Nov 24 08:48:07 np0005533252.novalocal systemd[4299]: Started Mark boot as successful after the user session has run 2 minutes.
Nov 24 08:48:07 np0005533252.novalocal systemd[4299]: Started Daily Cleanup of User's Temporary Directories.
Nov 24 08:48:07 np0005533252.novalocal systemd[4299]: Reached target Paths.
Nov 24 08:48:07 np0005533252.novalocal systemd[4299]: Reached target Timers.
Nov 24 08:48:07 np0005533252.novalocal systemd[4299]: Starting D-Bus User Message Bus Socket...
Nov 24 08:48:07 np0005533252.novalocal systemd[4299]: Starting Create User's Volatile Files and Directories...
Nov 24 08:48:07 np0005533252.novalocal systemd[4299]: Listening on D-Bus User Message Bus Socket.
Nov 24 08:48:07 np0005533252.novalocal systemd[4299]: Reached target Sockets.
Nov 24 08:48:07 np0005533252.novalocal systemd[4299]: Finished Create User's Volatile Files and Directories.
Nov 24 08:48:07 np0005533252.novalocal systemd[4299]: Reached target Basic System.
Nov 24 08:48:07 np0005533252.novalocal systemd[4299]: Reached target Main User Target.
Nov 24 08:48:07 np0005533252.novalocal systemd[4299]: Startup finished in 115ms.
Nov 24 08:48:07 np0005533252.novalocal systemd[1]: Started User Manager for UID 1000.
Nov 24 08:48:07 np0005533252.novalocal systemd[1]: Started Session 1 of User zuul.
Nov 24 08:48:07 np0005533252.novalocal sshd-session[4295]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 24 08:48:07 np0005533252.novalocal python3[4381]: ansible-setup Invoked with gather_subset=['!all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 24 08:48:11 np0005533252.novalocal python3[4409]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 24 08:48:12 np0005533252.novalocal irqbalance[818]: Cannot change IRQ 26 affinity: Operation not permitted
Nov 24 08:48:12 np0005533252.novalocal irqbalance[818]: IRQ 26 affinity is now unmanaged
Nov 24 08:48:13 np0005533252.novalocal systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Nov 24 08:48:18 np0005533252.novalocal python3[4469]: ansible-setup Invoked with gather_subset=['network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 24 08:48:19 np0005533252.novalocal python3[4509]: ansible-zuul_console Invoked with path=/tmp/console-{log_uuid}.log port=19885 state=present
Nov 24 08:48:22 np0005533252.novalocal python3[4535]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCtGjVEb/lsDO7QcEMCxozreHmfSkbPYtukhJN3wVhqpj6xeUXDPmoULx3/bUoF5EPUMcOV3spnCrShHpk7CaLVLFC6oNrQxPD181TchE78zphBpk8I1ehE8T9c7obAmyKrEcACWMj7F602jB1LiYcFYv4jlfDhyW3uTQnip2LICS2Kfa99lM5/ASVfbkov0rOqv+cDcBEhm9XXnUuxfGF0JDXhqv4Moan3wsyDreG2bhonj0B8vCTteeQ78h13an4IV58Xfard0MCw6jIS9DyQLfwpc3OLaKIMe3CC2oVRB77qysEMlCAEihHk42CgdoK8E/tovexbpxYDVKE2PymKN81ObjmT/CgplB54Mo8icraKe+Q1PzX43HsSi20RnipJFuMU33UpP94PO+WoB11gl03bBmluLjuLt4uV5EmciWyTP/feSffjrkuNiIBwXnGakV1+NRH2S8kMbnITAdJAdL3vn8XkYw9FARF1VW6T8Ft+GxeEEJxt8kii/56xDiM= zuul-build-sshkey manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 24 08:48:22 np0005533252.novalocal python3[4559]: ansible-file Invoked with state=directory path=/home/zuul/.ssh mode=448 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
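
Note that Ansible logs file modes in decimal: mode=448 above is octal 0700, and the mode=384, mode=420, mode=493, mode=511 and mode=288 values appearing in later entries are 0600, 0644, 0755, 0777 and 0440 respectively. A one-liner to confirm the conversion:

    # The mode= values in these ansible-file/copy log lines are decimal.
    for mode in (448, 384, 420, 493, 511, 288):
        print(mode, "=", oct(mode))   # 448 = 0o700, 384 = 0o600, 420 = 0o644, ...
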
Nov 24 08:48:23 np0005533252.novalocal python3[4658]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 24 08:48:23 np0005533252.novalocal python3[4729]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1763974102.7075222-252-212114450957091/source dest=/home/zuul/.ssh/id_rsa mode=384 force=False _original_basename=7f3765bc298a427e931eb426db28639c_id_rsa follow=False checksum=1ba3cc8ce402543c463affbc560046c840463cbe backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 08:48:24 np0005533252.novalocal python3[4852]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa.pub follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 24 08:48:24 np0005533252.novalocal python3[4923]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1763974103.7631724-307-182706768082732/source dest=/home/zuul/.ssh/id_rsa.pub mode=420 force=False _original_basename=7f3765bc298a427e931eb426db28639c_id_rsa.pub follow=False checksum=2646cb7a5a0d5a58175bb49a3d139e585d675669 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 08:48:25 np0005533252.novalocal python3[4971]: ansible-ping Invoked with data=pong
Nov 24 08:48:26 np0005533252.novalocal python3[4995]: ansible-setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 24 08:48:29 np0005533252.novalocal python3[5053]: ansible-zuul_debug_info Invoked with ipv4_route_required=False ipv6_route_required=False image_manifest_files=['/etc/dib-builddate.txt', '/etc/image-hostname.txt'] image_manifest=None traceroute_host=None
Nov 24 08:48:31 np0005533252.novalocal python3[5085]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 08:48:31 np0005533252.novalocal python3[5109]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 08:48:31 np0005533252.novalocal python3[5133]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 08:48:32 np0005533252.novalocal python3[5157]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 08:48:32 np0005533252.novalocal python3[5181]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 08:48:32 np0005533252.novalocal python3[5205]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 08:48:34 np0005533252.novalocal sudo[5229]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yjkmtusrluizswnmnxmmtsbxtnlopajm ; /usr/bin/python3'
Nov 24 08:48:34 np0005533252.novalocal sudo[5229]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 08:48:34 np0005533252.novalocal python3[5231]: ansible-file Invoked with path=/etc/ci state=directory owner=root group=root mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 08:48:34 np0005533252.novalocal sudo[5229]: pam_unix(sudo:session): session closed for user root
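
The sudo COMMAND entries wrap the real payload in an "echo BECOME-SUCCESS-<marker>" so that Ansible can locate where privileged output begins; the marker is a random 32-character lowercase string generated per task. A hedged sketch of assembling such a wrapper (the real construction lives in Ansible's become plugins and may differ across versions):

    # Sketch: build an Ansible-style "become" wrapper command, mirroring the
    # sudo COMMAND lines in this log. Illustrative, not Ansible's actual code.
    import random, string

    marker = "".join(random.choice(string.ascii_lowercase) for _ in range(32))
    payload = "/usr/bin/python3"
    wrapper = f"/bin/sh -c 'echo BECOME-SUCCESS-{marker} ; {payload}'"
    print(wrapper)
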
Nov 24 08:48:34 np0005533252.novalocal sudo[5307]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-esxyfrispnewkhmdzmyzumsvgngjuudx ; /usr/bin/python3'
Nov 24 08:48:34 np0005533252.novalocal sudo[5307]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 08:48:34 np0005533252.novalocal python3[5309]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/mirror_info.sh follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 24 08:48:34 np0005533252.novalocal sudo[5307]: pam_unix(sudo:session): session closed for user root
Nov 24 08:48:35 np0005533252.novalocal sudo[5380]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wmyrvsjzemkhubzubdcgqplmgxpiqolm ; /usr/bin/python3'
Nov 24 08:48:35 np0005533252.novalocal sudo[5380]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 08:48:35 np0005533252.novalocal python3[5382]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/mirror_info.sh owner=root group=root mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1763974114.5221498-32-224612580941032/source follow=False _original_basename=mirror_info.sh.j2 checksum=92d92a03afdddee82732741071f662c729080c35 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 08:48:35 np0005533252.novalocal sudo[5380]: pam_unix(sudo:session): session closed for user root
Nov 24 08:48:36 np0005533252.novalocal python3[5430]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA4Z/c9osaGGtU6X8fgELwfj/yayRurfcKA0HMFfdpPxev2dbwljysMuzoVp4OZmW1gvGtyYPSNRvnzgsaabPNKNo2ym5NToCP6UM+KSe93aln4BcM/24mXChYAbXJQ5Bqq/pIzsGs/pKetQN+vwvMxLOwTvpcsCJBXaa981RKML6xj9l/UZ7IIq1HSEKMvPLxZMWdu0Ut8DkCd5F4nOw9Wgml2uYpDCj5LLCrQQ9ChdOMz8hz6SighhNlRpPkvPaet3OXxr/ytFMu7j7vv06CaEnuMMiY2aTWN1Imin9eHAylIqFHta/3gFfQSWt9jXM7owkBLKL7ATzhaAn+fjNupw== arxcruz@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 24 08:48:36 np0005533252.novalocal python3[5454]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDS4Fn6k4deCnIlOtLWqZJyksbepjQt04j8Ed8CGx9EKkj0fKiAxiI4TadXQYPuNHMixZy4Nevjb6aDhL5Z906TfvNHKUrjrG7G26a0k8vdc61NEQ7FmcGMWRLwwc6ReDO7lFpzYKBMk4YqfWgBuGU/K6WLKiVW2cVvwIuGIaYrE1OiiX0iVUUk7KApXlDJMXn7qjSYynfO4mF629NIp8FJal38+Kv+HA+0QkE5Y2xXnzD4Lar5+keymiCHRntPppXHeLIRzbt0gxC7v3L72hpQ3BTBEzwHpeS8KY+SX1y5lRMN45thCHfJqGmARJREDjBvWG8JXOPmVIKQtZmVcD5b mandreou@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 24 08:48:36 np0005533252.novalocal python3[5478]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC9MiLfy30deHA7xPOAlew5qUq3UP2gmRMYJi8PtkjFB20/DKeWwWNnkZPqP9AayruRoo51SIiVg870gbZE2jYl+Ncx/FYDe56JeC3ySZsXoAVkC9bP7gkOGqOmJjirvAgPMI7bogVz8i+66Q4Ar7OKTp3762G4IuWPPEg4ce4Y7lx9qWocZapHYq4cYKMxrOZ7SEbFSATBbe2bPZAPKTw8do/Eny+Hq/LkHFhIeyra6cqTFQYShr+zPln0Cr+ro/pDX3bB+1ubFgTpjpkkkQsLhDfR6cCdCWM2lgnS3BTtYj5Ct9/JRPR5YOphqZz+uB+OEu2IL68hmU9vNTth1KeX rlandy@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 24 08:48:36 np0005533252.novalocal python3[5502]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFCbgz8gdERiJlk2IKOtkjQxEXejrio6ZYMJAVJYpOIp raukadah@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 24 08:48:37 np0005533252.novalocal python3[5526]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBqb3Q/9uDf4LmihQ7xeJ9gA/STIQUFPSfyyV0m8AoQi bshewale@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 24 08:48:37 np0005533252.novalocal python3[5550]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC0I8QqQx0Az2ysJt2JuffucLijhBqnsXKEIx5GyHwxVULROa8VtNFXUDH6ZKZavhiMcmfHB2+TBTda+lDP4FldYj06dGmzCY+IYGa+uDRdxHNGYjvCfLFcmLlzRK6fNbTcui+KlUFUdKe0fb9CRoGKyhlJD5GRkM1Dv+Yb6Bj+RNnmm1fVGYxzmrD2utvffYEb0SZGWxq2R9gefx1q/3wCGjeqvufEV+AskPhVGc5T7t9eyZ4qmslkLh1/nMuaIBFcr9AUACRajsvk6mXrAN1g3HlBf2gQlhi1UEyfbqIQvzzFtsbLDlSum/KmKjy818GzvWjERfQ0VkGzCd9bSLVL dviroel@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 24 08:48:37 np0005533252.novalocal python3[5574]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDLOQd4ZLtkZXQGY6UwAr/06ppWQK4fDO3HaqxPk98csyOCBXsliSKK39Bso828+5srIXiW7aI6aC9P5mwi4mUZlGPfJlQbfrcGvY+b/SocuvaGK+1RrHLoJCT52LBhwgrzlXio2jeksZeein8iaTrhsPrOAs7KggIL/rB9hEiB3NaOPWhhoCP4vlW6MEMExGcqB/1FVxXFBPnLkEyW0Lk7ycVflZl2ocRxbfjZi0+tI1Wlinp8PvSQSc/WVrAcDgKjc/mB4ODPOyYy3G8FHgfMsrXSDEyjBKgLKMsdCrAUcqJQWjkqXleXSYOV4q3pzL+9umK+q/e3P/bIoSFQzmJKTU1eDfuvPXmow9F5H54fii/Da7ezlMJ+wPGHJrRAkmzvMbALy7xwswLhZMkOGNtRcPqaKYRmIBKpw3o6bCTtcNUHOtOQnzwY8JzrM2eBWJBXAANYw+9/ho80JIiwhg29CFNpVBuHbql2YxJQNrnl90guN65rYNpDxdIluweyUf8= anbanerj@kaermorhen manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 24 08:48:37 np0005533252.novalocal python3[5598]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC3VwV8Im9kRm49lt3tM36hj4Zv27FxGo4C1Q/0jqhzFmHY7RHbmeRr8ObhwWoHjXSozKWg8FL5ER0z3hTwL0W6lez3sL7hUaCmSuZmG5Hnl3x4vTSxDI9JZ/Y65rtYiiWQo2fC5xJhU/4+0e5e/pseCm8cKRSu+SaxhO+sd6FDojA2x1BzOzKiQRDy/1zWGp/cZkxcEuB1wHI5LMzN03c67vmbu+fhZRAUO4dQkvcnj2LrhQtpa+ytvnSjr8icMDosf1OsbSffwZFyHB/hfWGAfe0eIeSA2XPraxiPknXxiPKx2MJsaUTYbsZcm3EjFdHBBMumw5rBI74zLrMRvCO9GwBEmGT4rFng1nP+yw5DB8sn2zqpOsPg1LYRwCPOUveC13P6pgsZZPh812e8v5EKnETct+5XI3dVpdw6CnNiLwAyVAF15DJvBGT/u1k0Myg/bQn+Gv9k2MSj6LvQmf6WbZu2Wgjm30z3FyCneBqTL7mLF19YXzeC0ufHz5pnO1E= dasm@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 24 08:48:38 np0005533252.novalocal python3[5622]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHUnwjB20UKmsSed9X73eGNV5AOEFccQ3NYrRW776pEk cjeanner manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 24 08:48:38 np0005533252.novalocal python3[5646]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDercCMGn8rW1C4P67tHgtflPdTeXlpyUJYH+6XDd2lR jgilaber@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 24 08:48:38 np0005533252.novalocal python3[5670]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAMI6kkg9Wg0sG7jIJmyZemEBwUn1yzNpQQd3gnulOmZ adrianfuscoarnejo@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 24 08:48:38 np0005533252.novalocal python3[5694]: ansible-authorized_key Invoked with user=zuul state=present key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPijwpQu/3jhhhBZInXNOLEH57DrknPc3PLbsRvYyJIFzwYjX+WD4a7+nGnMYS42MuZk6TJcVqgnqofVx4isoD4= ramishra@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 24 08:48:39 np0005533252.novalocal python3[5718]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGpU/BepK3qX0NRf5Np+dOBDqzQEefhNrw2DCZaH3uWW rebtoor@monolith manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 24 08:48:39 np0005533252.novalocal python3[5742]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDK0iKdi8jQTpQrDdLVH/AAgLVYyTXF7AQ1gjc/5uT3t ykarel@yatinkarel manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 24 08:48:39 np0005533252.novalocal python3[5766]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIF/V/cLotA6LZeO32VL45Hd78skuA2lJA425Sm2LlQeZ fmount@horcrux manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 24 08:48:39 np0005533252.novalocal python3[5790]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDa7QCjuDMVmRPo1rREbGwzYeBCYVN+Ou/3WKXZEC6Sr manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 24 08:48:40 np0005533252.novalocal python3[5814]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQCfNtF7NvKl915TGsGGoseUb06Hj8L/S4toWf0hExeY+F00woL6NvBlJD0nDct+P5a22I4EhvoQCRQ8reaPCm1lybR3uiRIJsj+8zkVvLwby9LXzfZorlNG9ofjd00FEmB09uW/YvTl6Q9XwwwX6tInzIOv3TMqTHHGOL74ibbj8J/FJR0cFEyj0z4WQRvtkh32xAHl83gbuINryMt0sqRI+clj2381NKL55DRLQrVw0gsfqqxiHAnXg21qWmc4J+b9e9kiuAFQjcjwTVkwJCcg3xbPwC/qokYRby/Y5S40UUd7/jEARGXT7RZgpzTuDd1oZiCVrnrqJNPaMNdVv5MLeFdf1B7iIe5aa/fGouX7AO4SdKhZUdnJmCFAGvjC6S3JMZ2wAcUl+OHnssfmdj7XL50cLo27vjuzMtLAgSqi6N99m92WCF2s8J9aVzszX7Xz9OKZCeGsiVJp3/NdABKzSEAyM9xBD/5Vho894Sav+otpySHe3p6RUTgbB5Zu8VyZRZ/UtB3ueXxyo764yrc6qWIDqrehm84Xm9g+/jpIBzGPl07NUNJpdt/6Sgf9RIKXw/7XypO5yZfUcuFNGTxLfqjTNrtgLZNcjfav6sSdVXVcMPL//XNuRdKmVFaO76eV/oGMQGr1fGcCD+N+CpI7+Q+fCNB6VFWG4nZFuI/Iuw== averdagu@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 24 08:48:40 np0005533252.novalocal python3[5838]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDq8l27xI+QlQVdS4djp9ogSoyrNE2+Ox6vKPdhSNL1J3PE5w+WCSvMz9A5gnNuH810zwbekEApbxTze/gLQJwBHA52CChfURpXrFaxY7ePXRElwKAL3mJfzBWY/c5jnNL9TCVmFJTGZkFZP3Nh+BMgZvL6xBkt3WKm6Uq18qzd9XeKcZusrA+O+uLv1fVeQnadY9RIqOCyeFYCzLWrUfTyE8x/XG0hAWIM7qpnF2cALQS2h9n4hW5ybiUN790H08wf9hFwEf5nxY9Z9dVkPFQiTSGKNBzmnCXU9skxS/xhpFjJ5duGSZdtAHe9O+nGZm9c67hxgtf8e5PDuqAdXEv2cf6e3VBAt+Bz8EKI3yosTj0oZHfwr42Yzb1l/SKy14Rggsrc9KAQlrGXan6+u2jcQqqx7l+SWmnpFiWTV9u5cWj2IgOhApOitmRBPYqk9rE2usfO0hLn/Pj/R/Nau4803e1/EikdLE7Ps95s9mX5jRDjAoUa2JwFF5RsVFyL910= ashigupt@ashigupt.remote.csb manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 24 08:48:40 np0005533252.novalocal python3[5862]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOKLl0NYKwoZ/JY5KeZU8VwRAggeOxqQJeoqp3dsAaY9 manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 24 08:48:40 np0005533252.novalocal python3[5886]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIASASQOH2BcOyLKuuDOdWZlPi2orcjcA8q4400T73DLH evallesp@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 24 08:48:41 np0005533252.novalocal python3[5910]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILeBWlamUph+jRKV2qrx1PGU7vWuGIt5+z9k96I8WehW amsinha@amsinha-mac manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 24 08:48:41 np0005533252.novalocal python3[5934]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIANvVgvJBlK3gb1yz5uef/JqIGq4HLEmY2dYA8e37swb morenod@redhat-laptop manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 24 08:48:41 np0005533252.novalocal python3[5958]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQDZdI7t1cxYx65heVI24HTV4F7oQLW1zyfxHreL2TIJKxjyrUUKIFEUmTutcBlJRLNT2Eoix6x1sOw9YrchloCLcn//SGfTElr9mSc5jbjb7QXEU+zJMhtxyEJ1Po3CUGnj7ckiIXw7wcawZtrEOAQ9pH3ExYCJcEMiyNjRQZCxT3tPK+S4B95EWh5Fsrz9CkwpjNRPPH7LigCeQTM3Wc7r97utAslBUUvYceDSLA7rMgkitJE38b7rZBeYzsGQ8YYUBjTCtehqQXxCRjizbHWaaZkBU+N3zkKB6n/iCNGIO690NK7A/qb6msTijiz1PeuM8ThOsi9qXnbX5v0PoTpcFSojV7NHAQ71f0XXuS43FhZctT+Dcx44dT8Fb5vJu2cJGrk+qF8ZgJYNpRS7gPg0EG2EqjK7JMf9ULdjSu0r+KlqIAyLvtzT4eOnQipoKlb/WG5D/0ohKv7OMQ352ggfkBFIQsRXyyTCT98Ft9juqPuahi3CAQmP4H9dyE+7+Kz437PEtsxLmfm6naNmWi7Ee1DqWPwS8rEajsm4sNM4wW9gdBboJQtc0uZw0DfLj1I9r3Mc8Ol0jYtz0yNQDSzVLrGCaJlC311trU70tZ+ZkAVV6Mn8lOhSbj1cK0lvSr6ZK4dgqGl3I1eTZJJhbLNdg7UOVaiRx9543+C/p/As7w== brjackma@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 24 08:48:41 np0005533252.novalocal python3[5982]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKwedoZ0TWPJX/z/4TAbO/kKcDZOQVgRH0hAqrL5UCI1 vcastell@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 24 08:48:42 np0005533252.novalocal python3[6006]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEmv8sE8GCk6ZTPIqF0FQrttBdL3mq7rCm/IJy0xDFh7 michburk@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 24 08:48:42 np0005533252.novalocal python3[6030]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICy6GpGEtwevXEEn4mmLR5lmSLe23dGgAvzkB9DMNbkf rsafrono@rsafrono manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
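
Each ansible-authorized_key invocation above idempotently ensures that one public key is present in /home/zuul/.ssh/authorized_keys. A simplified sketch of that behavior (the real module additionally handles the exclusive mode, key options, comments, and SELinux attributes visible in the logged parameters):

    # Sketch: idempotently add one public key to authorized_keys,
    # roughly what each ansible-authorized_key call above does.
    import os

    def add_key(path: str, key: str) -> None:
        lines = []
        if os.path.exists(path):
            with open(path) as f:
                lines = f.read().splitlines()
        if key not in lines:                  # already present -> no change
            with open(path, "a") as f:
                f.write(key + "\n")
        os.chmod(path, 0o600)                 # keep sshd-acceptable permissions

    add_key("/home/zuul/.ssh/authorized_keys",
            "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFCbgz8gdERiJlk2IKOtkjQxEXejrio6ZYMJAVJYpOIp raukadah@gmail.com")
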
Nov 24 08:48:45 np0005533252.novalocal sudo[6054]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jrhtejschznnvdhdeebssdoplhntttia ; /usr/bin/python3'
Nov 24 08:48:45 np0005533252.novalocal sudo[6054]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 08:48:45 np0005533252.novalocal python3[6056]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Nov 24 08:48:45 np0005533252.novalocal systemd[1]: Starting Time & Date Service...
Nov 24 08:48:45 np0005533252.novalocal systemd[1]: Started Time & Date Service.
Nov 24 08:48:45 np0005533252.novalocal systemd-timedated[6058]: Changed time zone to 'UTC' (UTC).
Nov 24 08:48:45 np0005533252.novalocal sudo[6054]: pam_unix(sudo:session): session closed for user root
Nov 24 08:48:45 np0005533252.novalocal sudo[6085]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kqhjobuicvlyhzzysmcjssvrmkrtovji ; /usr/bin/python3'
Nov 24 08:48:45 np0005533252.novalocal sudo[6085]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 08:48:46 np0005533252.novalocal python3[6087]: ansible-file Invoked with path=/etc/nodepool state=directory mode=511 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 08:48:46 np0005533252.novalocal sudo[6085]: pam_unix(sudo:session): session closed for user root
Nov 24 08:48:46 np0005533252.novalocal python3[6163]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 24 08:48:46 np0005533252.novalocal python3[6234]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes src=/home/zuul/.ansible/tmp/ansible-tmp-1763974126.2427723-252-120902294950723/source _original_basename=tmpmejic5dc follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 08:48:47 np0005533252.novalocal python3[6334]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 24 08:48:47 np0005533252.novalocal python3[6405]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes_private src=/home/zuul/.ansible/tmp/ansible-tmp-1763974126.9830725-302-276646997746038/source _original_basename=tmp1awe8xjc follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 08:48:48 np0005533252.novalocal sudo[6505]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rybusjugktryplytdwfnowdakbmemwet ; /usr/bin/python3'
Nov 24 08:48:48 np0005533252.novalocal sudo[6505]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 08:48:48 np0005533252.novalocal python3[6507]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/node_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 24 08:48:48 np0005533252.novalocal sudo[6505]: pam_unix(sudo:session): session closed for user root
Nov 24 08:48:48 np0005533252.novalocal sudo[6578]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dvqmqqbsmgirnqnccayoivyjqlvohbyj ; /usr/bin/python3'
Nov 24 08:48:48 np0005533252.novalocal sudo[6578]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 08:48:48 np0005533252.novalocal python3[6580]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/node_private src=/home/zuul/.ansible/tmp/ansible-tmp-1763974128.0552998-382-7086504848778/source _original_basename=tmpwbiy5m2i follow=False checksum=f07c805834277da0cbee63ff582683dc2ed910d5 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 08:48:48 np0005533252.novalocal sudo[6578]: pam_unix(sudo:session): session closed for user root
Nov 24 08:48:49 np0005533252.novalocal python3[6628]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa /etc/nodepool/id_rsa zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 08:48:49 np0005533252.novalocal python3[6654]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa.pub /etc/nodepool/id_rsa.pub zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 08:48:49 np0005533252.novalocal sudo[6732]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ueixvbrqfalgaqpclsldnmupmoodcjph ; /usr/bin/python3'
Nov 24 08:48:49 np0005533252.novalocal sudo[6732]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 08:48:49 np0005533252.novalocal python3[6734]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/zuul-sudo-grep follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 24 08:48:49 np0005533252.novalocal sudo[6732]: pam_unix(sudo:session): session closed for user root
Nov 24 08:48:50 np0005533252.novalocal sudo[6805]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ovfqidpvfcivxuelwjdchouivneoxpbx ; /usr/bin/python3'
Nov 24 08:48:50 np0005533252.novalocal sudo[6805]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 08:48:50 np0005533252.novalocal python3[6807]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/zuul-sudo-grep mode=288 src=/home/zuul/.ansible/tmp/ansible-tmp-1763974129.638929-452-181732642231147/source _original_basename=tmp6lujrq4x follow=False checksum=bdca1a77493d00fb51567671791f4aa30f66c2f0 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 08:48:50 np0005533252.novalocal sudo[6805]: pam_unix(sudo:session): session closed for user root
Nov 24 08:48:50 np0005533252.novalocal sudo[6856]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ulovboppbpyhukjvpyblrqwbksnqafbo ; /usr/bin/python3'
Nov 24 08:48:50 np0005533252.novalocal sudo[6856]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 08:48:50 np0005533252.novalocal python3[6858]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/visudo -c zuul_log_id=fa163ec2-ffbe-e0db-79ba-00000000001f-1-compute1 zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 08:48:50 np0005533252.novalocal sudo[6856]: pam_unix(sudo:session): session closed for user root
Nov 24 08:48:51 np0005533252.novalocal python3[6886]: ansible-ansible.legacy.command Invoked with executable=/bin/bash _raw_params=env _uses_shell=True zuul_log_id=fa163ec2-ffbe-e0db-79ba-000000000020-1-compute1 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None creates=None removes=None stdin=None
Nov 24 08:48:52 np0005533252.novalocal python3[6914]: ansible-file Invoked with path=/home/zuul/workspace state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 08:49:12 np0005533252.novalocal sudo[6938]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ygwxehpyjwxnqqisrbnfgugghqapwwrp ; /usr/bin/python3'
Nov 24 08:49:12 np0005533252.novalocal sudo[6938]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 08:49:12 np0005533252.novalocal python3[6940]: ansible-ansible.builtin.file Invoked with path=/etc/ci/env state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 08:49:12 np0005533252.novalocal sudo[6938]: pam_unix(sudo:session): session closed for user root
Nov 24 08:49:15 np0005533252.novalocal systemd[1]: systemd-timedated.service: Deactivated successfully.
Nov 24 08:50:12 np0005533252.novalocal sshd-session[4308]: Received disconnect from 38.102.83.114 port 43630:11: disconnected by user
Nov 24 08:50:12 np0005533252.novalocal sshd-session[4308]: Disconnected from user zuul 38.102.83.114 port 43630
Nov 24 08:50:12 np0005533252.novalocal sshd-session[4295]: pam_unix(sshd:session): session closed for user zuul
Nov 24 08:50:12 np0005533252.novalocal systemd-logind[823]: Session 1 logged out. Waiting for processes to exit.
Nov 24 08:50:20 np0005533252.novalocal systemd[4299]: Starting Mark boot as successful...
Nov 24 08:50:20 np0005533252.novalocal systemd[4299]: Finished Mark boot as successful.
Nov 24 08:50:22 np0005533252.novalocal kernel: pci 0000:00:07.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Nov 24 08:50:22 np0005533252.novalocal kernel: pci 0000:00:07.0: BAR 0 [io  0x0000-0x003f]
Nov 24 08:50:22 np0005533252.novalocal kernel: pci 0000:00:07.0: BAR 1 [mem 0x00000000-0x00000fff]
Nov 24 08:50:22 np0005533252.novalocal kernel: pci 0000:00:07.0: BAR 4 [mem 0x00000000-0x00003fff 64bit pref]
Nov 24 08:50:22 np0005533252.novalocal kernel: pci 0000:00:07.0: ROM [mem 0x00000000-0x0007ffff pref]
Nov 24 08:50:22 np0005533252.novalocal kernel: pci 0000:00:07.0: ROM [mem 0xc0000000-0xc007ffff pref]: assigned
Nov 24 08:50:22 np0005533252.novalocal kernel: pci 0000:00:07.0: BAR 4 [mem 0x240000000-0x240003fff 64bit pref]: assigned
Nov 24 08:50:22 np0005533252.novalocal kernel: pci 0000:00:07.0: BAR 1 [mem 0xc0080000-0xc0080fff]: assigned
Nov 24 08:50:22 np0005533252.novalocal kernel: pci 0000:00:07.0: BAR 0 [io  0x1000-0x103f]: assigned
Nov 24 08:50:22 np0005533252.novalocal kernel: virtio-pci 0000:00:07.0: enabling device (0000 -> 0003)
Nov 24 08:50:22 np0005533252.novalocal NetworkManager[858]: <info>  [1763974222.8217] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Nov 24 08:50:22 np0005533252.novalocal systemd-udevd[6945]: Network interface NamePolicy= disabled on kernel command line.
Nov 24 08:50:22 np0005533252.novalocal NetworkManager[858]: <info>  [1763974222.8356] device (eth1): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 24 08:50:22 np0005533252.novalocal NetworkManager[858]: <info>  [1763974222.8374] settings: (eth1): created default wired connection 'Wired connection 1'
Nov 24 08:50:22 np0005533252.novalocal NetworkManager[858]: <info>  [1763974222.8377] device (eth1): carrier: link connected
Nov 24 08:50:22 np0005533252.novalocal NetworkManager[858]: <info>  [1763974222.8378] device (eth1): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Nov 24 08:50:22 np0005533252.novalocal NetworkManager[858]: <info>  [1763974222.8384] policy: auto-activating connection 'Wired connection 1' (06cf09d6-5a4c-316f-86b1-330e0eaa7366)
Nov 24 08:50:22 np0005533252.novalocal NetworkManager[858]: <info>  [1763974222.8387] device (eth1): Activation: starting connection 'Wired connection 1' (06cf09d6-5a4c-316f-86b1-330e0eaa7366)
Nov 24 08:50:22 np0005533252.novalocal NetworkManager[858]: <info>  [1763974222.8387] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 24 08:50:22 np0005533252.novalocal NetworkManager[858]: <info>  [1763974222.8389] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 24 08:50:22 np0005533252.novalocal NetworkManager[858]: <info>  [1763974222.8391] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 24 08:50:22 np0005533252.novalocal NetworkManager[858]: <info>  [1763974222.8394] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
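
These NetworkManager lines walk the hot-plugged eth1 through its activation state machine (unmanaged -> unavailable -> disconnected -> prepare -> config -> ip-config, with the DHCP transaction starting in ip-config). A small sketch that extracts such transitions from journal lines of this shape (the regex assumes this exact message format):

    # Sketch: pull NetworkManager state transitions out of journal lines like
    # the "device (ethX): state change: A -> B" messages above.
    import re

    PATTERN = re.compile(r"device \((\S+)\): state change: (\S+) -> (\S+)")
    line = ("NetworkManager[858]: <info>  [1763974222.8378] device (eth1): "
            "state change: unavailable -> disconnected (reason 'carrier-changed', "
            "managed-type: 'full')")
    m = PATTERN.search(line)
    if m:
        dev, old, new = m.groups()
        print(f"{dev}: {old} -> {new}")   # eth1: unavailable -> disconnected
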
Nov 24 08:50:23 np0005533252.novalocal sshd-session[6948]: Accepted publickey for zuul from 38.102.83.114 port 46458 ssh2: RSA SHA256:UBnduE29/r4JICQE22jchpBfdroBtCYqENielfKVzAM
Nov 24 08:50:23 np0005533252.novalocal systemd-logind[823]: New session 3 of user zuul.
Nov 24 08:50:23 np0005533252.novalocal systemd[1]: Started Session 3 of User zuul.
Nov 24 08:50:23 np0005533252.novalocal sshd-session[6948]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 24 08:50:24 np0005533252.novalocal python3[6975]: ansible-ansible.legacy.command Invoked with _raw_params=ip -j link zuul_log_id=fa163ec2-ffbe-0f51-775e-00000000018f-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 08:50:33 np0005533252.novalocal sudo[7053]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rmvyzeuemmvcipbjlyddnqhqeodkrmjm ; OS_CLOUD=vexxhost /usr/bin/python3'
Nov 24 08:50:33 np0005533252.novalocal sudo[7053]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 08:50:34 np0005533252.novalocal python3[7055]: ansible-ansible.legacy.stat Invoked with path=/etc/NetworkManager/system-connections/ci-private-network.nmconnection follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 24 08:50:34 np0005533252.novalocal sudo[7053]: pam_unix(sudo:session): session closed for user root
Nov 24 08:50:34 np0005533252.novalocal sudo[7126]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oycbgtqsdvlsglcznfjrcnrfaktavmda ; OS_CLOUD=vexxhost /usr/bin/python3'
Nov 24 08:50:34 np0005533252.novalocal sudo[7126]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 08:50:34 np0005533252.novalocal python3[7128]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1763974233.7433674-155-265954914046204/source dest=/etc/NetworkManager/system-connections/ci-private-network.nmconnection mode=0600 owner=root group=root follow=False _original_basename=bootstrap-ci-network-nm-connection.nmconnection.j2 checksum=a8172c646497a56a59ad1405f1e405cb26f97005 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 08:50:34 np0005533252.novalocal sudo[7126]: pam_unix(sudo:session): session closed for user root
Nov 24 08:50:34 np0005533252.novalocal sudo[7176]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nyjkalkjhqoioleeluuebnfnjlwicetk ; OS_CLOUD=vexxhost /usr/bin/python3'
Nov 24 08:50:34 np0005533252.novalocal sudo[7176]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 08:50:34 np0005533252.novalocal python3[7178]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 24 08:50:34 np0005533252.novalocal systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Nov 24 08:50:34 np0005533252.novalocal systemd[1]: Stopped Network Manager Wait Online.
Nov 24 08:50:34 np0005533252.novalocal systemd[1]: Stopping Network Manager Wait Online...
Nov 24 08:50:34 np0005533252.novalocal systemd[1]: Stopping Network Manager...
Nov 24 08:50:34 np0005533252.novalocal NetworkManager[858]: <info>  [1763974234.9384] caught SIGTERM, shutting down normally.
Nov 24 08:50:34 np0005533252.novalocal NetworkManager[858]: <info>  [1763974234.9395] dhcp4 (eth0): canceled DHCP transaction
Nov 24 08:50:34 np0005533252.novalocal NetworkManager[858]: <info>  [1763974234.9396] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Nov 24 08:50:34 np0005533252.novalocal NetworkManager[858]: <info>  [1763974234.9396] dhcp4 (eth0): state changed no lease
Nov 24 08:50:34 np0005533252.novalocal NetworkManager[858]: <info>  [1763974234.9397] manager: NetworkManager state is now CONNECTING
Nov 24 08:50:34 np0005533252.novalocal NetworkManager[858]: <info>  [1763974234.9535] dhcp4 (eth1): canceled DHCP transaction
Nov 24 08:50:34 np0005533252.novalocal NetworkManager[858]: <info>  [1763974234.9535] dhcp4 (eth1): state changed no lease
Nov 24 08:50:34 np0005533252.novalocal systemd[1]: Starting Network Manager Script Dispatcher Service...
Nov 24 08:50:34 np0005533252.novalocal NetworkManager[858]: <info>  [1763974234.9612] exiting (success)
Nov 24 08:50:34 np0005533252.novalocal systemd[1]: Started Network Manager Script Dispatcher Service.
Nov 24 08:50:34 np0005533252.novalocal systemd[1]: NetworkManager.service: Deactivated successfully.
Nov 24 08:50:34 np0005533252.novalocal systemd[1]: Stopped Network Manager.
Nov 24 08:50:34 np0005533252.novalocal systemd[1]: Starting Network Manager...
Nov 24 08:50:35 np0005533252.novalocal NetworkManager[7190]: <info>  [1763974235.0093] NetworkManager (version 1.54.1-1.el9) is starting... (after a restart, boot:e3886539-ea72-4427-b33b-0060f8fadd32)
Nov 24 08:50:35 np0005533252.novalocal NetworkManager[7190]: <info>  [1763974235.0095] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Nov 24 08:50:35 np0005533252.novalocal NetworkManager[7190]: <info>  [1763974235.0144] manager[0x559987aa7070]: monitoring kernel firmware directory '/lib/firmware'.
Nov 24 08:50:35 np0005533252.novalocal systemd[1]: Starting Hostname Service...
Nov 24 08:50:35 np0005533252.novalocal systemd[1]: Started Hostname Service.
Nov 24 08:50:35 np0005533252.novalocal NetworkManager[7190]: <info>  [1763974235.0721] hostname: hostname: using hostnamed
Nov 24 08:50:35 np0005533252.novalocal NetworkManager[7190]: <info>  [1763974235.0721] hostname: static hostname changed from (none) to "np0005533252.novalocal"
Nov 24 08:50:35 np0005533252.novalocal NetworkManager[7190]: <info>  [1763974235.0724] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Nov 24 08:50:35 np0005533252.novalocal NetworkManager[7190]: <info>  [1763974235.0728] manager[0x559987aa7070]: rfkill: Wi-Fi hardware radio set enabled
Nov 24 08:50:35 np0005533252.novalocal NetworkManager[7190]: <info>  [1763974235.0728] manager[0x559987aa7070]: rfkill: WWAN hardware radio set enabled
Nov 24 08:50:35 np0005533252.novalocal NetworkManager[7190]: <info>  [1763974235.0752] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-team.so)
Nov 24 08:50:35 np0005533252.novalocal NetworkManager[7190]: <info>  [1763974235.0752] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Nov 24 08:50:35 np0005533252.novalocal NetworkManager[7190]: <info>  [1763974235.0753] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Nov 24 08:50:35 np0005533252.novalocal NetworkManager[7190]: <info>  [1763974235.0754] manager: Networking is enabled by state file
Nov 24 08:50:35 np0005533252.novalocal NetworkManager[7190]: <info>  [1763974235.0756] settings: Loaded settings plugin: keyfile (internal)
Nov 24 08:50:35 np0005533252.novalocal NetworkManager[7190]: <info>  [1763974235.0759] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Nov 24 08:50:35 np0005533252.novalocal NetworkManager[7190]: <info>  [1763974235.0785] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Nov 24 08:50:35 np0005533252.novalocal NetworkManager[7190]: <info>  [1763974235.0793] dhcp: init: Using DHCP client 'internal'
Nov 24 08:50:35 np0005533252.novalocal NetworkManager[7190]: <info>  [1763974235.0795] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Nov 24 08:50:35 np0005533252.novalocal NetworkManager[7190]: <info>  [1763974235.0801] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 24 08:50:35 np0005533252.novalocal NetworkManager[7190]: <info>  [1763974235.0806] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Nov 24 08:50:35 np0005533252.novalocal NetworkManager[7190]: <info>  [1763974235.0813] device (lo): Activation: starting connection 'lo' (3dc9a73f-5008-4d54-b1f5-ae0263930821)
Nov 24 08:50:35 np0005533252.novalocal NetworkManager[7190]: <info>  [1763974235.0819] device (eth0): carrier: link connected
Nov 24 08:50:35 np0005533252.novalocal NetworkManager[7190]: <info>  [1763974235.0823] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Nov 24 08:50:35 np0005533252.novalocal NetworkManager[7190]: <info>  [1763974235.0827] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Nov 24 08:50:35 np0005533252.novalocal NetworkManager[7190]: <info>  [1763974235.0828] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Nov 24 08:50:35 np0005533252.novalocal NetworkManager[7190]: <info>  [1763974235.0833] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Nov 24 08:50:35 np0005533252.novalocal NetworkManager[7190]: <info>  [1763974235.0838] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Nov 24 08:50:35 np0005533252.novalocal NetworkManager[7190]: <info>  [1763974235.0842] device (eth1): carrier: link connected
Nov 24 08:50:35 np0005533252.novalocal NetworkManager[7190]: <info>  [1763974235.0846] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Nov 24 08:50:35 np0005533252.novalocal NetworkManager[7190]: <info>  [1763974235.0849] manager: (eth1): assume: will attempt to assume matching connection 'Wired connection 1' (06cf09d6-5a4c-316f-86b1-330e0eaa7366) (indicated)
Nov 24 08:50:35 np0005533252.novalocal NetworkManager[7190]: <info>  [1763974235.0849] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Nov 24 08:50:35 np0005533252.novalocal NetworkManager[7190]: <info>  [1763974235.0853] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Nov 24 08:50:35 np0005533252.novalocal NetworkManager[7190]: <info>  [1763974235.0858] device (eth1): Activation: starting connection 'Wired connection 1' (06cf09d6-5a4c-316f-86b1-330e0eaa7366)
Nov 24 08:50:35 np0005533252.novalocal systemd[1]: Started Network Manager.
Nov 24 08:50:35 np0005533252.novalocal NetworkManager[7190]: <info>  [1763974235.0864] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Nov 24 08:50:35 np0005533252.novalocal NetworkManager[7190]: <info>  [1763974235.0871] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Nov 24 08:50:35 np0005533252.novalocal NetworkManager[7190]: <info>  [1763974235.0873] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Nov 24 08:50:35 np0005533252.novalocal NetworkManager[7190]: <info>  [1763974235.0874] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Nov 24 08:50:35 np0005533252.novalocal NetworkManager[7190]: <info>  [1763974235.0877] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Nov 24 08:50:35 np0005533252.novalocal NetworkManager[7190]: <info>  [1763974235.0879] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Nov 24 08:50:35 np0005533252.novalocal NetworkManager[7190]: <info>  [1763974235.0880] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Nov 24 08:50:35 np0005533252.novalocal NetworkManager[7190]: <info>  [1763974235.0883] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Nov 24 08:50:35 np0005533252.novalocal NetworkManager[7190]: <info>  [1763974235.0885] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Nov 24 08:50:35 np0005533252.novalocal NetworkManager[7190]: <info>  [1763974235.0891] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Nov 24 08:50:35 np0005533252.novalocal NetworkManager[7190]: <info>  [1763974235.0894] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Nov 24 08:50:35 np0005533252.novalocal NetworkManager[7190]: <info>  [1763974235.0901] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Nov 24 08:50:35 np0005533252.novalocal NetworkManager[7190]: <info>  [1763974235.0903] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Nov 24 08:50:35 np0005533252.novalocal NetworkManager[7190]: <info>  [1763974235.0920] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Nov 24 08:50:35 np0005533252.novalocal NetworkManager[7190]: <info>  [1763974235.0922] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Nov 24 08:50:35 np0005533252.novalocal NetworkManager[7190]: <info>  [1763974235.0926] device (lo): Activation: successful, device activated.
Nov 24 08:50:35 np0005533252.novalocal NetworkManager[7190]: <info>  [1763974235.0932] dhcp4 (eth0): state changed new lease, address=38.129.56.228
Nov 24 08:50:35 np0005533252.novalocal NetworkManager[7190]: <info>  [1763974235.0937] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Nov 24 08:50:35 np0005533252.novalocal NetworkManager[7190]: <info>  [1763974235.0998] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Nov 24 08:50:35 np0005533252.novalocal systemd[1]: Starting Network Manager Wait Online...
Nov 24 08:50:35 np0005533252.novalocal NetworkManager[7190]: <info>  [1763974235.1059] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Nov 24 08:50:35 np0005533252.novalocal NetworkManager[7190]: <info>  [1763974235.1062] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Nov 24 08:50:35 np0005533252.novalocal NetworkManager[7190]: <info>  [1763974235.1065] manager: NetworkManager state is now CONNECTED_SITE
Nov 24 08:50:35 np0005533252.novalocal NetworkManager[7190]: <info>  [1763974235.1070] device (eth0): Activation: successful, device activated.
Nov 24 08:50:35 np0005533252.novalocal NetworkManager[7190]: <info>  [1763974235.1076] manager: NetworkManager state is now CONNECTED_GLOBAL
Nov 24 08:50:35 np0005533252.novalocal sudo[7176]: pam_unix(sudo:session): session closed for user root
Nov 24 08:50:35 np0005533252.novalocal python3[7262]: ansible-ansible.legacy.command Invoked with _raw_params=ip route zuul_log_id=fa163ec2-ffbe-0f51-775e-0000000000c8-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 08:50:45 np0005533252.novalocal systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Nov 24 08:51:05 np0005533252.novalocal systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Nov 24 08:51:06 np0005533252.novalocal chronyd[831]: Selected source 216.197.156.83 (2.centos.pool.ntp.org)
Nov 24 08:51:20 np0005533252.novalocal NetworkManager[7190]: <info>  [1763974280.3918] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Nov 24 08:51:20 np0005533252.novalocal systemd[1]: Starting Network Manager Script Dispatcher Service...
Nov 24 08:51:20 np0005533252.novalocal systemd[1]: Started Network Manager Script Dispatcher Service.
Nov 24 08:51:20 np0005533252.novalocal NetworkManager[7190]: <info>  [1763974280.4153] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Nov 24 08:51:20 np0005533252.novalocal NetworkManager[7190]: <info>  [1763974280.4156] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Nov 24 08:51:20 np0005533252.novalocal NetworkManager[7190]: <info>  [1763974280.4165] device (eth1): Activation: successful, device activated.
Nov 24 08:51:20 np0005533252.novalocal NetworkManager[7190]: <info>  [1763974280.4175] manager: startup complete
Nov 24 08:51:20 np0005533252.novalocal NetworkManager[7190]: <info>  [1763974280.4179] device (eth1): state change: activated -> failed (reason 'ip-config-unavailable', managed-type: 'full')
Nov 24 08:51:20 np0005533252.novalocal NetworkManager[7190]: <warn>  [1763974280.4188] device (eth1): Activation: failed for connection 'Wired connection 1'
Nov 24 08:51:20 np0005533252.novalocal NetworkManager[7190]: <info>  [1763974280.4198] device (eth1): state change: failed -> disconnected (reason 'none', managed-type: 'full')
Nov 24 08:51:20 np0005533252.novalocal systemd[1]: Finished Network Manager Wait Online.
Nov 24 08:51:20 np0005533252.novalocal NetworkManager[7190]: <info>  [1763974280.4275] dhcp4 (eth1): canceled DHCP transaction
Nov 24 08:51:20 np0005533252.novalocal NetworkManager[7190]: <info>  [1763974280.4275] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Nov 24 08:51:20 np0005533252.novalocal NetworkManager[7190]: <info>  [1763974280.4276] dhcp4 (eth1): state changed no lease
Nov 24 08:51:20 np0005533252.novalocal NetworkManager[7190]: <info>  [1763974280.4294] policy: auto-activating connection 'ci-private-network' (eed6ff3f-ed68-533f-b181-f50564eca501)
Nov 24 08:51:20 np0005533252.novalocal NetworkManager[7190]: <info>  [1763974280.4299] device (eth1): Activation: starting connection 'ci-private-network' (eed6ff3f-ed68-533f-b181-f50564eca501)
Nov 24 08:51:20 np0005533252.novalocal NetworkManager[7190]: <info>  [1763974280.4300] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 24 08:51:20 np0005533252.novalocal NetworkManager[7190]: <info>  [1763974280.4303] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 24 08:51:20 np0005533252.novalocal NetworkManager[7190]: <info>  [1763974280.4312] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 24 08:51:20 np0005533252.novalocal NetworkManager[7190]: <info>  [1763974280.4320] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 24 08:51:20 np0005533252.novalocal NetworkManager[7190]: <info>  [1763974280.4368] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 24 08:51:20 np0005533252.novalocal NetworkManager[7190]: <info>  [1763974280.4370] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 24 08:51:20 np0005533252.novalocal NetworkManager[7190]: <info>  [1763974280.4375] device (eth1): Activation: successful, device activated.
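The eth1 exchange above is NetworkManager's autoconnect fallback at work: the assumed profile 'Wired connection 1' sat in ip-config for the full 45-second DHCP window (which is also what held up Network Manager Wait Online), was failed with reason 'ip-config-unavailable' once startup completed, and NM then auto-activated 'ci-private-network' in its place. A hedged sketch for inspecting the result, using only profile names that appear in this log:

    # Which profile each device finally settled on
    nmcli -f DEVICE,STATE,CONNECTION device status
    # Autoconnect settings of the fallback profile
    nmcli -f connection.id,connection.autoconnect-priority connection show "ci-private-network"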
Nov 24 08:51:30 np0005533252.novalocal systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Nov 24 08:51:35 np0005533252.novalocal sshd-session[6951]: Received disconnect from 38.102.83.114 port 46458:11: disconnected by user
Nov 24 08:51:35 np0005533252.novalocal sshd-session[6951]: Disconnected from user zuul 38.102.83.114 port 46458
Nov 24 08:51:35 np0005533252.novalocal sshd-session[6948]: pam_unix(sshd:session): session closed for user zuul
Nov 24 08:51:35 np0005533252.novalocal systemd[1]: session-3.scope: Deactivated successfully.
Nov 24 08:51:35 np0005533252.novalocal systemd[1]: session-3.scope: Consumed 1.388s CPU time.
Nov 24 08:51:35 np0005533252.novalocal systemd-logind[823]: Session 3 logged out. Waiting for processes to exit.
Nov 24 08:51:35 np0005533252.novalocal systemd-logind[823]: Removed session 3.
Nov 24 08:52:31 np0005533252.novalocal sshd-session[7291]: Accepted publickey for zuul from 38.102.83.114 port 59430 ssh2: RSA SHA256:UBnduE29/r4JICQE22jchpBfdroBtCYqENielfKVzAM
Nov 24 08:52:31 np0005533252.novalocal systemd-logind[823]: New session 4 of user zuul.
Nov 24 08:52:31 np0005533252.novalocal systemd[1]: Started Session 4 of User zuul.
Nov 24 08:52:31 np0005533252.novalocal sshd-session[7291]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 24 08:52:31 np0005533252.novalocal sudo[7370]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vazsjdhwifasrvgwipbwbllynehxgtve ; OS_CLOUD=vexxhost /usr/bin/python3'
Nov 24 08:52:31 np0005533252.novalocal sudo[7370]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 08:52:31 np0005533252.novalocal python3[7372]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/env/networking-info.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 24 08:52:32 np0005533252.novalocal sudo[7370]: pam_unix(sudo:session): session closed for user root
Nov 24 08:52:32 np0005533252.novalocal sudo[7443]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iqccdkooybumgjluglqdcjgbblukzrbq ; OS_CLOUD=vexxhost /usr/bin/python3'
Nov 24 08:52:32 np0005533252.novalocal sudo[7443]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 08:52:32 np0005533252.novalocal python3[7445]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/env/networking-info.yml owner=root group=root mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1763974351.7055833-373-3622969103097/source _original_basename=tmpd772uug3 follow=False checksum=a3ebf95cc3e4718aba4e7a218d4b9424c08a2ec8 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 08:52:32 np0005533252.novalocal sudo[7443]: pam_unix(sudo:session): session closed for user root
Nov 24 08:52:35 np0005533252.novalocal sshd-session[7294]: Connection closed by 38.102.83.114 port 59430
Nov 24 08:52:35 np0005533252.novalocal sshd-session[7291]: pam_unix(sshd:session): session closed for user zuul
Nov 24 08:52:35 np0005533252.novalocal systemd[1]: session-4.scope: Deactivated successfully.
Nov 24 08:52:35 np0005533252.novalocal systemd-logind[823]: Session 4 logged out. Waiting for processes to exit.
Nov 24 08:52:35 np0005533252.novalocal systemd-logind[823]: Removed session 4.
Nov 24 08:53:20 np0005533252.novalocal systemd[4299]: Created slice User Background Tasks Slice.
Nov 24 08:53:20 np0005533252.novalocal systemd[4299]: Starting Cleanup of User's Temporary Files and Directories...
Nov 24 08:53:20 np0005533252.novalocal systemd[4299]: Finished Cleanup of User's Temporary Files and Directories.
Nov 24 08:57:49 np0005533252.novalocal sshd-session[7476]: Accepted publickey for zuul from 38.102.83.114 port 44786 ssh2: RSA SHA256:UBnduE29/r4JICQE22jchpBfdroBtCYqENielfKVzAM
Nov 24 08:57:49 np0005533252.novalocal systemd-logind[823]: New session 5 of user zuul.
Nov 24 08:57:49 np0005533252.novalocal systemd[1]: Started Session 5 of User zuul.
Nov 24 08:57:50 np0005533252.novalocal sshd-session[7476]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 24 08:57:50 np0005533252.novalocal sudo[7503]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qoozkskxnkotqncghbhmgzuqwncjxiij ; /usr/bin/python3'
Nov 24 08:57:50 np0005533252.novalocal sudo[7503]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 08:57:50 np0005533252.novalocal python3[7505]: ansible-ansible.legacy.command Invoked with _raw_params=lsblk -nd -o MAJ:MIN /dev/vda
                                                       _uses_shell=True zuul_log_id=fa163ec2-ffbe-be4d-b146-000000001cd2-1-compute1 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 08:57:50 np0005533252.novalocal sudo[7503]: pam_unix(sudo:session): session closed for user root
Nov 24 08:57:50 np0005533252.novalocal sudo[7531]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uztbnmtkrfbgkpctqspgptsozxesxsdv ; /usr/bin/python3'
Nov 24 08:57:50 np0005533252.novalocal sudo[7531]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 08:57:50 np0005533252.novalocal python3[7533]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/init.scope state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 08:57:50 np0005533252.novalocal sudo[7531]: pam_unix(sudo:session): session closed for user root
Nov 24 08:57:50 np0005533252.novalocal sudo[7558]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-piuwcjwbdunrqstsqdrylrvmvuveohls ; /usr/bin/python3'
Nov 24 08:57:50 np0005533252.novalocal sudo[7558]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 08:57:50 np0005533252.novalocal python3[7560]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/machine.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 08:57:50 np0005533252.novalocal sudo[7558]: pam_unix(sudo:session): session closed for user root
Nov 24 08:57:51 np0005533252.novalocal sudo[7584]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ucmtkmaprlfnfnccxgwufubbqoogtcer ; /usr/bin/python3'
Nov 24 08:57:51 np0005533252.novalocal sudo[7584]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 08:57:51 np0005533252.novalocal python3[7586]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/system.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 08:57:51 np0005533252.novalocal sudo[7584]: pam_unix(sudo:session): session closed for user root
Nov 24 08:57:51 np0005533252.novalocal sudo[7610]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kbxljtntmzwrlrxckjykwmgbpbgmwdur ; /usr/bin/python3'
Nov 24 08:57:51 np0005533252.novalocal sudo[7610]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 08:57:51 np0005533252.novalocal python3[7612]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/user.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 08:57:51 np0005533252.novalocal sudo[7610]: pam_unix(sudo:session): session closed for user root
Nov 24 08:57:52 np0005533252.novalocal sudo[7636]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pcjwlfuhwnbwlzglsqqfbmbasztdfgts ; /usr/bin/python3'
Nov 24 08:57:52 np0005533252.novalocal sudo[7636]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 08:57:52 np0005533252.novalocal python3[7638]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system.conf.d state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 08:57:52 np0005533252.novalocal sudo[7636]: pam_unix(sudo:session): session closed for user root
Nov 24 08:57:52 np0005533252.novalocal sudo[7714]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-funwbsgrmuautlvhjykirfkfmobxrxpj ; /usr/bin/python3'
Nov 24 08:57:52 np0005533252.novalocal sudo[7714]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 08:57:52 np0005533252.novalocal python3[7716]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system.conf.d/override.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 24 08:57:52 np0005533252.novalocal sudo[7714]: pam_unix(sudo:session): session closed for user root
Nov 24 08:57:53 np0005533252.novalocal sudo[7787]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iuetkseqmrhdcqafvrmhdvritymrnzyb ; /usr/bin/python3'
Nov 24 08:57:53 np0005533252.novalocal sudo[7787]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 08:57:53 np0005533252.novalocal python3[7789]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system.conf.d/override.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1763974672.7094579-509-24398722470962/source _original_basename=tmpzak2sfrq follow=False checksum=a05098bd3d2321238ea1169d0e6f135b35b392d4 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 08:57:53 np0005533252.novalocal sudo[7787]: pam_unix(sudo:session): session closed for user root
Nov 24 08:57:54 np0005533252.novalocal sudo[7837]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yrxuafmlpzmvtapurfhrrmiolpznohid ; /usr/bin/python3'
Nov 24 08:57:54 np0005533252.novalocal sudo[7837]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 08:57:54 np0005533252.novalocal python3[7839]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 24 08:57:54 np0005533252.novalocal systemd[1]: Reloading.
Nov 24 08:57:54 np0005533252.novalocal systemd-rc-local-generator[7860]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 08:57:54 np0005533252.novalocal sudo[7837]: pam_unix(sudo:session): session closed for user root
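The three tasks above create /etc/systemd/system.conf.d, install an override.conf whose payload is masked in the log (content=NOT_LOGGING_PARAMETER; only a sha1 checksum survives), and issue the daemon reload seen as 'Reloading.'. A hand-run equivalent; the [Manager] setting is a hypothetical stand-in for the unlogged file:

    mkdir -p /etc/systemd/system.conf.d
    # hypothetical drop-in contents; the real override.conf is not logged
    printf '[Manager]\nDefaultLimitNOFILE=65535\n' > /etc/systemd/system.conf.d/override.conf
    systemctl daemon-reload    # what the systemd_service daemon_reload=True task performs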
Nov 24 08:57:56 np0005533252.novalocal sudo[7893]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dkpqembzsgivxoqzhutavpeoaguedchk ; /usr/bin/python3'
Nov 24 08:57:56 np0005533252.novalocal sudo[7893]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 08:57:56 np0005533252.novalocal python3[7895]: ansible-ansible.builtin.wait_for Invoked with path=/sys/fs/cgroup/system.slice/io.max state=present timeout=30 host=127.0.0.1 connect_timeout=5 delay=0 active_connection_states=['ESTABLISHED', 'FIN_WAIT1', 'FIN_WAIT2', 'SYN_RECV', 'SYN_SENT', 'TIME_WAIT'] sleep=1 port=None search_regex=None exclude_hosts=None msg=None
Nov 24 08:57:56 np0005533252.novalocal sudo[7893]: pam_unix(sudo:session): session closed for user root
Nov 24 08:57:56 np0005533252.novalocal sudo[7919]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nwopfxfczxkbltsjidqngzboxhfjduwa ; /usr/bin/python3'
Nov 24 08:57:56 np0005533252.novalocal sudo[7919]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 08:57:56 np0005533252.novalocal python3[7921]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/init.scope/io.max
                                                       _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 08:57:56 np0005533252.novalocal sudo[7919]: pam_unix(sudo:session): session closed for user root
Nov 24 08:57:56 np0005533252.novalocal sudo[7947]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ttfwsdwqvjazwqryzggctukstzihumrf ; /usr/bin/python3'
Nov 24 08:57:56 np0005533252.novalocal sudo[7947]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 08:57:56 np0005533252.novalocal python3[7949]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/machine.slice/io.max
                                                       _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 08:57:56 np0005533252.novalocal sudo[7947]: pam_unix(sudo:session): session closed for user root
Nov 24 08:57:56 np0005533252.novalocal sudo[7975]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aufzmhflbodjbrmmywbyuoujgutzcapv ; /usr/bin/python3'
Nov 24 08:57:56 np0005533252.novalocal sudo[7975]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 08:57:57 np0005533252.novalocal python3[7977]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/system.slice/io.max
                                                       _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 08:57:57 np0005533252.novalocal sudo[7975]: pam_unix(sudo:session): session closed for user root
Nov 24 08:57:57 np0005533252.novalocal sudo[8003]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-safuhxlbgaqcgcnxrtsznucjmchjtdbf ; /usr/bin/python3'
Nov 24 08:57:57 np0005533252.novalocal sudo[8003]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 08:57:57 np0005533252.novalocal python3[8005]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/user.slice/io.max
                                                       _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 08:57:57 np0005533252.novalocal sudo[8003]: pam_unix(sudo:session): session closed for user root
Nov 24 08:57:58 np0005533252.novalocal python3[8032]: ansible-ansible.legacy.command Invoked with _raw_params=echo "init";    cat /sys/fs/cgroup/init.scope/io.max; echo "machine"; cat /sys/fs/cgroup/machine.slice/io.max; echo "system";  cat /sys/fs/cgroup/system.slice/io.max; echo "user";    cat /sys/fs/cgroup/user.slice/io.max;
                                                       _uses_shell=True zuul_log_id=fa163ec2-ffbe-be4d-b146-000000001cd9-1-compute1 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
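Taken together, the shell tasks since 08:57:56 apply one cgroup-v2 I/O throttle per top-level group: 252:0 is the MAJ:MIN of /dev/vda returned by the earlier lsblk task, and each io.max write caps the group at 18000 read and write IOPS and 262144000 B/s (250 MiB/s) in each direction, after which the final task reads all four files back. Condensed into one hedged loop, assuming cgroup v2 is mounted at /sys/fs/cgroup with the io controller enabled on these groups:

    majmin=$(lsblk -nd -o MAJ:MIN /dev/vda | tr -d ' ')    # "252:0" on this host
    for grp in init.scope machine.slice system.slice user.slice; do
        echo "$majmin riops=18000 wiops=18000 rbps=262144000 wbps=262144000" \
            > "/sys/fs/cgroup/$grp/io.max"
    done
    cat /sys/fs/cgroup/system.slice/io.max    # read back, as the verification task does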
Nov 24 08:57:58 np0005533252.novalocal python3[8062]: ansible-ansible.builtin.stat Invoked with path=/sys/fs/cgroup/kubepods.slice/io.max follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Nov 24 08:58:01 np0005533252.novalocal sshd-session[7479]: Connection closed by 38.102.83.114 port 44786
Nov 24 08:58:01 np0005533252.novalocal sshd-session[7476]: pam_unix(sshd:session): session closed for user zuul
Nov 24 08:58:01 np0005533252.novalocal systemd[1]: session-5.scope: Deactivated successfully.
Nov 24 08:58:01 np0005533252.novalocal systemd[1]: session-5.scope: Consumed 3.821s CPU time.
Nov 24 08:58:01 np0005533252.novalocal systemd-logind[823]: Session 5 logged out. Waiting for processes to exit.
Nov 24 08:58:01 np0005533252.novalocal systemd-logind[823]: Removed session 5.
Nov 24 08:58:03 np0005533252.novalocal sshd-session[8067]: Accepted publickey for zuul from 38.102.83.114 port 57022 ssh2: RSA SHA256:UBnduE29/r4JICQE22jchpBfdroBtCYqENielfKVzAM
Nov 24 08:58:03 np0005533252.novalocal systemd-logind[823]: New session 6 of user zuul.
Nov 24 08:58:03 np0005533252.novalocal systemd[1]: Started Session 6 of User zuul.
Nov 24 08:58:03 np0005533252.novalocal sshd-session[8067]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 24 08:58:03 np0005533252.novalocal sudo[8094]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nolvcvkslmhefkwrwoqgltdgnlrtbuwx ; /usr/bin/python3'
Nov 24 08:58:03 np0005533252.novalocal sudo[8094]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 08:58:03 np0005533252.novalocal python3[8096]: ansible-ansible.legacy.dnf Invoked with name=['podman', 'buildah'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Nov 24 08:58:17 np0005533252.novalocal kernel: SELinux:  Converting 385 SID table entries...
Nov 24 08:58:17 np0005533252.novalocal kernel: SELinux:  policy capability network_peer_controls=1
Nov 24 08:58:17 np0005533252.novalocal kernel: SELinux:  policy capability open_perms=1
Nov 24 08:58:17 np0005533252.novalocal kernel: SELinux:  policy capability extended_socket_class=1
Nov 24 08:58:17 np0005533252.novalocal kernel: SELinux:  policy capability always_check_network=0
Nov 24 08:58:17 np0005533252.novalocal kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 24 08:58:17 np0005533252.novalocal kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 24 08:58:17 np0005533252.novalocal kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 24 08:58:26 np0005533252.novalocal kernel: SELinux:  Converting 385 SID table entries...
Nov 24 08:58:26 np0005533252.novalocal kernel: SELinux:  policy capability network_peer_controls=1
Nov 24 08:58:26 np0005533252.novalocal kernel: SELinux:  policy capability open_perms=1
Nov 24 08:58:26 np0005533252.novalocal kernel: SELinux:  policy capability extended_socket_class=1
Nov 24 08:58:26 np0005533252.novalocal kernel: SELinux:  policy capability always_check_network=0
Nov 24 08:58:26 np0005533252.novalocal kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 24 08:58:26 np0005533252.novalocal kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 24 08:58:26 np0005533252.novalocal kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 24 08:58:35 np0005533252.novalocal kernel: SELinux:  Converting 385 SID table entries...
Nov 24 08:58:35 np0005533252.novalocal kernel: SELinux:  policy capability network_peer_controls=1
Nov 24 08:58:35 np0005533252.novalocal kernel: SELinux:  policy capability open_perms=1
Nov 24 08:58:35 np0005533252.novalocal kernel: SELinux:  policy capability extended_socket_class=1
Nov 24 08:58:35 np0005533252.novalocal kernel: SELinux:  policy capability always_check_network=0
Nov 24 08:58:35 np0005533252.novalocal kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 24 08:58:35 np0005533252.novalocal kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 24 08:58:35 np0005533252.novalocal kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 24 08:58:36 np0005533252.novalocal setsebool[8164]: The virt_use_nfs policy boolean was changed to 1 by root
Nov 24 08:58:36 np0005533252.novalocal setsebool[8164]: The virt_sandbox_use_all_caps policy boolean was changed to 1 by root
Nov 24 08:58:47 np0005533252.novalocal kernel: SELinux:  Converting 388 SID table entries...
Nov 24 08:58:47 np0005533252.novalocal kernel: SELinux:  policy capability network_peer_controls=1
Nov 24 08:58:47 np0005533252.novalocal kernel: SELinux:  policy capability open_perms=1
Nov 24 08:58:47 np0005533252.novalocal kernel: SELinux:  policy capability extended_socket_class=1
Nov 24 08:58:47 np0005533252.novalocal kernel: SELinux:  policy capability always_check_network=0
Nov 24 08:58:47 np0005533252.novalocal kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 24 08:58:47 np0005533252.novalocal kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 24 08:58:47 np0005533252.novalocal kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
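The two setsebool lines above fall inside the podman/buildah install, apparently from the container-selinux package scriptlets, and the SID-table conversion that follows at 08:58:47 is what a persistent boolean change looks like, since it rebuilds the loaded policy. The manual equivalent, hedged on whether persistence (-P) was actually requested:

    setsebool -P virt_use_nfs 1
    setsebool -P virt_sandbox_use_all_caps 1
    getsebool virt_use_nfs virt_sandbox_use_all_caps    # verify both flips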
Nov 24 08:59:05 np0005533252.novalocal dbus-broker-launch[809]: avc:  op=load_policy lsm=selinux seqno=6 res=1
Nov 24 08:59:05 np0005533252.novalocal systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 24 08:59:05 np0005533252.novalocal systemd[1]: Starting man-db-cache-update.service...
Nov 24 08:59:05 np0005533252.novalocal systemd[1]: Reloading.
Nov 24 08:59:05 np0005533252.novalocal systemd-rc-local-generator[8918]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 08:59:05 np0005533252.novalocal systemd[1]: Queuing reload/restart jobs for marked units…
Nov 24 08:59:06 np0005533252.novalocal sudo[8094]: pam_unix(sudo:session): session closed for user root
Nov 24 08:59:42 np0005533252.novalocal irqbalance[818]: Cannot change IRQ 27 affinity: Operation not permitted
Nov 24 08:59:42 np0005533252.novalocal irqbalance[818]: IRQ 27 affinity is now unmanaged
Nov 24 08:59:44 np0005533252.novalocal systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 24 08:59:44 np0005533252.novalocal systemd[1]: Finished man-db-cache-update.service.
Nov 24 08:59:44 np0005533252.novalocal systemd[1]: man-db-cache-update.service: Consumed 45.952s CPU time.
Nov 24 08:59:44 np0005533252.novalocal systemd[1]: run-r03dbc781c11649fba267c33d387bf279.service: Deactivated successfully.
Nov 24 08:59:59 np0005533252.novalocal python3[29485]: ansible-ansible.legacy.command Invoked with _raw_params=echo "openstack-k8s-operators+cirobot"
                                                        _uses_shell=True zuul_log_id=fa163ec2-ffbe-41c3-2628-00000000000c-1-compute1 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 09:00:00 np0005533252.novalocal kernel: evm: overlay not supported
Nov 24 09:00:00 np0005533252.novalocal systemd[4299]: Starting D-Bus User Message Bus...
Nov 24 09:00:00 np0005533252.novalocal dbus-broker-launch[29544]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +31: Eavesdropping is deprecated and ignored
Nov 24 09:00:00 np0005533252.novalocal dbus-broker-launch[29544]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +33: Eavesdropping is deprecated and ignored
Nov 24 09:00:00 np0005533252.novalocal systemd[4299]: Started D-Bus User Message Bus.
Nov 24 09:00:00 np0005533252.novalocal dbus-broker-lau[29544]: Ready
Nov 24 09:00:00 np0005533252.novalocal systemd[4299]: selinux: avc:  op=load_policy lsm=selinux seqno=6 res=1
Nov 24 09:00:00 np0005533252.novalocal systemd[4299]: Created slice Slice /user.
Nov 24 09:00:00 np0005533252.novalocal systemd[4299]: podman-29524.scope: unit configures an IP firewall, but not running as root.
Nov 24 09:00:00 np0005533252.novalocal systemd[4299]: (This warning is only shown for the first unit using IP firewalling.)
Nov 24 09:00:00 np0005533252.novalocal systemd[4299]: Started podman-29524.scope.
Nov 24 09:00:00 np0005533252.novalocal systemd[4299]: Started podman-pause-c09f1c97.scope.
Nov 24 09:00:01 np0005533252.novalocal sudo[29570]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dgffjoztbgpsrihhclbxcblxceetalma ; /usr/bin/python3'
Nov 24 09:00:01 np0005533252.novalocal sudo[29570]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:00:01 np0005533252.novalocal python3[29572]: ansible-ansible.builtin.blockinfile Invoked with state=present insertafter=EOF dest=/etc/containers/registries.conf content=[[registry]]
                                                       location = "38.129.56.16:5001"
                                                       insecure = true path=/etc/containers/registries.conf block=[[registry]]
                                                       location = "38.129.56.16:5001"
                                                       insecure = true marker=# {mark} ANSIBLE MANAGED BLOCK create=False backup=False marker_begin=BEGIN marker_end=END unsafe_writes=False insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:00:01 np0005533252.novalocal python3[29572]: ansible-ansible.builtin.blockinfile [WARNING] Module remote_tmp /root/.ansible/tmp did not exist and was created with a mode of 0700, this may cause issues when running as another user. To avoid this, create the remote_tmp dir with the correct permissions manually
Nov 24 09:00:01 np0005533252.novalocal sudo[29570]: pam_unix(sudo:session): session closed for user root
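The blockinfile parameters above fully determine the fragment appended to /etc/containers/registries.conf: insertafter=EOF, marker '# {mark} ANSIBLE MANAGED BLOCK' with BEGIN/END. Reconstructed, the file gains this TOML block, which lets podman pull from the CI registry at 38.129.56.16:5001 over plain HTTP / without TLS verification:

    # BEGIN ANSIBLE MANAGED BLOCK
    [[registry]]
    location = "38.129.56.16:5001"
    insecure = true
    # END ANSIBLE MANAGED BLOCK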
Nov 24 09:00:02 np0005533252.novalocal sshd-session[8070]: Connection closed by 38.102.83.114 port 57022
Nov 24 09:00:02 np0005533252.novalocal sshd-session[8067]: pam_unix(sshd:session): session closed for user zuul
Nov 24 09:00:02 np0005533252.novalocal systemd[1]: session-6.scope: Deactivated successfully.
Nov 24 09:00:02 np0005533252.novalocal systemd[1]: session-6.scope: Consumed 58.495s CPU time.
Nov 24 09:00:02 np0005533252.novalocal systemd-logind[823]: Session 6 logged out. Waiting for processes to exit.
Nov 24 09:00:02 np0005533252.novalocal systemd-logind[823]: Removed session 6.
Nov 24 09:00:20 np0005533252.novalocal systemd[1]: Starting dnf makecache...
Nov 24 09:00:20 np0005533252.novalocal dnf[29573]: Failed determining last makecache time.
Nov 24 09:00:20 np0005533252.novalocal dnf[29573]: CentOS Stream 9 - BaseOS                         27 kB/s | 7.3 kB     00:00
Nov 24 09:00:20 np0005533252.novalocal dnf[29573]: CentOS Stream 9 - AppStream                      77 kB/s | 7.4 kB     00:00
Nov 24 09:00:21 np0005533252.novalocal dnf[29573]: CentOS Stream 9 - CRB                            76 kB/s | 7.2 kB     00:00
Nov 24 09:00:21 np0005533252.novalocal dnf[29573]: CentOS Stream 9 - Extras packages                26 kB/s | 8.3 kB     00:00
Nov 24 09:00:21 np0005533252.novalocal dnf[29573]: Metadata cache created.
Nov 24 09:00:21 np0005533252.novalocal systemd[1]: dnf-makecache.service: Deactivated successfully.
Nov 24 09:00:21 np0005533252.novalocal systemd[1]: Finished dnf makecache.
Nov 24 09:00:22 np0005533252.novalocal sshd-session[29580]: Connection closed by 38.129.56.127 port 53058 [preauth]
Nov 24 09:00:22 np0005533252.novalocal sshd-session[29582]: Connection closed by 38.129.56.127 port 53062 [preauth]
Nov 24 09:00:22 np0005533252.novalocal sshd-session[29581]: Unable to negotiate with 38.129.56.127 port 53070: no matching host key type found. Their offer: ssh-ed25519 [preauth]
Nov 24 09:00:22 np0005533252.novalocal sshd-session[29579]: Unable to negotiate with 38.129.56.127 port 53076: no matching host key type found. Their offer: sk-ecdsa-sha2-nistp256@openssh.com [preauth]
Nov 24 09:00:22 np0005533252.novalocal sshd-session[29578]: Unable to negotiate with 38.129.56.127 port 53092: no matching host key type found. Their offer: sk-ssh-ed25519@openssh.com [preauth]
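The preauth burst above comes from 38.129.56.127, the same host that logs in normally at 09:04:28, so it is consistent with an ssh-keyscan-style probe that tries each host-key type in its own connection; 'no matching host key type found' means this sshd offers no key of the type the client asked for (ssh-ed25519 and the sk-* FIDO types). A hedged sketch for checking what the server side actually offers:

    sshd -T | grep -i hostkeyalgorithms    # effective algorithm list (run as root)
    ls /etc/ssh/ssh_host_*_key.pub         # host keys actually present on disk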
Nov 24 09:00:27 np0005533252.novalocal sshd-session[29589]: Accepted publickey for zuul from 38.102.83.114 port 37354 ssh2: RSA SHA256:UBnduE29/r4JICQE22jchpBfdroBtCYqENielfKVzAM
Nov 24 09:00:27 np0005533252.novalocal systemd-logind[823]: New session 7 of user zuul.
Nov 24 09:00:27 np0005533252.novalocal systemd[1]: Started Session 7 of User zuul.
Nov 24 09:00:27 np0005533252.novalocal sshd-session[29589]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 24 09:00:27 np0005533252.novalocal python3[29616]: ansible-ansible.posix.authorized_key Invoked with user=zuul key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBM8ruDbV0dT4f7otSS9ZkwTivv+VvdZBI90ZFtvHB0fKKCNPoKXMGfWx38kL9Jgkrr0hEGTFtsoY+YwwXpMooGE= zuul@np0005533250.novalocal
                                                        manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 24 09:00:27 np0005533252.novalocal sudo[29640]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mfqowbhoyjiciqlislhmtfacrakqohob ; /usr/bin/python3'
Nov 24 09:00:27 np0005533252.novalocal sudo[29640]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:00:27 np0005533252.novalocal python3[29642]: ansible-ansible.posix.authorized_key Invoked with user=root key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBM8ruDbV0dT4f7otSS9ZkwTivv+VvdZBI90ZFtvHB0fKKCNPoKXMGfWx38kL9Jgkrr0hEGTFtsoY+YwwXpMooGE= zuul@np0005533250.novalocal
                                                        manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 24 09:00:27 np0005533252.novalocal sudo[29640]: pam_unix(sudo:session): session closed for user root
Nov 24 09:00:28 np0005533252.novalocal sudo[29666]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-levwlfrecxjlkywrmzdahjujscgcekli ; /usr/bin/python3'
Nov 24 09:00:28 np0005533252.novalocal sudo[29666]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:00:28 np0005533252.novalocal python3[29668]: ansible-ansible.builtin.user Invoked with name=cloud-admin shell=/bin/bash state=present non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on np0005533252.novalocal update_password=always uid=None group=None groups=None comment=None home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None
Nov 24 09:00:28 np0005533252.novalocal useradd[29670]: new group: name=cloud-admin, GID=1002
Nov 24 09:00:28 np0005533252.novalocal useradd[29670]: new user: name=cloud-admin, UID=1002, GID=1002, home=/home/cloud-admin, shell=/bin/bash, from=none
Nov 24 09:00:28 np0005533252.novalocal sudo[29666]: pam_unix(sudo:session): session closed for user root
Nov 24 09:00:29 np0005533252.novalocal sudo[29700]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ecvykwmtjjxalfefqltjetzcqflvqgou ; /usr/bin/python3'
Nov 24 09:00:29 np0005533252.novalocal sudo[29700]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:00:29 np0005533252.novalocal python3[29702]: ansible-ansible.posix.authorized_key Invoked with user=cloud-admin key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBM8ruDbV0dT4f7otSS9ZkwTivv+VvdZBI90ZFtvHB0fKKCNPoKXMGfWx38kL9Jgkrr0hEGTFtsoY+YwwXpMooGE= zuul@np0005533250.novalocal
                                                        manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 24 09:00:29 np0005533252.novalocal sudo[29700]: pam_unix(sudo:session): session closed for user root
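The three authorized_key tasks above install the same controller ECDSA key for zuul, root, and the freshly created cloud-admin account. Functionally that is an idempotent append to each account's authorized_keys; a rough shell equivalent run as root, with the key string abbreviated (the full key appears in the log lines above):

    key='ecdsa-sha2-nistp256 AAAA... zuul@np0005533250.novalocal'    # abbreviated
    for u in zuul root cloud-admin; do
        home=$(getent passwd "$u" | cut -d: -f6)
        install -d -m 0700 -o "$u" -g "$u" "$home/.ssh"
        touch "$home/.ssh/authorized_keys"
        grep -qxF "$key" "$home/.ssh/authorized_keys" || echo "$key" >> "$home/.ssh/authorized_keys"
        chown "$u:$u" "$home/.ssh/authorized_keys"; chmod 0600 "$home/.ssh/authorized_keys"
    done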
Nov 24 09:00:29 np0005533252.novalocal sudo[29778]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-khbsswhibjbamyqahrsjvjearmzqades ; /usr/bin/python3'
Nov 24 09:00:29 np0005533252.novalocal sudo[29778]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:00:29 np0005533252.novalocal python3[29780]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/cloud-admin follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 24 09:00:29 np0005533252.novalocal sudo[29778]: pam_unix(sudo:session): session closed for user root
Nov 24 09:00:30 np0005533252.novalocal sudo[29851]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-euvvdetygkhezhybpwbchfgsyljepoyq ; /usr/bin/python3'
Nov 24 09:00:30 np0005533252.novalocal sudo[29851]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:00:30 np0005533252.novalocal python3[29853]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/cloud-admin mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1763974829.3883047-169-193705438020268/source _original_basename=tmp3ujkehpw follow=False checksum=e7614e5ad3ab06eaae55b8efaa2ed81b63ea5634 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:00:30 np0005533252.novalocal sudo[29851]: pam_unix(sudo:session): session closed for user root
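The copy above lands /etc/sudoers.d/cloud-admin with mode 0640; as with the other copies, only the checksum of the payload is logged. A hypothetical reconstruction in the shape such CI drop-ins usually take; the rule shown is an assumption, not the logged file:

    # hypothetical contents of /etc/sudoers.d/cloud-admin (real file not logged)
    cloud-admin ALL=(ALL) NOPASSWD:ALL

    visudo -cf /etc/sudoers.d/cloud-admin    # syntax-check a sudoers drop-in before use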
Nov 24 09:00:30 np0005533252.novalocal sudo[29901]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dgounasmujumumjpyjmkyddclsgisjxv ; /usr/bin/python3'
Nov 24 09:00:30 np0005533252.novalocal sudo[29901]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:00:31 np0005533252.novalocal python3[29903]: ansible-ansible.builtin.hostname Invoked with name=compute-1 use=systemd
Nov 24 09:00:31 np0005533252.novalocal systemd[1]: Starting Hostname Service...
Nov 24 09:00:31 np0005533252.novalocal systemd[1]: Started Hostname Service.
Nov 24 09:00:31 np0005533252.novalocal systemd-hostnamed[29907]: Changed pretty hostname to 'compute-1'
Nov 24 09:00:31 compute-1 systemd-hostnamed[29907]: Hostname set to <compute-1> (static)
Nov 24 09:00:31 compute-1 NetworkManager[7190]: <info>  [1763974831.2432] hostname: static hostname changed from "np0005533252.novalocal" to "compute-1"
Nov 24 09:00:31 compute-1 systemd[1]: Starting Network Manager Script Dispatcher Service...
Nov 24 09:00:31 compute-1 systemd[1]: Started Network Manager Script Dispatcher Service.
Nov 24 09:00:31 compute-1 sudo[29901]: pam_unix(sudo:session): session closed for user root
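The hostname task (use=systemd) works through systemd-hostnamed over D-Bus, which is why the Hostname Service spins up, both the pretty and static names change, and NetworkManager notices immediately; the syslog hostname field itself flips from np0005533252.novalocal to compute-1 mid-sequence. The one-command equivalent:

    hostnamectl set-hostname compute-1
    hostnamectl status    # confirm the static/pretty/transient names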
Nov 24 09:00:31 compute-1 sshd-session[29592]: Connection closed by 38.102.83.114 port 37354
Nov 24 09:00:31 compute-1 sshd-session[29589]: pam_unix(sshd:session): session closed for user zuul
Nov 24 09:00:31 compute-1 systemd[1]: session-7.scope: Deactivated successfully.
Nov 24 09:00:31 compute-1 systemd[1]: session-7.scope: Consumed 2.340s CPU time.
Nov 24 09:00:31 compute-1 systemd-logind[823]: Session 7 logged out. Waiting for processes to exit.
Nov 24 09:00:31 compute-1 systemd-logind[823]: Removed session 7.
Nov 24 09:00:41 compute-1 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Nov 24 09:01:01 compute-1 CROND[29922]: (root) CMD (run-parts /etc/cron.hourly)
Nov 24 09:01:01 compute-1 run-parts[29925]: (/etc/cron.hourly) starting 0anacron
Nov 24 09:01:01 compute-1 anacron[29933]: Anacron started on 2025-11-24
Nov 24 09:01:01 compute-1 anacron[29933]: Will run job `cron.daily' in 11 min.
Nov 24 09:01:01 compute-1 anacron[29933]: Will run job `cron.weekly' in 31 min.
Nov 24 09:01:01 compute-1 anacron[29933]: Will run job `cron.monthly' in 51 min.
Nov 24 09:01:01 compute-1 anacron[29933]: Jobs will be executed sequentially
Nov 24 09:01:01 compute-1 run-parts[29935]: (/etc/cron.hourly) finished 0anacron
Nov 24 09:01:01 compute-1 CROND[29921]: (root) CMDEND (run-parts /etc/cron.hourly)
Nov 24 09:01:01 compute-1 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Nov 24 09:02:53 compute-1 sshd-session[29943]: Connection closed by 172.236.228.220 port 39720 [preauth]
Nov 24 09:02:53 compute-1 systemd[1]: Starting Cleanup of Temporary Directories...
Nov 24 09:02:53 compute-1 systemd[1]: systemd-tmpfiles-clean.service: Deactivated successfully.
Nov 24 09:02:53 compute-1 systemd[1]: Finished Cleanup of Temporary Directories.
Nov 24 09:02:53 compute-1 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dclean.service.mount: Deactivated successfully.
Nov 24 09:02:53 compute-1 sshd-session[29947]: Connection closed by 172.236.228.220 port 39722 [preauth]
Nov 24 09:02:54 compute-1 sshd-session[29949]: Unable to negotiate with 172.236.228.220 port 20430: no matching host key type found. Their offer: ssh-ed25519-cert-v01@openssh.com,ssh-ed25519 [preauth]
Nov 24 09:04:28 compute-1 sshd-session[29952]: Accepted publickey for zuul from 38.129.56.127 port 48790 ssh2: RSA SHA256:UBnduE29/r4JICQE22jchpBfdroBtCYqENielfKVzAM
Nov 24 09:04:28 compute-1 systemd-logind[823]: New session 8 of user zuul.
Nov 24 09:04:28 compute-1 systemd[1]: Started Session 8 of User zuul.
Nov 24 09:04:28 compute-1 sshd-session[29952]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 24 09:04:28 compute-1 python3[30028]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 24 09:04:30 compute-1 sudo[30142]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vrtgefpvskmatarejszlkcfonzhzuufz ; /usr/bin/python3'
Nov 24 09:04:30 compute-1 sudo[30142]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:04:30 compute-1 python3[30144]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 24 09:04:30 compute-1 sudo[30142]: pam_unix(sudo:session): session closed for user root
Nov 24 09:04:31 compute-1 sudo[30215]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qelcuuhnbtzhycssdgmbygcwsxnpswer ; /usr/bin/python3'
Nov 24 09:04:31 compute-1 sudo[30215]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:04:31 compute-1 python3[30217]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1763975070.5090714-33950-110262530512426/source mode=0755 _original_basename=delorean.repo follow=False checksum=1830be8248976a7f714fb01ca8550e92dfc79ad2 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:04:31 compute-1 sudo[30215]: pam_unix(sudo:session): session closed for user root
Nov 24 09:04:31 compute-1 sudo[30241]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wajqmflvifcsuiawhbxgktorxbopcoyb ; /usr/bin/python3'
Nov 24 09:04:31 compute-1 sudo[30241]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:04:31 compute-1 python3[30243]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean-antelope-testing.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 24 09:04:31 compute-1 sudo[30241]: pam_unix(sudo:session): session closed for user root
Nov 24 09:04:31 compute-1 sudo[30314]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cbwbvcpqltudyjshlnyousgiuzwfcvap ; /usr/bin/python3'
Nov 24 09:04:31 compute-1 sudo[30314]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:04:31 compute-1 python3[30316]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1763975070.5090714-33950-110262530512426/source mode=0755 _original_basename=delorean-antelope-testing.repo follow=False checksum=0bdbb813b840548359ae77c28d76ca272ccaf31b backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:04:31 compute-1 sudo[30314]: pam_unix(sudo:session): session closed for user root
Nov 24 09:04:31 compute-1 sudo[30340]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ucifwimztacnkprgvcxejuokixolvrxh ; /usr/bin/python3'
Nov 24 09:04:31 compute-1 sudo[30340]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:04:31 compute-1 python3[30342]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-highavailability.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 24 09:04:31 compute-1 sudo[30340]: pam_unix(sudo:session): session closed for user root
Nov 24 09:04:32 compute-1 sudo[30413]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vbnkyeefynpmrjcxiqltkilvuoxqhhuf ; /usr/bin/python3'
Nov 24 09:04:32 compute-1 sudo[30413]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:04:32 compute-1 python3[30415]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1763975070.5090714-33950-110262530512426/source mode=0755 _original_basename=repo-setup-centos-highavailability.repo follow=False checksum=55d0f695fd0d8f47cbc3044ce0dcf5f88862490f backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:04:32 compute-1 sudo[30413]: pam_unix(sudo:session): session closed for user root
Nov 24 09:04:32 compute-1 sudo[30439]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dubyfeyclhwsdcqbicpxwpmylzvtnsze ; /usr/bin/python3'
Nov 24 09:04:32 compute-1 sudo[30439]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:04:32 compute-1 python3[30441]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-powertools.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 24 09:04:32 compute-1 sudo[30439]: pam_unix(sudo:session): session closed for user root
Nov 24 09:04:32 compute-1 sudo[30512]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xdtxnrfqilrgbozscyzxdfixjtjztqji ; /usr/bin/python3'
Nov 24 09:04:32 compute-1 sudo[30512]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:04:32 compute-1 python3[30514]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1763975070.5090714-33950-110262530512426/source mode=0755 _original_basename=repo-setup-centos-powertools.repo follow=False checksum=4b0cf99aa89c5c5be0151545863a7a7568f67568 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:04:32 compute-1 sudo[30512]: pam_unix(sudo:session): session closed for user root
Nov 24 09:04:32 compute-1 sudo[30538]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vbvxmenjnhavajsqqzrfkrgjjsnbvgyp ; /usr/bin/python3'
Nov 24 09:04:32 compute-1 sudo[30538]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:04:33 compute-1 python3[30540]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-appstream.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 24 09:04:33 compute-1 sudo[30538]: pam_unix(sudo:session): session closed for user root
Nov 24 09:04:33 compute-1 sudo[30611]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jgxwjmojbhjgwgvxxcwvktguqhvlnrfe ; /usr/bin/python3'
Nov 24 09:04:33 compute-1 sudo[30611]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:04:33 compute-1 python3[30613]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1763975070.5090714-33950-110262530512426/source mode=0755 _original_basename=repo-setup-centos-appstream.repo follow=False checksum=e89244d2503b2996429dda1857290c1e91e393a1 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:04:33 compute-1 sudo[30611]: pam_unix(sudo:session): session closed for user root
Nov 24 09:04:33 compute-1 sudo[30637]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dirffvtjftealtuqivpdccukxrheqeye ; /usr/bin/python3'
Nov 24 09:04:33 compute-1 sudo[30637]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:04:33 compute-1 python3[30639]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-baseos.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 24 09:04:33 compute-1 sudo[30637]: pam_unix(sudo:session): session closed for user root
Nov 24 09:04:33 compute-1 sudo[30710]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ihgnopoieukdpcxyftifcgsaweokaoqb ; /usr/bin/python3'
Nov 24 09:04:33 compute-1 sudo[30710]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:04:33 compute-1 python3[30712]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1763975070.5090714-33950-110262530512426/source mode=0755 _original_basename=repo-setup-centos-baseos.repo follow=False checksum=36d926db23a40dbfa5c84b5e4d43eac6fa2301d6 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:04:33 compute-1 sudo[30710]: pam_unix(sudo:session): session closed for user root
Nov 24 09:04:34 compute-1 sudo[30736]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wafmvfaftprsarqdleyunspgchdtneog ; /usr/bin/python3'
Nov 24 09:04:34 compute-1 sudo[30736]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:04:34 compute-1 python3[30738]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo.md5 follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 24 09:04:34 compute-1 sudo[30736]: pam_unix(sudo:session): session closed for user root
Nov 24 09:04:34 compute-1 sudo[30809]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cnhzklxxwzgyrkqfczxvffcvolpxciip ; /usr/bin/python3'
Nov 24 09:04:34 compute-1 sudo[30809]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:04:34 compute-1 python3[30811]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1763975070.5090714-33950-110262530512426/source mode=0755 _original_basename=delorean.repo.md5 follow=False checksum=6646317362318a9831d66a1804f6bb7dd1b97cd5 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:04:34 compute-1 sudo[30809]: pam_unix(sudo:session): session closed for user root
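The four task pairs above follow Ansible's standard copy idiom: a stat of the destination, then a copy only when the SHA-1 checksum differs. Outside Ansible, the same repo files could be laid down with install(1); the filenames and the (unusually permissive) 0755 mode are taken from the log, while the staging directory is an assumption:

    # Sketch: replicate the logged copies by hand; /home/zuul/repo-files is hypothetical
    cd /home/zuul/repo-files
    for f in repo-setup-centos-powertools.repo \
             repo-setup-centos-appstream.repo \
             repo-setup-centos-baseos.repo \
             delorean.repo.md5; do
        sudo install -m 0755 "$f" /etc/yum.repos.d/   # mode 0755 mirrors the log
    done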
Nov 24 09:04:47 compute-1 python3[30859]: ansible-ansible.legacy.command Invoked with _raw_params=hostname _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 09:09:47 compute-1 sshd-session[29955]: Received disconnect from 38.129.56.127 port 48790:11: disconnected by user
Nov 24 09:09:47 compute-1 sshd-session[29955]: Disconnected from user zuul 38.129.56.127 port 48790
Nov 24 09:09:47 compute-1 sshd-session[29952]: pam_unix(sshd:session): session closed for user zuul
Nov 24 09:09:47 compute-1 systemd[1]: session-8.scope: Deactivated successfully.
Nov 24 09:09:47 compute-1 systemd[1]: session-8.scope: Consumed 4.431s CPU time.
Nov 24 09:09:47 compute-1 systemd-logind[823]: Session 8 logged out. Waiting for processes to exit.
Nov 24 09:09:47 compute-1 systemd-logind[823]: Removed session 8.
Nov 24 09:12:01 compute-1 anacron[29933]: Job `cron.daily' started
Nov 24 09:12:01 compute-1 anacron[29933]: Job `cron.daily' terminated
Nov 24 09:16:27 compute-1 sshd-session[30867]: Accepted publickey for zuul from 192.168.122.30 port 51014 ssh2: ECDSA SHA256:MeSde0OmmlmFVnLWx/OKNxgeUUFhxUB3MA0eUyH5QEE
Nov 24 09:16:27 compute-1 systemd-logind[823]: New session 9 of user zuul.
Nov 24 09:16:27 compute-1 systemd[1]: Started Session 9 of User zuul.
Nov 24 09:16:27 compute-1 sshd-session[30867]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 24 09:16:28 compute-1 python3.9[31020]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 24 09:16:29 compute-1 sudo[31199]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tevosuddcnyujjmdwfueszxbjkmvscnf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763975789.4459405-57-93353785622411/AnsiballZ_command.py'
Nov 24 09:16:29 compute-1 sudo[31199]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:16:30 compute-1 python3.9[31201]: ansible-ansible.legacy.command Invoked with _raw_params=set -euxo pipefail
                                            pushd /var/tmp
                                            curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz
                                            pushd repo-setup-main
                                            python3 -m venv ./venv
                                            PBR_VERSION=0.0.0 ./venv/bin/pip install ./
                                            ./venv/bin/repo-setup current-podified -b antelope
                                            popd
                                            rm -rf repo-setup-main
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 09:16:36 compute-1 sudo[31199]: pam_unix(sudo:session): session closed for user root
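The shell task above bootstraps the repo-setup tool from the main branch of openstack-k8s-operators/repo-setup into a throwaway venv and points the host at the current-podified Antelope repositories. A quick sanity check that the generated repos are actually usable (plain dnf, nothing assumed beyond the log):

    sudo dnf repolist    # the freshly generated repos should be listed and enabled
    sudo dnf makecache   # confirm the mirror URLs resolve and metadata downloads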
Nov 24 09:16:37 compute-1 sshd-session[30870]: Connection closed by 192.168.122.30 port 51014
Nov 24 09:16:37 compute-1 sshd-session[30867]: pam_unix(sshd:session): session closed for user zuul
Nov 24 09:16:37 compute-1 systemd[1]: session-9.scope: Deactivated successfully.
Nov 24 09:16:37 compute-1 systemd[1]: session-9.scope: Consumed 7.584s CPU time.
Nov 24 09:16:37 compute-1 systemd-logind[823]: Session 9 logged out. Waiting for processes to exit.
Nov 24 09:16:37 compute-1 systemd-logind[823]: Removed session 9.
Nov 24 09:16:52 compute-1 sshd-session[31260]: Accepted publickey for zuul from 192.168.122.30 port 35274 ssh2: ECDSA SHA256:MeSde0OmmlmFVnLWx/OKNxgeUUFhxUB3MA0eUyH5QEE
Nov 24 09:16:52 compute-1 systemd-logind[823]: New session 10 of user zuul.
Nov 24 09:16:52 compute-1 systemd[1]: Started Session 10 of User zuul.
Nov 24 09:16:52 compute-1 sshd-session[31260]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 24 09:16:53 compute-1 python3.9[31413]: ansible-ansible.legacy.ping Invoked with data=pong
Nov 24 09:16:54 compute-1 python3.9[31587]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 24 09:16:56 compute-1 sudo[31737]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mnyqgzitzirrbkrulwyqxtnsnuqsmyrv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763975815.26201-94-107407594040030/AnsiballZ_command.py'
Nov 24 09:16:56 compute-1 sudo[31737]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:16:56 compute-1 python3.9[31739]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 09:16:56 compute-1 sudo[31737]: pam_unix(sudo:session): session closed for user root
Nov 24 09:16:57 compute-1 sudo[31891]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vmtmagvuqjhtbklkrtiepvuqtrzudrui ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763975816.7850642-130-69396099347088/AnsiballZ_stat.py'
Nov 24 09:16:57 compute-1 sudo[31891]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:16:57 compute-1 python3.9[31893]: ansible-ansible.builtin.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 09:16:57 compute-1 sudo[31891]: pam_unix(sudo:session): session closed for user root
Nov 24 09:16:58 compute-1 sudo[32043]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-frkaihnxpauhsbijxasqqyplqgvtupnl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763975817.6691391-154-261740231012070/AnsiballZ_file.py'
Nov 24 09:16:58 compute-1 sudo[32043]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:16:58 compute-1 python3.9[32045]: ansible-ansible.builtin.file Invoked with mode=755 path=/etc/ansible/facts.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:16:58 compute-1 sudo[32043]: pam_unix(sudo:session): session closed for user root
Nov 24 09:16:58 compute-1 sudo[32195]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mlfitzkwtftgmxhroycmarxuacmulzfs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763975818.512851-178-28972882223475/AnsiballZ_stat.py'
Nov 24 09:16:58 compute-1 sudo[32195]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:16:58 compute-1 python3.9[32197]: ansible-ansible.legacy.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 09:16:58 compute-1 sudo[32195]: pam_unix(sudo:session): session closed for user root
Nov 24 09:16:59 compute-1 sudo[32318]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tdjjukgkohpmwiwxbvjizcukkjsbwmdh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763975818.512851-178-28972882223475/AnsiballZ_copy.py'
Nov 24 09:16:59 compute-1 sudo[32318]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:16:59 compute-1 python3.9[32320]: ansible-ansible.legacy.copy Invoked with dest=/etc/ansible/facts.d/bootc.fact mode=755 src=/home/zuul/.ansible/tmp/ansible-tmp-1763975818.512851-178-28972882223475/.source.fact _original_basename=bootc.fact follow=False checksum=eb4122ce7fc50a38407beb511c4ff8c178005b12 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:16:59 compute-1 sudo[32318]: pam_unix(sudo:session): session closed for user root
Nov 24 09:17:00 compute-1 sudo[32470]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hqaqacfmfidfdbtzusaqafxfpofnhrcr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763975819.8593192-223-277023122736368/AnsiballZ_setup.py'
Nov 24 09:17:00 compute-1 sudo[32470]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:17:00 compute-1 python3.9[32472]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 24 09:17:00 compute-1 sudo[32470]: pam_unix(sudo:session): session closed for user root
Nov 24 09:17:01 compute-1 sudo[32626]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sibqtyvcohvhamyqsmdfjniftzzgsxcq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763975820.8301244-247-19116771328767/AnsiballZ_file.py'
Nov 24 09:17:01 compute-1 sudo[32626]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:17:01 compute-1 python3.9[32628]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/log/journal setype=var_log_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 09:17:01 compute-1 sudo[32626]: pam_unix(sudo:session): session closed for user root
Nov 24 09:17:01 compute-1 sudo[32778]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aworuybvusdppvehilxbyklcwnpjipcx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763975821.5610986-274-88426022438797/AnsiballZ_file.py'
Nov 24 09:17:01 compute-1 sudo[32778]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:17:01 compute-1 python3.9[32780]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/config-data/ansible-generated recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 09:17:02 compute-1 sudo[32778]: pam_unix(sudo:session): session closed for user root
Nov 24 09:17:02 compute-1 python3.9[32930]: ansible-ansible.builtin.service_facts Invoked
Nov 24 09:17:06 compute-1 python3.9[33183]: ansible-ansible.builtin.lineinfile Invoked with line=cloud-init=disabled path=/proc/cmdline state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:17:07 compute-1 python3.9[33333]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 24 09:17:08 compute-1 python3.9[33487]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 24 09:17:09 compute-1 sudo[33643]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ywcajfjiuozfwiycetoarbtnzpwxikhg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763975829.2633893-418-72727289741850/AnsiballZ_setup.py'
Nov 24 09:17:09 compute-1 sudo[33643]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:17:09 compute-1 python3.9[33645]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 24 09:17:10 compute-1 sudo[33643]: pam_unix(sudo:session): session closed for user root
Nov 24 09:17:10 compute-1 sudo[33727]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oqoysxnsfcekfagqgjkbtccamjmbumsu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763975829.2633893-418-72727289741850/AnsiballZ_dnf.py'
Nov 24 09:17:10 compute-1 sudo[33727]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:17:10 compute-1 python3.9[33729]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 24 09:17:54 compute-1 systemd[1]: Reloading.
Nov 24 09:17:54 compute-1 systemd-rc-local-generator[33931]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 09:17:54 compute-1 systemd[1]: Listening on Device-mapper event daemon FIFOs.
Nov 24 09:17:55 compute-1 systemd[1]: Reloading.
Nov 24 09:17:55 compute-1 systemd-rc-local-generator[33967]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 09:17:55 compute-1 systemd[1]: Starting Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling...
Nov 24 09:17:55 compute-1 systemd[1]: Finished Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling.
Nov 24 09:17:55 compute-1 systemd[1]: Reloading.
Nov 24 09:17:55 compute-1 systemd-rc-local-generator[34010]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 09:17:55 compute-1 systemd[1]: Listening on LVM2 poll daemon socket.
Nov 24 09:17:55 compute-1 dbus-broker-launch[791]: Noticed file-system modification, trigger reload.
Nov 24 09:17:55 compute-1 dbus-broker-launch[791]: Noticed file-system modification, trigger reload.
Nov 24 09:17:55 compute-1 dbus-broker-launch[791]: Noticed file-system modification, trigger reload.
Nov 24 09:18:56 compute-1 kernel: SELinux:  Converting 2719 SID table entries...
Nov 24 09:18:56 compute-1 kernel: SELinux:  policy capability network_peer_controls=1
Nov 24 09:18:56 compute-1 kernel: SELinux:  policy capability open_perms=1
Nov 24 09:18:56 compute-1 kernel: SELinux:  policy capability extended_socket_class=1
Nov 24 09:18:56 compute-1 kernel: SELinux:  policy capability always_check_network=0
Nov 24 09:18:56 compute-1 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 24 09:18:56 compute-1 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 24 09:18:56 compute-1 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 24 09:18:57 compute-1 dbus-broker-launch[809]: avc:  op=load_policy lsm=selinux seqno=8 res=1
Nov 24 09:18:57 compute-1 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 24 09:18:57 compute-1 systemd[1]: Starting man-db-cache-update.service...
Nov 24 09:18:57 compute-1 systemd[1]: Reloading.
Nov 24 09:18:57 compute-1 systemd-rc-local-generator[34343]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 09:18:57 compute-1 systemd[1]: Queuing reload/restart jobs for marked units…
Nov 24 09:18:57 compute-1 sudo[33727]: pam_unix(sudo:session): session closed for user root
Nov 24 09:18:58 compute-1 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 24 09:18:58 compute-1 systemd[1]: Finished man-db-cache-update.service.
Nov 24 09:18:58 compute-1 systemd[1]: man-db-cache-update.service: Consumed 1.015s CPU time.
Nov 24 09:18:58 compute-1 systemd[1]: run-r350dbfdd1cf44a9ea168072e3ab10b75.service: Deactivated successfully.
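The dnf task at 09:17:10 installs the EDPM base tooling in a single transaction; the systemd reloads, SELinux SID-table conversion, and man-db-cache-update that follow are side effects of package scriptlets (lvm2 unit files, openstack-selinux policy load, man-db trigger), not separate Ansible tasks. The equivalent one-shot install, package list copied from the log:

    sudo dnf -y install driverctl lvm2 crudini jq nftables NetworkManager \
        openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch \
        sysstat iproute-tc ksmtuned systemd-container \
        crypto-policies-scripts grubby sos
    rpm -V lvm2   # spot-check one package, as the later rpm -V task does for all of them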
Nov 24 09:19:11 compute-1 sudo[35251]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-orwjszniessaogogqqusonxgbqgohoev ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763975951.2301557-455-85742343062649/AnsiballZ_command.py'
Nov 24 09:19:11 compute-1 sudo[35251]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:19:11 compute-1 python3.9[35253]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 09:19:12 compute-1 sudo[35251]: pam_unix(sudo:session): session closed for user root
Nov 24 09:19:13 compute-1 sudo[35532]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rpfvkpdzplygnkmtcjelqdwgavbjjvjj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763975953.039883-478-106668129259179/AnsiballZ_selinux.py'
Nov 24 09:19:13 compute-1 sudo[35532]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:19:13 compute-1 python3.9[35534]: ansible-ansible.posix.selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config update_kernel_param=False
Nov 24 09:19:13 compute-1 sudo[35532]: pam_unix(sudo:session): session closed for user root
Nov 24 09:19:14 compute-1 sudo[35684]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kunkbyoosghfulorbdfunwldmkadlzat ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763975954.401655-511-229913516431221/AnsiballZ_command.py'
Nov 24 09:19:14 compute-1 sudo[35684]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:19:14 compute-1 python3.9[35686]: ansible-ansible.legacy.command Invoked with cmd=dd if=/dev/zero of=/swap count=1024 bs=1M creates=/swap _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None removes=None stdin=None
Nov 24 09:19:15 compute-1 sudo[35684]: pam_unix(sudo:session): session closed for user root
Nov 24 09:19:16 compute-1 sudo[35837]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lqstrttsgjlyoyqlqkshtaclpupsotlp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763975956.1909652-535-145843621765098/AnsiballZ_file.py'
Nov 24 09:19:16 compute-1 sudo[35837]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:19:17 compute-1 python3.9[35839]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/swap recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:19:17 compute-1 sudo[35837]: pam_unix(sudo:session): session closed for user root
Nov 24 09:19:17 compute-1 sudo[35989]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eubsvksgtwpimnkefvsutqfysapidkbc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763975957.4898255-559-166778311223016/AnsiballZ_mount.py'
Nov 24 09:19:17 compute-1 sudo[35989]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:19:18 compute-1 python3.9[35991]: ansible-ansible.posix.mount Invoked with dump=0 fstype=swap name=none opts=sw passno=0 src=/swap state=present path=none boot=True opts_no_log=False backup=False fstab=None
Nov 24 09:19:18 compute-1 sudo[35989]: pam_unix(sudo:session): session closed for user root
Nov 24 09:19:19 compute-1 sudo[36141]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rrjwlyzkfsqxytuhncfuaebnycyuwcfz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763975959.4031532-643-105653499718200/AnsiballZ_file.py'
Nov 24 09:19:19 compute-1 sudo[36141]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:19:19 compute-1 python3.9[36143]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/ca-trust/source/anchors setype=cert_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 09:19:19 compute-1 sudo[36141]: pam_unix(sudo:session): session closed for user root
Nov 24 09:19:20 compute-1 sudo[36293]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bcksdddauqridpewbujzzukspkhqpegn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763975960.1317418-667-137260493172405/AnsiballZ_stat.py'
Nov 24 09:19:20 compute-1 sudo[36293]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:19:22 compute-1 python3.9[36295]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 09:19:22 compute-1 sudo[36293]: pam_unix(sudo:session): session closed for user root
Nov 24 09:19:23 compute-1 sudo[36416]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jgpsfckdjimsegostrsmicjzeqbrmhll ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763975960.1317418-667-137260493172405/AnsiballZ_copy.py'
Nov 24 09:19:23 compute-1 sudo[36416]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:19:23 compute-1 python3.9[36418]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763975960.1317418-667-137260493172405/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=544ccad07cd49583316075cf420b5b550bb4de77 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:19:23 compute-1 sudo[36416]: pam_unix(sudo:session): session closed for user root
Nov 24 09:19:24 compute-1 sudo[36568]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oljmfwnnkujwgrhpvjnmirebopekoiko ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763975964.2702663-739-227242765536929/AnsiballZ_stat.py'
Nov 24 09:19:24 compute-1 sudo[36568]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:19:24 compute-1 python3.9[36570]: ansible-ansible.builtin.stat Invoked with path=/etc/lvm/devices/system.devices follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 09:19:24 compute-1 sudo[36568]: pam_unix(sudo:session): session closed for user root
Nov 24 09:19:25 compute-1 sudo[36720]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nxktsslojdwcnbseswnujusiarproapg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763975964.938683-763-164335460622996/AnsiballZ_command.py'
Nov 24 09:19:25 compute-1 sudo[36720]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:19:25 compute-1 python3.9[36722]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/vgimportdevices --all _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 09:19:25 compute-1 sudo[36720]: pam_unix(sudo:session): session closed for user root
Nov 24 09:19:25 compute-1 sudo[36873]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mysnwivnokvnjtyevsevwbqptawestbg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763975965.688613-787-271552361798775/AnsiballZ_file.py'
Nov 24 09:19:25 compute-1 sudo[36873]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:19:26 compute-1 python3.9[36875]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/lvm/devices/system.devices state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:19:26 compute-1 sudo[36873]: pam_unix(sudo:session): session closed for user root
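These two tasks initialize the LVM devices file: vgimportdevices --all records any existing PVs in /etc/lvm/devices/system.devices, and the follow-up touch guarantees the file exists even when no PVs were found, restricting LVM to an explicit (initially empty) device list. A minimal shell equivalent:

    sudo /usr/sbin/vgimportdevices --all        # import all visible PVs into the devices file
    sudo touch /etc/lvm/devices/system.devices  # ensure the file exists even with no PVs
    sudo chmod 0600 /etc/lvm/devices/system.devices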
Nov 24 09:19:27 compute-1 sudo[37025]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wikdxrjqjhmbofwcugeugxlblzpxlaol ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763975966.698314-820-51958216712038/AnsiballZ_getent.py'
Nov 24 09:19:27 compute-1 sudo[37025]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:19:27 compute-1 python3.9[37027]: ansible-ansible.builtin.getent Invoked with database=passwd key=qemu fail_key=True service=None split=None
Nov 24 09:19:27 compute-1 sudo[37025]: pam_unix(sudo:session): session closed for user root
Nov 24 09:19:27 compute-1 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 24 09:19:27 compute-1 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 24 09:19:28 compute-1 sudo[37179]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-whhpxphdzjrvvybccxsvyqtqehtwbepq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763975967.5113664-844-96938349428380/AnsiballZ_group.py'
Nov 24 09:19:28 compute-1 sudo[37179]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:19:28 compute-1 python3.9[37181]: ansible-ansible.builtin.group Invoked with gid=107 name=qemu state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Nov 24 09:19:28 compute-1 groupadd[37182]: group added to /etc/group: name=qemu, GID=107
Nov 24 09:19:28 compute-1 groupadd[37182]: group added to /etc/gshadow: name=qemu
Nov 24 09:19:28 compute-1 groupadd[37182]: new group: name=qemu, GID=107
Nov 24 09:19:28 compute-1 sudo[37179]: pam_unix(sudo:session): session closed for user root
Nov 24 09:19:29 compute-1 sudo[37337]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nkhzpojeunasfeemmawplhobaomervtm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763975968.4437673-868-131099545048862/AnsiballZ_user.py'
Nov 24 09:19:29 compute-1 sudo[37337]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:19:29 compute-1 python3.9[37339]: ansible-ansible.builtin.user Invoked with comment=qemu user group=qemu groups=[''] name=qemu shell=/sbin/nologin state=present uid=107 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-1 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Nov 24 09:19:29 compute-1 useradd[37341]: new user: name=qemu, UID=107, GID=107, home=/home/qemu, shell=/sbin/nologin, from=/dev/pts/0
Nov 24 09:19:29 compute-1 sudo[37337]: pam_unix(sudo:session): session closed for user root
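The getent/group/user triple above is the usual Ansible idiom for creating a fixed-ID account only when it is missing; UID/GID 107 are pinned so ownership stays stable across hosts. By hand:

    getent passwd qemu || {
        sudo groupadd -g 107 qemu
        sudo useradd -u 107 -g qemu -c 'qemu user' -s /sbin/nologin qemu
    }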
Nov 24 09:19:30 compute-1 sudo[37497]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mbfodaijqtrinztudnfskntxufbrkcyt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763975969.7445514-892-127573328248886/AnsiballZ_getent.py'
Nov 24 09:19:30 compute-1 sudo[37497]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:19:30 compute-1 python3.9[37499]: ansible-ansible.builtin.getent Invoked with database=passwd key=hugetlbfs fail_key=True service=None split=None
Nov 24 09:19:30 compute-1 sudo[37497]: pam_unix(sudo:session): session closed for user root
Nov 24 09:19:30 compute-1 sudo[37650]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dftomeatjasgswtcwwkkrzvypsghmvkl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763975970.4889913-916-9526090750791/AnsiballZ_group.py'
Nov 24 09:19:30 compute-1 sudo[37650]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:19:30 compute-1 python3.9[37652]: ansible-ansible.builtin.group Invoked with gid=42477 name=hugetlbfs state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Nov 24 09:19:31 compute-1 groupadd[37653]: group added to /etc/group: name=hugetlbfs, GID=42477
Nov 24 09:19:31 compute-1 groupadd[37653]: group added to /etc/gshadow: name=hugetlbfs
Nov 24 09:19:31 compute-1 groupadd[37653]: new group: name=hugetlbfs, GID=42477
Nov 24 09:19:31 compute-1 sudo[37650]: pam_unix(sudo:session): session closed for user root
Nov 24 09:19:31 compute-1 sudo[37808]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lnozjkzmacfknbbhuoqyfivjhqteshbg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763975971.4035387-943-133504797648424/AnsiballZ_file.py'
Nov 24 09:19:31 compute-1 sudo[37808]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:19:31 compute-1 python3.9[37810]: ansible-ansible.builtin.file Invoked with group=qemu mode=0755 owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None serole=None selevel=None attributes=None
Nov 24 09:19:31 compute-1 sudo[37808]: pam_unix(sudo:session): session closed for user root
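This file task prepares the vhost-user socket directory for DPDK-style ports, owned by qemu and labeled virt_cache_t so both QEMU and the vswitch can use it. A sketch with plain coreutils plus chcon (a persistent semanage fcontext rule would survive relabels, which chcon alone does not):

    sudo mkdir -p /var/lib/vhost_sockets
    sudo chown qemu:qemu /var/lib/vhost_sockets
    sudo chmod 0755 /var/lib/vhost_sockets
    sudo chcon -u system_u -t virt_cache_t /var/lib/vhost_sockets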
Nov 24 09:19:32 compute-1 sudo[37960]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dmrkyowdcsxfmmkaazngzhuntcpzwlkr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763975972.4015849-976-37981987817790/AnsiballZ_dnf.py'
Nov 24 09:19:32 compute-1 sudo[37960]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:19:32 compute-1 python3.9[37962]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 24 09:19:34 compute-1 sudo[37960]: pam_unix(sudo:session): session closed for user root
Nov 24 09:19:35 compute-1 sudo[38113]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gleavfjphjtykukqyiadbwzzorgstoma ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763975974.7880745-1000-2885155037610/AnsiballZ_file.py'
Nov 24 09:19:35 compute-1 sudo[38113]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:19:35 compute-1 python3.9[38115]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/modules-load.d setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 09:19:35 compute-1 sudo[38113]: pam_unix(sudo:session): session closed for user root
Nov 24 09:19:35 compute-1 sudo[38265]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vsgxynstkbaucfvglximejfhwdyawnbc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763975975.5424674-1025-218862169985989/AnsiballZ_stat.py'
Nov 24 09:19:35 compute-1 sudo[38265]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:19:36 compute-1 python3.9[38267]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 09:19:36 compute-1 sudo[38265]: pam_unix(sudo:session): session closed for user root
Nov 24 09:19:36 compute-1 sudo[38388]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mbkmlbvlzwpaixsbfqnqibttpwsukxrp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763975975.5424674-1025-218862169985989/AnsiballZ_copy.py'
Nov 24 09:19:36 compute-1 sudo[38388]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:19:36 compute-1 python3.9[38390]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1763975975.5424674-1025-218862169985989/.source.conf follow=False _original_basename=edpm-modprobe.conf.j2 checksum=8021efe01721d8fa8cab46b95c00ec1be6dbb9d0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 24 09:19:36 compute-1 sudo[38388]: pam_unix(sudo:session): session closed for user root
Nov 24 09:19:37 compute-1 sudo[38540]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qjijgadswgobycwrmoytbevalyydvbsl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763975976.9254029-1069-238906315529184/AnsiballZ_systemd.py'
Nov 24 09:19:37 compute-1 sudo[38540]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:19:37 compute-1 python3.9[38542]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 24 09:19:37 compute-1 systemd[1]: Starting Load Kernel Modules...
Nov 24 09:19:37 compute-1 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Nov 24 09:19:37 compute-1 kernel: Bridge firewalling registered
Nov 24 09:19:37 compute-1 systemd-modules-load[38546]: Inserted module 'br_netfilter'
Nov 24 09:19:37 compute-1 systemd[1]: Finished Load Kernel Modules.
Nov 24 09:19:37 compute-1 sudo[38540]: pam_unix(sudo:session): session closed for user root
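Restarting systemd-modules-load.service picks up the freshly copied /etc/modules-load.d/99-edpm.conf; the kernel lines confirm br_netfilter was inserted, which re-enables iptables/nftables filtering of bridged traffic. Judging by the 'Inserted module' message the file contains at least br_netfilter (any further entries would be an assumption):

    echo br_netfilter | sudo tee /etc/modules-load.d/99-edpm.conf
    sudo systemctl restart systemd-modules-load.service
    lsmod | grep br_netfilter   # verify the module is loaded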
Nov 24 09:19:38 compute-1 sudo[38699]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xoekryiwswmdifxjqlqlirlqpfqfnvco ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763975978.0839388-1093-3652533125208/AnsiballZ_stat.py'
Nov 24 09:19:38 compute-1 sudo[38699]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:19:38 compute-1 python3.9[38701]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 09:19:38 compute-1 sudo[38699]: pam_unix(sudo:session): session closed for user root
Nov 24 09:19:39 compute-1 sudo[38822]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ghtedrcvqpzcgpnwidcdymlrqfmnhndh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763975978.0839388-1093-3652533125208/AnsiballZ_copy.py'
Nov 24 09:19:39 compute-1 sudo[38822]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:19:39 compute-1 python3.9[38824]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysctl.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1763975978.0839388-1093-3652533125208/.source.conf follow=False _original_basename=edpm-sysctl.conf.j2 checksum=2a366439721b855adcfe4d7f152babb68596a007 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 24 09:19:39 compute-1 sudo[38822]: pam_unix(sudo:session): session closed for user root
Nov 24 09:19:39 compute-1 sudo[38974]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ycesxpwgvexpalylpxtgwcufacmyewfb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763975979.7343793-1147-224102599926470/AnsiballZ_dnf.py'
Nov 24 09:19:39 compute-1 sudo[38974]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:19:40 compute-1 python3.9[38976]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 24 09:19:42 compute-1 dbus-broker-launch[791]: Noticed file-system modification, trigger reload.
Nov 24 09:19:43 compute-1 dbus-broker-launch[791]: Noticed file-system modification, trigger reload.
Nov 24 09:19:43 compute-1 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 24 09:19:43 compute-1 systemd[1]: Starting man-db-cache-update.service...
Nov 24 09:19:43 compute-1 systemd[1]: Reloading.
Nov 24 09:19:43 compute-1 systemd-rc-local-generator[39037]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 09:19:43 compute-1 systemd[1]: Queuing reload/restart jobs for marked units…
Nov 24 09:19:43 compute-1 sudo[38974]: pam_unix(sudo:session): session closed for user root
Nov 24 09:19:46 compute-1 python3.9[42308]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 09:19:46 compute-1 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 24 09:19:46 compute-1 systemd[1]: Finished man-db-cache-update.service.
Nov 24 09:19:46 compute-1 systemd[1]: man-db-cache-update.service: Consumed 4.168s CPU time.
Nov 24 09:19:46 compute-1 systemd[1]: run-r19c6059400c6443391e1ed9fb4205468.service: Deactivated successfully.
Nov 24 09:19:47 compute-1 python3.9[42834]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile
Nov 24 09:19:47 compute-1 python3.9[42984]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 09:19:48 compute-1 sudo[43134]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vcbvsrsgkilvqrisbqwbycrghkwiojzs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763975988.4505143-1264-153335238118067/AnsiballZ_command.py'
Nov 24 09:19:48 compute-1 sudo[43134]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:19:48 compute-1 python3.9[43136]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/tuned-adm profile throughput-performance _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 09:19:49 compute-1 systemd[1]: Starting Dynamic System Tuning Daemon...
Nov 24 09:19:49 compute-1 systemd[1]: Starting Authorization Manager...
Nov 24 09:19:49 compute-1 systemd[1]: Started Dynamic System Tuning Daemon.
Nov 24 09:19:49 compute-1 polkitd[43353]: Started polkitd version 0.117
Nov 24 09:19:49 compute-1 polkitd[43353]: Loading rules from directory /etc/polkit-1/rules.d
Nov 24 09:19:49 compute-1 polkitd[43353]: Loading rules from directory /usr/share/polkit-1/rules.d
Nov 24 09:19:49 compute-1 polkitd[43353]: Finished loading, compiling and executing 2 rules
Nov 24 09:19:49 compute-1 polkitd[43353]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Nov 24 09:19:49 compute-1 systemd[1]: Started Authorization Manager.
Nov 24 09:19:49 compute-1 sudo[43134]: pam_unix(sudo:session): session closed for user root
Nov 24 09:19:50 compute-1 sudo[43521]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ouxxyhbwcdkerzsrnrhhgimoohdyjtnd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763975989.9938083-1291-95231526967961/AnsiballZ_systemd.py'
Nov 24 09:19:50 compute-1 sudo[43521]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:19:50 compute-1 python3.9[43523]: ansible-ansible.builtin.systemd Invoked with enabled=True name=tuned state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 24 09:19:50 compute-1 systemd[1]: Stopping Dynamic System Tuning Daemon...
Nov 24 09:19:50 compute-1 systemd[1]: tuned.service: Deactivated successfully.
Nov 24 09:19:50 compute-1 systemd[1]: Stopped Dynamic System Tuning Daemon.
Nov 24 09:19:50 compute-1 systemd[1]: Starting Dynamic System Tuning Daemon...
Nov 24 09:19:50 compute-1 systemd[1]: Started Dynamic System Tuning Daemon.
Nov 24 09:19:50 compute-1 sudo[43521]: pam_unix(sudo:session): session closed for user root
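The sequence from 09:19:48 to 09:19:50 selects the throughput-performance profile (which also pulls tuned and polkit up via D-Bus activation) and then enables and restarts the daemon so the profile persists across reboots. By hand:

    sudo tuned-adm profile throughput-performance
    sudo systemctl enable --now tuned
    tuned-adm active   # expect: Current active profile: throughput-performance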
Nov 24 09:19:51 compute-1 python3.9[43685]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline
Nov 24 09:19:54 compute-1 sudo[43835]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-anjypcojtshrhrihuogmxgvhhlyaoohw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763975994.5832727-1462-99804431007015/AnsiballZ_systemd.py'
Nov 24 09:19:54 compute-1 sudo[43835]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:19:55 compute-1 python3.9[43837]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksm.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 24 09:19:55 compute-1 systemd[1]: Reloading.
Nov 24 09:19:55 compute-1 systemd-rc-local-generator[43866]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 09:19:55 compute-1 sudo[43835]: pam_unix(sudo:session): session closed for user root
Nov 24 09:19:55 compute-1 sudo[44024]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rnrdmapvouvqozmebtkhfgqposhnfwhu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763975995.4891667-1462-63920418177249/AnsiballZ_systemd.py'
Nov 24 09:19:55 compute-1 sudo[44024]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:19:56 compute-1 python3.9[44026]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksmtuned.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 24 09:19:56 compute-1 systemd[1]: Reloading.
Nov 24 09:19:56 compute-1 systemd-rc-local-generator[44057]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 09:19:56 compute-1 sudo[44024]: pam_unix(sudo:session): session closed for user root
Nov 24 09:19:56 compute-1 sudo[44214]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sjvzanhuvpzwjrkxdvxcfcfwsayfvkxo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763975996.766135-1510-108873011625210/AnsiballZ_command.py'
Nov 24 09:19:57 compute-1 sudo[44214]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:19:57 compute-1 python3.9[44216]: ansible-ansible.legacy.command Invoked with _raw_params=mkswap "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 09:19:57 compute-1 sudo[44214]: pam_unix(sudo:session): session closed for user root
Nov 24 09:19:57 compute-1 sudo[44367]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pyeexksghdihovxspejdqkcdfnnplegz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763975997.4574454-1534-66299890574034/AnsiballZ_command.py'
Nov 24 09:19:57 compute-1 sudo[44367]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:19:57 compute-1 python3.9[44369]: ansible-ansible.legacy.command Invoked with _raw_params=swapon "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 09:19:57 compute-1 kernel: Adding 1048572k swap on /swap.  Priority:-2 extents:1 across:1048572k 
Nov 24 09:19:57 compute-1 sudo[44367]: pam_unix(sudo:session): session closed for user root
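Taken together, the tasks at 09:19:14 (dd), 09:19:17 (mode 0600), 09:19:18 (fstab entry; ansible.posix.mount with state=present only edits fstab, it does not activate anything) and 09:19:57 (mkswap/swapon) assemble a 1 GiB swap file, and the kernel line confirms 1048572 KiB came online. The whole flow as a script:

    sudo dd if=/dev/zero of=/swap bs=1M count=1024           # skipped by Ansible if /swap exists
    sudo chmod 0600 /swap
    echo '/swap none swap sw 0 0' | sudo tee -a /etc/fstab   # what the mount task persists
    sudo mkswap /swap
    sudo swapon /swap
    swapon --show   # verify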
Nov 24 09:19:58 compute-1 sudo[44520]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kifzmvgeothxjsluqccnfkmhkgzqjhav ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763975998.1415265-1558-131122031518043/AnsiballZ_command.py'
Nov 24 09:19:58 compute-1 sudo[44520]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:19:58 compute-1 python3.9[44522]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/bin/update-ca-trust _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 09:19:59 compute-1 sudo[44520]: pam_unix(sudo:session): session closed for user root
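update-ca-trust regenerates the consolidated trust stores from /etc/pki/ca-trust/source/anchors/, picking up the tls-ca-bundle.pem copied at 09:19:23. A sketch of the general pattern for installing and verifying an anchor; my-ca.pem and its label are hypothetical:

    sudo cp my-ca.pem /etc/pki/ca-trust/source/anchors/   # my-ca.pem is an assumption
    sudo update-ca-trust
    trust list | grep -i 'my-ca'   # confirm the anchor appears in the trust store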
Nov 24 09:20:00 compute-1 sudo[44682]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ipojpwvzjxsiykrsugsnqsvrkopqpczr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976000.3089445-1582-234676860877886/AnsiballZ_command.py'
Nov 24 09:20:00 compute-1 sudo[44682]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:20:00 compute-1 python3.9[44684]: ansible-ansible.legacy.command Invoked with _raw_params=echo 2 >/sys/kernel/mm/ksm/run _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 09:20:00 compute-1 sudo[44682]: pam_unix(sudo:session): session closed for user root
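Note that the task above runs with _uses_shell=False, so the > is handed to echo as a literal argument instead of performing a redirection; as logged, it most likely never writes to the sysfs file. A form that does take effect (writing 2 stops KSM and unmerges all merged pages):

    echo 2 | sudo tee /sys/kernel/mm/ksm/run   # 0=stop, 1=run, 2=stop and unmerge
    cat /sys/kernel/mm/ksm/run                 # verify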
Nov 24 09:20:01 compute-1 sudo[44835]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mfqdtxoiuuleimasmrfaekflgkjkjvps ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976000.9659793-1606-40399407365603/AnsiballZ_systemd.py'
Nov 24 09:20:01 compute-1 sudo[44835]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:20:01 compute-1 python3.9[44837]: ansible-ansible.builtin.systemd Invoked with name=systemd-sysctl.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 24 09:20:01 compute-1 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Nov 24 09:20:01 compute-1 systemd[1]: Stopped Apply Kernel Variables.
Nov 24 09:20:01 compute-1 systemd[1]: Stopping Apply Kernel Variables...
Nov 24 09:20:01 compute-1 systemd[1]: Starting Apply Kernel Variables...
Nov 24 09:20:01 compute-1 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Nov 24 09:20:01 compute-1 systemd[1]: Finished Apply Kernel Variables.
Nov 24 09:20:01 compute-1 sudo[44835]: pam_unix(sudo:session): session closed for user root
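Restarting systemd-sysctl.service re-applies every drop-in under /etc/sysctl.d/, including the 99-edpm.conf deployed at 09:19:39 (its contents are not logged: NOT_LOGGING_PARAMETER). The same effect from a shell:

    sudo systemctl restart systemd-sysctl.service
    # or apply all sysctl drop-ins directly:
    sudo sysctl --system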
Nov 24 09:20:02 compute-1 sshd-session[31263]: Connection closed by 192.168.122.30 port 35274
Nov 24 09:20:02 compute-1 sshd-session[31260]: pam_unix(sshd:session): session closed for user zuul
Nov 24 09:20:02 compute-1 systemd[1]: session-10.scope: Deactivated successfully.
Nov 24 09:20:02 compute-1 systemd[1]: session-10.scope: Consumed 2min 7.932s CPU time.
Nov 24 09:20:02 compute-1 systemd-logind[823]: Session 10 logged out. Waiting for processes to exit.
Nov 24 09:20:02 compute-1 systemd-logind[823]: Removed session 10.
Nov 24 09:20:08 compute-1 sshd-session[44868]: Accepted publickey for zuul from 192.168.122.30 port 42454 ssh2: ECDSA SHA256:MeSde0OmmlmFVnLWx/OKNxgeUUFhxUB3MA0eUyH5QEE
Nov 24 09:20:08 compute-1 systemd-logind[823]: New session 11 of user zuul.
Nov 24 09:20:08 compute-1 systemd[1]: Started Session 11 of User zuul.
Nov 24 09:20:08 compute-1 sshd-session[44868]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 24 09:20:09 compute-1 python3.9[45021]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 24 09:20:10 compute-1 sudo[45175]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eofgmjnraxhzkzktoupftbthofqzatbv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976009.8100033-69-118856928316907/AnsiballZ_getent.py'
Nov 24 09:20:10 compute-1 sudo[45175]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:20:10 compute-1 python3.9[45177]: ansible-ansible.builtin.getent Invoked with database=passwd key=openvswitch fail_key=True service=None split=None
Nov 24 09:20:10 compute-1 sudo[45175]: pam_unix(sudo:session): session closed for user root
Nov 24 09:20:11 compute-1 sudo[45328]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cidiioiaavhrajsqfmfzzrpztcjxoxng ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976010.6814716-93-235529811691567/AnsiballZ_group.py'
Nov 24 09:20:11 compute-1 sudo[45328]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:20:11 compute-1 python3.9[45330]: ansible-ansible.builtin.group Invoked with gid=42476 name=openvswitch state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Nov 24 09:20:11 compute-1 groupadd[45331]: group added to /etc/group: name=openvswitch, GID=42476
Nov 24 09:20:11 compute-1 groupadd[45331]: group added to /etc/gshadow: name=openvswitch
Nov 24 09:20:11 compute-1 groupadd[45331]: new group: name=openvswitch, GID=42476
Nov 24 09:20:11 compute-1 sudo[45328]: pam_unix(sudo:session): session closed for user root
Nov 24 09:20:12 compute-1 sudo[45486]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-joxoqblnjqrpyangwpjvbhpwkgntnwmp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976011.5930033-117-202455948478197/AnsiballZ_user.py'
Nov 24 09:20:12 compute-1 sudo[45486]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:20:12 compute-1 python3.9[45488]: ansible-ansible.builtin.user Invoked with comment=openvswitch user group=openvswitch groups=['hugetlbfs'] name=openvswitch shell=/sbin/nologin state=present uid=42476 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-1 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Nov 24 09:20:12 compute-1 useradd[45490]: new user: name=openvswitch, UID=42476, GID=42476, home=/home/openvswitch, shell=/sbin/nologin, from=/dev/pts/0
Nov 24 09:20:12 compute-1 useradd[45490]: add 'openvswitch' to group 'hugetlbfs'
Nov 24 09:20:12 compute-1 useradd[45490]: add 'openvswitch' to shadow group 'hugetlbfs'
Nov 24 09:20:12 compute-1 sudo[45486]: pam_unix(sudo:session): session closed for user root
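As with qemu earlier, the openvswitch account is created with pinned IDs (UID/GID 42476) and added to the hugetlbfs group (GID 42477) so the vswitch can map hugepage-backed memory for DPDK datapaths. Equivalent commands:

    sudo groupadd -g 42476 openvswitch
    sudo useradd -u 42476 -g openvswitch -G hugetlbfs \
         -c 'openvswitch user' -s /sbin/nologin openvswitch
    id openvswitch   # expect uid=42476 gid=42476 groups include 42477(hugetlbfs)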
Nov 24 09:20:13 compute-1 sudo[45646]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dtfvalikqaiadncxtmotrfdiwvcfttzi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976012.7904427-147-157495899403630/AnsiballZ_setup.py'
Nov 24 09:20:13 compute-1 sudo[45646]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:20:13 compute-1 python3.9[45648]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 24 09:20:13 compute-1 sudo[45646]: pam_unix(sudo:session): session closed for user root
Nov 24 09:20:13 compute-1 sudo[45730]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jxjtycsuigcfhmugzpzwhaukysjkhzxg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976012.7904427-147-157495899403630/AnsiballZ_dnf.py'
Nov 24 09:20:13 compute-1 sudo[45730]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:20:14 compute-1 python3.9[45732]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openvswitch'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Nov 24 09:20:16 compute-1 sudo[45730]: pam_unix(sudo:session): session closed for user root
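[Editor's note] The dnf module is invoked twice for openvswitch: first with download_only=true (above), then with state=present at 09:20:17 below, separating the network-bound download from the actual install transaction. A sketch of the pair, task names assumed, parameters as logged:

```yaml
- name: Pre-download openvswitch packages
  become: true
  ansible.builtin.dnf:
    name: openvswitch
    download_only: true

- name: Install openvswitch
  become: true
  ansible.builtin.dnf:
    name: openvswitch
    state: present
```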
Nov 24 09:20:17 compute-1 sudo[45894]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uetlhsrfanalbqtcyscfeewvqieqwgux ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976016.791723-189-47684027744367/AnsiballZ_dnf.py'
Nov 24 09:20:17 compute-1 sudo[45894]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:20:17 compute-1 python3.9[45896]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 24 09:20:28 compute-1 kernel: SELinux:  Converting 2731 SID table entries...
Nov 24 09:20:28 compute-1 kernel: SELinux:  policy capability network_peer_controls=1
Nov 24 09:20:28 compute-1 kernel: SELinux:  policy capability open_perms=1
Nov 24 09:20:28 compute-1 kernel: SELinux:  policy capability extended_socket_class=1
Nov 24 09:20:28 compute-1 kernel: SELinux:  policy capability always_check_network=0
Nov 24 09:20:28 compute-1 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 24 09:20:28 compute-1 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 24 09:20:28 compute-1 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 24 09:20:28 compute-1 groupadd[45919]: group added to /etc/group: name=unbound, GID=993
Nov 24 09:20:28 compute-1 groupadd[45919]: group added to /etc/gshadow: name=unbound
Nov 24 09:20:28 compute-1 groupadd[45919]: new group: name=unbound, GID=993
Nov 24 09:20:28 compute-1 useradd[45927]: new user: name=unbound, UID=993, GID=993, home=/var/lib/unbound, shell=/sbin/nologin, from=none
Nov 24 09:20:28 compute-1 dbus-broker-launch[809]: avc:  op=load_policy lsm=selinux seqno=9 res=1
Nov 24 09:20:28 compute-1 systemd[1]: Started daily update of the root trust anchor for DNSSEC.
Nov 24 09:20:29 compute-1 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 24 09:20:29 compute-1 systemd[1]: Starting man-db-cache-update.service...
Nov 24 09:20:29 compute-1 systemd[1]: Reloading.
Nov 24 09:20:29 compute-1 systemd-rc-local-generator[46426]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 09:20:29 compute-1 systemd-sysv-generator[46429]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 09:20:29 compute-1 systemd[1]: Queuing reload/restart jobs for marked units…
Nov 24 09:20:30 compute-1 sudo[45894]: pam_unix(sudo:session): session closed for user root
Nov 24 09:20:30 compute-1 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 24 09:20:30 compute-1 systemd[1]: Finished man-db-cache-update.service.
Nov 24 09:20:30 compute-1 systemd[1]: run-r10bde9b927b2472fbc8a3af1356e8ccd.service: Deactivated successfully.
Nov 24 09:20:31 compute-1 sudo[46994]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wxyleyvrcbaabmqdaxcrlxqvhlmozusv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976031.0807884-213-197937494400394/AnsiballZ_systemd.py'
Nov 24 09:20:31 compute-1 sudo[46994]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:20:31 compute-1 python3.9[46996]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 24 09:20:32 compute-1 systemd[1]: Reloading.
Nov 24 09:20:32 compute-1 systemd-rc-local-generator[47026]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 09:20:32 compute-1 systemd-sysv-generator[47031]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 09:20:32 compute-1 systemd[1]: Starting Open vSwitch Database Unit...
Nov 24 09:20:32 compute-1 chown[47038]: /usr/bin/chown: cannot access '/run/openvswitch': No such file or directory
Nov 24 09:20:32 compute-1 ovs-ctl[47043]: /etc/openvswitch/conf.db does not exist ... (warning).
Nov 24 09:20:32 compute-1 ovs-ctl[47043]: Creating empty database /etc/openvswitch/conf.db [  OK  ]
Nov 24 09:20:32 compute-1 ovs-ctl[47043]: Starting ovsdb-server [  OK  ]
Nov 24 09:20:32 compute-1 ovs-vsctl[47092]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait -- init -- set Open_vSwitch . db-version=8.5.1
Nov 24 09:20:32 compute-1 ovs-vsctl[47108]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait set Open_vSwitch . ovs-version=3.3.5-115.el9s "external-ids:system-id=\"803b139a-7fca-4549-8597-645cf677225d\"" "external-ids:rundir=\"/var/run/openvswitch\"" "system-type=\"centos\"" "system-version=\"9\""
Nov 24 09:20:32 compute-1 ovs-ctl[47043]: Configuring Open vSwitch system IDs [  OK  ]
Nov 24 09:20:32 compute-1 ovs-ctl[47043]: Enabling remote OVSDB managers [  OK  ]
Nov 24 09:20:32 compute-1 ovs-vsctl[47118]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-1
Nov 24 09:20:32 compute-1 systemd[1]: Started Open vSwitch Database Unit.
Nov 24 09:20:32 compute-1 systemd[1]: Starting Open vSwitch Delete Transient Ports...
Nov 24 09:20:32 compute-1 systemd[1]: Finished Open vSwitch Delete Transient Ports.
Nov 24 09:20:32 compute-1 systemd[1]: Starting Open vSwitch Forwarding Unit...
Nov 24 09:20:32 compute-1 kernel: openvswitch: Open vSwitch switching datapath
Nov 24 09:20:32 compute-1 ovs-ctl[47163]: Inserting openvswitch module [  OK  ]
Nov 24 09:20:32 compute-1 ovs-ctl[47131]: Starting ovs-vswitchd [  OK  ]
Nov 24 09:20:32 compute-1 ovs-vsctl[47181]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-1
Nov 24 09:20:32 compute-1 ovs-ctl[47131]: Enabling remote OVSDB managers [  OK  ]
Nov 24 09:20:32 compute-1 systemd[1]: Started Open vSwitch Forwarding Unit.
Nov 24 09:20:32 compute-1 systemd[1]: Starting Open vSwitch...
Nov 24 09:20:32 compute-1 systemd[1]: Finished Open vSwitch.
Nov 24 09:20:32 compute-1 sudo[46994]: pam_unix(sudo:session): session closed for user root
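[Editor's note] The systemd module call at 09:20:31 both enables and starts openvswitch.service; the first start is what creates /etc/openvswitch/conf.db, so the preceding chown error and the "conf.db does not exist" warning from ovs-ctl are expected on a fresh node. A sketch matching the logged parameters:

```yaml
- name: Enable and start Open vSwitch
  become: true
  ansible.builtin.systemd:
    name: openvswitch.service
    enabled: true
    masked: false
    state: started
```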
Nov 24 09:20:33 compute-1 python3.9[47332]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 24 09:20:34 compute-1 sudo[47482]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fglemirmgvsiijwbgewsmrzucdgimtsl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976034.342918-267-41599416551768/AnsiballZ_sefcontext.py'
Nov 24 09:20:34 compute-1 sudo[47482]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:20:35 compute-1 python3.9[47484]: ansible-community.general.sefcontext Invoked with selevel=s0 setype=container_file_t state=present target=/var/lib/edpm-config(/.*)? ignore_selinux_state=False ftype=a reload=True substitute=None seuser=None
Nov 24 09:20:36 compute-1 kernel: SELinux:  Converting 2745 SID table entries...
Nov 24 09:20:36 compute-1 kernel: SELinux:  policy capability network_peer_controls=1
Nov 24 09:20:36 compute-1 kernel: SELinux:  policy capability open_perms=1
Nov 24 09:20:36 compute-1 kernel: SELinux:  policy capability extended_socket_class=1
Nov 24 09:20:36 compute-1 kernel: SELinux:  policy capability always_check_network=0
Nov 24 09:20:36 compute-1 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 24 09:20:36 compute-1 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 24 09:20:36 compute-1 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 24 09:20:36 compute-1 sudo[47482]: pam_unix(sudo:session): session closed for user root
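[Editor's note] The sefcontext invocation registers a persistent SELinux file-context rule so anything under /var/lib/edpm-config gets labeled container_file_t; reload=true is what triggers the kernel's second SID-table conversion at 09:20:36. Sketch with parameters from the log:

```yaml
- name: Label /var/lib/edpm-config for container access
  become: true
  community.general.sefcontext:
    target: '/var/lib/edpm-config(/.*)?'
    setype: container_file_t
    selevel: s0
    state: present
    reload: true   # reload policy immediately, as seen in the kernel log
```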
Nov 24 09:20:37 compute-1 python3.9[47639]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 24 09:20:38 compute-1 sudo[47795]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pxwfitlmesbhfxykcgukebrjzzpuybsd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976037.8424127-321-196976579963034/AnsiballZ_dnf.py'
Nov 24 09:20:38 compute-1 dbus-broker-launch[809]: avc:  op=load_policy lsm=selinux seqno=10 res=1
Nov 24 09:20:38 compute-1 sudo[47795]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:20:38 compute-1 python3.9[47797]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 24 09:20:39 compute-1 sudo[47795]: pam_unix(sudo:session): session closed for user root
Nov 24 09:20:40 compute-1 sudo[47948]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gnyhscqlspqprupkxzuxiyyrlkhcyllw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976039.8786225-345-280419983969571/AnsiballZ_command.py'
Nov 24 09:20:40 compute-1 sudo[47948]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:20:40 compute-1 python3.9[47950]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 09:20:41 compute-1 sudo[47948]: pam_unix(sudo:session): session closed for user root
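[Editor's note] After installing the baseline package set, the play verifies it with `rpm -V` via ansible.legacy.command; `rpm -V` exits non-zero if any file of a listed package has been altered. A sketch of the check — task name and changed_when are assumptions; the package list is the one logged:

```yaml
- name: Verify baseline packages against the RPM database
  become: true
  ansible.builtin.command:
    cmd: >-
      rpm -V driverctl lvm2 crudini jq nftables NetworkManager
      openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch
      sysstat iproute-tc ksmtuned systemd-container
      crypto-policies-scripts grubby sos
  changed_when: false   # pure check; assumed, not visible in the log
```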
Nov 24 09:20:41 compute-1 sudo[48235]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xbaalqbwcepjicnmawxhcczjfsqzzuqm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976041.5483358-369-43681694730572/AnsiballZ_file.py'
Nov 24 09:20:41 compute-1 sudo[48235]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:20:42 compute-1 python3.9[48237]: ansible-ansible.builtin.file Invoked with mode=0750 path=/var/lib/edpm-config selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Nov 24 09:20:42 compute-1 sudo[48235]: pam_unix(sudo:session): session closed for user root
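[Editor's note] The file task then creates the directory itself with the matching label and a 0750 mode, so the sefcontext rule and the on-disk state agree. Sketch from the logged parameters:

```yaml
- name: Create /var/lib/edpm-config
  become: true
  ansible.builtin.file:
    path: /var/lib/edpm-config
    state: directory
    mode: '0750'
    setype: container_file_t
    selevel: s0
```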
Nov 24 09:20:43 compute-1 python3.9[48387]: ansible-ansible.builtin.stat Invoked with path=/etc/cloud/cloud.cfg.d follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 09:20:43 compute-1 sudo[48539]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lzmxzflbrzqvfvnpydyzaoqywoueijnm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976043.442696-417-269247852559122/AnsiballZ_dnf.py'
Nov 24 09:20:43 compute-1 sudo[48539]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:20:43 compute-1 python3.9[48541]: ansible-ansible.legacy.dnf Invoked with name=['NetworkManager-ovs'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 24 09:20:45 compute-1 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 24 09:20:45 compute-1 systemd[1]: Starting man-db-cache-update.service...
Nov 24 09:20:45 compute-1 systemd[1]: Reloading.
Nov 24 09:20:45 compute-1 systemd-rc-local-generator[48579]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 09:20:45 compute-1 systemd-sysv-generator[48582]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 09:20:45 compute-1 systemd[1]: Queuing reload/restart jobs for marked units…
Nov 24 09:20:46 compute-1 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 24 09:20:46 compute-1 systemd[1]: Finished man-db-cache-update.service.
Nov 24 09:20:46 compute-1 systemd[1]: run-r068d288e9acf47acb77677aa89baf31b.service: Deactivated successfully.
Nov 24 09:20:46 compute-1 sudo[48539]: pam_unix(sudo:session): session closed for user root
Nov 24 09:20:46 compute-1 sudo[48855]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gbkleawqpeuxncgfydinpntishwvgllp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976046.4799933-441-30278948741914/AnsiballZ_systemd.py'
Nov 24 09:20:46 compute-1 sudo[48855]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:20:47 compute-1 python3.9[48857]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 24 09:20:47 compute-1 systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Nov 24 09:20:47 compute-1 systemd[1]: Stopped Network Manager Wait Online.
Nov 24 09:20:47 compute-1 systemd[1]: Stopping Network Manager Wait Online...
Nov 24 09:20:47 compute-1 systemd[1]: Stopping Network Manager...
Nov 24 09:20:47 compute-1 NetworkManager[7190]: <info>  [1763976047.0921] caught SIGTERM, shutting down normally.
Nov 24 09:20:47 compute-1 NetworkManager[7190]: <info>  [1763976047.0937] dhcp4 (eth0): canceled DHCP transaction
Nov 24 09:20:47 compute-1 NetworkManager[7190]: <info>  [1763976047.0937] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Nov 24 09:20:47 compute-1 NetworkManager[7190]: <info>  [1763976047.0937] dhcp4 (eth0): state changed no lease
Nov 24 09:20:47 compute-1 NetworkManager[7190]: <info>  [1763976047.0939] manager: NetworkManager state is now CONNECTED_SITE
Nov 24 09:20:47 compute-1 NetworkManager[7190]: <info>  [1763976047.1021] exiting (success)
Nov 24 09:20:47 compute-1 systemd[1]: Starting Network Manager Script Dispatcher Service...
Nov 24 09:20:47 compute-1 systemd[1]: Started Network Manager Script Dispatcher Service.
Nov 24 09:20:47 compute-1 systemd[1]: NetworkManager.service: Deactivated successfully.
Nov 24 09:20:47 compute-1 systemd[1]: Stopped Network Manager.
Nov 24 09:20:47 compute-1 systemd[1]: NetworkManager.service: Consumed 9.382s CPU time, 4.3M memory peak, read 0B from disk, written 31.5K to disk.
Nov 24 09:20:47 compute-1 systemd[1]: Starting Network Manager...
Nov 24 09:20:47 compute-1 NetworkManager[48870]: <info>  [1763976047.1859] NetworkManager (version 1.54.1-1.el9) is starting... (after a restart, boot:e3886539-ea72-4427-b33b-0060f8fadd32)
Nov 24 09:20:47 compute-1 NetworkManager[48870]: <info>  [1763976047.1861] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Nov 24 09:20:47 compute-1 NetworkManager[48870]: <info>  [1763976047.1914] manager[0x556aaa69a090]: monitoring kernel firmware directory '/lib/firmware'.
Nov 24 09:20:47 compute-1 systemd[1]: Starting Hostname Service...
Nov 24 09:20:47 compute-1 systemd[1]: Started Hostname Service.
Nov 24 09:20:47 compute-1 NetworkManager[48870]: <info>  [1763976047.2745] hostname: hostname: using hostnamed
Nov 24 09:20:47 compute-1 NetworkManager[48870]: <info>  [1763976047.2747] hostname: static hostname changed from (none) to "compute-1"
Nov 24 09:20:47 compute-1 NetworkManager[48870]: <info>  [1763976047.2751] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Nov 24 09:20:47 compute-1 NetworkManager[48870]: <info>  [1763976047.2755] manager[0x556aaa69a090]: rfkill: Wi-Fi hardware radio set enabled
Nov 24 09:20:47 compute-1 NetworkManager[48870]: <info>  [1763976047.2756] manager[0x556aaa69a090]: rfkill: WWAN hardware radio set enabled
Nov 24 09:20:47 compute-1 NetworkManager[48870]: <info>  [1763976047.2775] Loaded device plugin: NMOvsFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-ovs.so)
Nov 24 09:20:47 compute-1 NetworkManager[48870]: <info>  [1763976047.2782] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-team.so)
Nov 24 09:20:47 compute-1 NetworkManager[48870]: <info>  [1763976047.2782] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Nov 24 09:20:47 compute-1 NetworkManager[48870]: <info>  [1763976047.2783] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Nov 24 09:20:47 compute-1 NetworkManager[48870]: <info>  [1763976047.2783] manager: Networking is enabled by state file
Nov 24 09:20:47 compute-1 NetworkManager[48870]: <info>  [1763976047.2785] settings: Loaded settings plugin: keyfile (internal)
Nov 24 09:20:47 compute-1 NetworkManager[48870]: <info>  [1763976047.2788] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Nov 24 09:20:47 compute-1 NetworkManager[48870]: <info>  [1763976047.2809] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Nov 24 09:20:47 compute-1 NetworkManager[48870]: <info>  [1763976047.2817] dhcp: init: Using DHCP client 'internal'
Nov 24 09:20:47 compute-1 NetworkManager[48870]: <info>  [1763976047.2819] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Nov 24 09:20:47 compute-1 NetworkManager[48870]: <info>  [1763976047.2823] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 24 09:20:47 compute-1 NetworkManager[48870]: <info>  [1763976047.2827] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Nov 24 09:20:47 compute-1 NetworkManager[48870]: <info>  [1763976047.2834] device (lo): Activation: starting connection 'lo' (3dc9a73f-5008-4d54-b1f5-ae0263930821)
Nov 24 09:20:47 compute-1 NetworkManager[48870]: <info>  [1763976047.2839] device (eth0): carrier: link connected
Nov 24 09:20:47 compute-1 NetworkManager[48870]: <info>  [1763976047.2844] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Nov 24 09:20:47 compute-1 NetworkManager[48870]: <info>  [1763976047.2848] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Nov 24 09:20:47 compute-1 NetworkManager[48870]: <info>  [1763976047.2848] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Nov 24 09:20:47 compute-1 NetworkManager[48870]: <info>  [1763976047.2853] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Nov 24 09:20:47 compute-1 NetworkManager[48870]: <info>  [1763976047.2858] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Nov 24 09:20:47 compute-1 NetworkManager[48870]: <info>  [1763976047.2863] device (eth1): carrier: link connected
Nov 24 09:20:47 compute-1 NetworkManager[48870]: <info>  [1763976047.2867] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Nov 24 09:20:47 compute-1 NetworkManager[48870]: <info>  [1763976047.2871] manager: (eth1): assume: will attempt to assume matching connection 'ci-private-network' (eed6ff3f-ed68-533f-b181-f50564eca501) (indicated)
Nov 24 09:20:47 compute-1 NetworkManager[48870]: <info>  [1763976047.2871] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Nov 24 09:20:47 compute-1 NetworkManager[48870]: <info>  [1763976047.2875] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Nov 24 09:20:47 compute-1 NetworkManager[48870]: <info>  [1763976047.2880] device (eth1): Activation: starting connection 'ci-private-network' (eed6ff3f-ed68-533f-b181-f50564eca501)
Nov 24 09:20:47 compute-1 systemd[1]: Started Network Manager.
Nov 24 09:20:47 compute-1 NetworkManager[48870]: <info>  [1763976047.2892] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Nov 24 09:20:47 compute-1 NetworkManager[48870]: <info>  [1763976047.2899] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Nov 24 09:20:47 compute-1 NetworkManager[48870]: <info>  [1763976047.2901] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Nov 24 09:20:47 compute-1 NetworkManager[48870]: <info>  [1763976047.2902] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Nov 24 09:20:47 compute-1 NetworkManager[48870]: <info>  [1763976047.2904] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Nov 24 09:20:47 compute-1 NetworkManager[48870]: <info>  [1763976047.2907] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Nov 24 09:20:47 compute-1 NetworkManager[48870]: <info>  [1763976047.2909] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Nov 24 09:20:47 compute-1 NetworkManager[48870]: <info>  [1763976047.2911] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Nov 24 09:20:47 compute-1 NetworkManager[48870]: <info>  [1763976047.2913] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Nov 24 09:20:47 compute-1 NetworkManager[48870]: <info>  [1763976047.2917] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Nov 24 09:20:47 compute-1 NetworkManager[48870]: <info>  [1763976047.2919] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Nov 24 09:20:47 compute-1 NetworkManager[48870]: <info>  [1763976047.2926] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Nov 24 09:20:47 compute-1 NetworkManager[48870]: <info>  [1763976047.2936] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Nov 24 09:20:47 compute-1 NetworkManager[48870]: <info>  [1763976047.2946] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Nov 24 09:20:47 compute-1 NetworkManager[48870]: <info>  [1763976047.2948] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Nov 24 09:20:47 compute-1 NetworkManager[48870]: <info>  [1763976047.2955] device (lo): Activation: successful, device activated.
Nov 24 09:20:47 compute-1 NetworkManager[48870]: <info>  [1763976047.2965] dhcp4 (eth0): state changed new lease, address=38.129.56.228
Nov 24 09:20:47 compute-1 NetworkManager[48870]: <info>  [1763976047.2973] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Nov 24 09:20:47 compute-1 systemd[1]: Starting Network Manager Wait Online...
Nov 24 09:20:47 compute-1 NetworkManager[48870]: <info>  [1763976047.3037] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Nov 24 09:20:47 compute-1 NetworkManager[48870]: <info>  [1763976047.3042] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Nov 24 09:20:47 compute-1 NetworkManager[48870]: <info>  [1763976047.3047] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Nov 24 09:20:47 compute-1 NetworkManager[48870]: <info>  [1763976047.3051] manager: NetworkManager state is now CONNECTED_LOCAL
Nov 24 09:20:47 compute-1 NetworkManager[48870]: <info>  [1763976047.3054] device (eth1): Activation: successful, device activated.
Nov 24 09:20:47 compute-1 NetworkManager[48870]: <info>  [1763976047.3061] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Nov 24 09:20:47 compute-1 NetworkManager[48870]: <info>  [1763976047.3062] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Nov 24 09:20:47 compute-1 NetworkManager[48870]: <info>  [1763976047.3065] manager: NetworkManager state is now CONNECTED_SITE
Nov 24 09:20:47 compute-1 NetworkManager[48870]: <info>  [1763976047.3068] device (eth0): Activation: successful, device activated.
Nov 24 09:20:47 compute-1 NetworkManager[48870]: <info>  [1763976047.3071] manager: NetworkManager state is now CONNECTED_GLOBAL
Nov 24 09:20:47 compute-1 NetworkManager[48870]: <info>  [1763976047.3073] manager: startup complete
Nov 24 09:20:47 compute-1 systemd[1]: Finished Network Manager Wait Online.
Nov 24 09:20:47 compute-1 sudo[48855]: pam_unix(sudo:session): session closed for user root
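[Editor's note] The restart at 09:20:47 is what makes the freshly installed NetworkManager-ovs plugin effective: the new instance (PID 48870) logs "Loaded device plugin: NMOvsFactory", which the pre-restart instance could not. Sketch of the logged task, name assumed:

```yaml
- name: Restart NetworkManager to load the OVS plugin
  become: true
  ansible.builtin.systemd:
    name: NetworkManager
    state: restarted
```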
Nov 24 09:20:47 compute-1 sudo[49081]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wfiodkgrqsxypsytvzsslntfgtsrsdwq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976047.618335-465-228100087647984/AnsiballZ_dnf.py'
Nov 24 09:20:47 compute-1 sudo[49081]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:20:48 compute-1 python3.9[49083]: ansible-ansible.legacy.dnf Invoked with name=['os-net-config'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 24 09:20:52 compute-1 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 24 09:20:52 compute-1 systemd[1]: Starting man-db-cache-update.service...
Nov 24 09:20:52 compute-1 systemd[1]: Reloading.
Nov 24 09:20:52 compute-1 systemd-rc-local-generator[49134]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 09:20:52 compute-1 systemd-sysv-generator[49137]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 09:20:52 compute-1 systemd[1]: Queuing reload/restart jobs for marked units…
Nov 24 09:20:53 compute-1 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 24 09:20:53 compute-1 systemd[1]: Finished man-db-cache-update.service.
Nov 24 09:20:53 compute-1 systemd[1]: run-r0618bd78c0cd45cd976e1acc0f0b9c7b.service: Deactivated successfully.
Nov 24 09:20:53 compute-1 sudo[49081]: pam_unix(sudo:session): session closed for user root
Nov 24 09:20:54 compute-1 sudo[49541]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xlfsrvedllymkpkokhdozwkbdjtlcboi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976054.0464478-501-170626564565398/AnsiballZ_stat.py'
Nov 24 09:20:54 compute-1 sudo[49541]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:20:54 compute-1 python3.9[49543]: ansible-ansible.builtin.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 09:20:54 compute-1 sudo[49541]: pam_unix(sudo:session): session closed for user root
Nov 24 09:20:55 compute-1 sudo[49693]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mzjctrkedhphwghvvkhdlkzwjpseyhne ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976054.7489831-528-1450637318478/AnsiballZ_ini_file.py'
Nov 24 09:20:55 compute-1 sudo[49693]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:20:55 compute-1 python3.9[49695]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=no-auto-default path=/etc/NetworkManager/NetworkManager.conf section=main state=present value=* exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:20:55 compute-1 sudo[49693]: pam_unix(sudo:session): session closed for user root
Nov 24 09:20:55 compute-1 sudo[49847]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-utuacpxdxomqsycnonybxpmlixbawjmv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976055.6685386-558-244890317700618/AnsiballZ_ini_file.py'
Nov 24 09:20:55 compute-1 sudo[49847]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:20:56 compute-1 python3.9[49849]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:20:56 compute-1 sudo[49847]: pam_unix(sudo:session): session closed for user root
Nov 24 09:20:56 compute-1 sudo[49999]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-htbijbovjmawtfmabcnamsvjomubvdkm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976056.2053568-558-141105963855871/AnsiballZ_ini_file.py'
Nov 24 09:20:56 compute-1 sudo[49999]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:20:56 compute-1 python3.9[50001]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:20:56 compute-1 sudo[49999]: pam_unix(sudo:session): session closed for user root
Nov 24 09:20:57 compute-1 sudo[50151]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hqgophksqeuycrcdlqtyuxoztnvwxpck ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976056.957316-603-154498508076890/AnsiballZ_ini_file.py'
Nov 24 09:20:57 compute-1 sudo[50151]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:20:57 compute-1 python3.9[50153]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:20:57 compute-1 sudo[50151]: pam_unix(sudo:session): session closed for user root
Nov 24 09:20:57 compute-1 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Nov 24 09:20:57 compute-1 sudo[50303]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gdlwjchfrxxgwxgrixdszvzbvmmagggo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976057.5395973-603-193868062721595/AnsiballZ_ini_file.py'
Nov 24 09:20:57 compute-1 sudo[50303]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:20:57 compute-1 python3.9[50305]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:20:57 compute-1 sudo[50303]: pam_unix(sudo:session): session closed for user root
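[Editor's note] The five ini_file invocations between 09:20:55 and 09:20:57 normalize NetworkManager's configuration ahead of os-net-config: they set no-auto-default=* in NetworkManager.conf and remove any dns=none and rc-manager=unmanaged overrides from both NetworkManager.conf and the cloud-init drop-in, so NetworkManager keeps managing DNS and resolv.conf. The first two tasks sketched below; per the log, the remaining three vary only option, path, and state:

```yaml
- name: Disable automatic default connections
  become: true
  community.general.ini_file:
    path: /etc/NetworkManager/NetworkManager.conf
    section: main
    option: no-auto-default
    value: '*'
    no_extra_spaces: true
    mode: '0644'
    backup: true

- name: Remove dns=none so NetworkManager manages DNS
  become: true
  community.general.ini_file:
    path: /etc/NetworkManager/NetworkManager.conf
    section: main
    option: dns
    value: none
    state: absent    # delete the option if present with this value
    mode: '0644'
    backup: true
```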
Nov 24 09:20:58 compute-1 sudo[50455]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xbpphwcfwsbjdnyukykehplrlrpfgfat ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976058.335315-648-9885838827148/AnsiballZ_stat.py'
Nov 24 09:20:58 compute-1 sudo[50455]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:20:58 compute-1 python3.9[50457]: ansible-ansible.legacy.stat Invoked with path=/etc/dhcp/dhclient-enter-hooks follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 09:20:58 compute-1 sudo[50455]: pam_unix(sudo:session): session closed for user root
Nov 24 09:20:59 compute-1 sudo[50578]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vdomhhootxnkfwnkkbgaispskrjjxahy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976058.335315-648-9885838827148/AnsiballZ_copy.py'
Nov 24 09:20:59 compute-1 sudo[50578]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:20:59 compute-1 python3.9[50580]: ansible-ansible.legacy.copy Invoked with dest=/etc/dhcp/dhclient-enter-hooks mode=0755 src=/home/zuul/.ansible/tmp/ansible-tmp-1763976058.335315-648-9885838827148/.source _original_basename=.51bezzs2 follow=False checksum=f6278a40de79a9841f6ed1fc584538225566990c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:20:59 compute-1 sudo[50578]: pam_unix(sudo:session): session closed for user root
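[Editor's note] The copy call installs an executable dhclient enter hook; the payload itself is not logged (content=NOT_LOGGING_PARAMETER), so only the destination, mode, and checksum are recoverable. A sketch under that limitation — src is an assumed role file name:

```yaml
- name: Install dhclient enter hooks
  become: true
  ansible.builtin.copy:
    src: dhclient-enter-hooks   # assumed; actual content not logged
    dest: /etc/dhcp/dhclient-enter-hooks
    mode: '0755'
```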
Nov 24 09:21:00 compute-1 sudo[50730]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fzfvxxmhlgipcveujmcmssovxfrxyzts ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976059.7339458-693-215124191252510/AnsiballZ_file.py'
Nov 24 09:21:00 compute-1 sudo[50730]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:21:00 compute-1 python3.9[50732]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/os-net-config state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:21:00 compute-1 sudo[50730]: pam_unix(sudo:session): session closed for user root
Nov 24 09:21:00 compute-1 sudo[50882]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kpbthljeqilzzelexfaorgxqelgrivys ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976060.388301-717-190871873156889/AnsiballZ_edpm_os_net_config_mappings.py'
Nov 24 09:21:00 compute-1 sudo[50882]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:21:00 compute-1 python3.9[50884]: ansible-edpm_os_net_config_mappings Invoked with net_config_data_lookup={}
Nov 24 09:21:00 compute-1 sudo[50882]: pam_unix(sudo:session): session closed for user root
Nov 24 09:21:01 compute-1 sudo[51034]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iaxctzvttbcbfkcdugqmjvwxsstpbwsa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976061.2497091-744-16996613457893/AnsiballZ_file.py'
Nov 24 09:21:01 compute-1 sudo[51034]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:21:01 compute-1 python3.9[51036]: ansible-ansible.builtin.file Invoked with path=/var/lib/edpm-config/scripts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:21:01 compute-1 sudo[51034]: pam_unix(sudo:session): session closed for user root
Nov 24 09:21:02 compute-1 sudo[51186]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kwzjcekgdydqxdhqoakbtsvjhopxnuak ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976062.1036334-774-232066578270424/AnsiballZ_stat.py'
Nov 24 09:21:02 compute-1 sudo[51186]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:21:02 compute-1 sudo[51186]: pam_unix(sudo:session): session closed for user root
Nov 24 09:21:02 compute-1 sudo[51309]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wumesjykdjnnpwdbarnmcjjiowjksvpi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976062.1036334-774-232066578270424/AnsiballZ_copy.py'
Nov 24 09:21:02 compute-1 sudo[51309]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:21:03 compute-1 sudo[51309]: pam_unix(sudo:session): session closed for user root
Nov 24 09:21:03 compute-1 sudo[51461]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jmladlqgemyhnltnjteirnjpiawbpmrc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976063.4483316-819-217356666838798/AnsiballZ_slurp.py'
Nov 24 09:21:03 compute-1 sudo[51461]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:21:04 compute-1 python3.9[51463]: ansible-ansible.builtin.slurp Invoked with path=/etc/os-net-config/config.yaml src=/etc/os-net-config/config.yaml
Nov 24 09:21:04 compute-1 sudo[51461]: pam_unix(sudo:session): session closed for user root
Nov 24 09:21:05 compute-1 sudo[51636]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ejlebfunfiqjpauxbbaxjhqmrzibfied ; ANSIBLE_ASYNC_DIR=\'~/.ansible_async\' /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976064.3764477-846-45376300257665/async_wrapper.py j48334358143 300 /home/zuul/.ansible/tmp/ansible-tmp-1763976064.3764477-846-45376300257665/AnsiballZ_edpm_os_net_config.py _'
Nov 24 09:21:05 compute-1 sudo[51636]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:21:05 compute-1 ansible-async_wrapper.py[51638]: Invoked with j48334358143 300 /home/zuul/.ansible/tmp/ansible-tmp-1763976064.3764477-846-45376300257665/AnsiballZ_edpm_os_net_config.py _
Nov 24 09:21:05 compute-1 ansible-async_wrapper.py[51641]: Starting module and watcher
Nov 24 09:21:05 compute-1 ansible-async_wrapper.py[51641]: Start watching 51642 (300)
Nov 24 09:21:05 compute-1 ansible-async_wrapper.py[51642]: Start module (51642)
Nov 24 09:21:05 compute-1 ansible-async_wrapper.py[51638]: Return async_wrapper task started.
Nov 24 09:21:05 compute-1 sudo[51636]: pam_unix(sudo:session): session closed for user root
Nov 24 09:21:05 compute-1 python3.9[51643]: ansible-edpm_os_net_config Invoked with cleanup=True config_file=/etc/os-net-config/config.yaml debug=True detailed_exit_codes=True safe_defaults=False use_nmstate=True
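[Editor's note] os-net-config is deliberately run through Ansible's async wrapper (watcher timeout 300 s, per the `j48334358143 300` arguments) so the play survives the connectivity blip the reconfiguration may cause. A sketch combining the logged module options with assumed async settings:

```yaml
- name: Apply network configuration with os-net-config
  become: true
  edpm_os_net_config:            # collection module, as logged
    config_file: /etc/os-net-config/config.yaml
    cleanup: true
    debug: true
    detailed_exit_codes: true
    safe_defaults: false
    use_nmstate: true            # drive changes through NetworkManager
  async: 300   # matches the watcher timeout in the log
  poll: 3      # assumed poll interval; not visible in the log
```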
Nov 24 09:21:06 compute-1 kernel: cfg80211: Loading compiled-in X.509 certificates for regulatory database
Nov 24 09:21:06 compute-1 kernel: Loaded X.509 cert 'sforshee: 00b28ddf47aef9cea7'
Nov 24 09:21:06 compute-1 kernel: Loaded X.509 cert 'wens: 61c038651aabdcf94bd0ac7ff06c7248db18c600'
Nov 24 09:21:06 compute-1 kernel: platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
Nov 24 09:21:06 compute-1 kernel: cfg80211: failed to load regulatory.db
Nov 24 09:21:07 compute-1 NetworkManager[48870]: <info>  [1763976067.1331] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51644 uid=0 result="success"
Nov 24 09:21:07 compute-1 NetworkManager[48870]: <info>  [1763976067.1345] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51644 uid=0 result="success"
Nov 24 09:21:07 compute-1 NetworkManager[48870]: <info>  [1763976067.1844] manager: (br-ex): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/4)
Nov 24 09:21:07 compute-1 NetworkManager[48870]: <info>  [1763976067.1845] audit: op="connection-add" uuid="d35f6803-0e92-4bfd-97f1-ccee68d7d040" name="br-ex-br" pid=51644 uid=0 result="success"
Nov 24 09:21:07 compute-1 NetworkManager[48870]: <info>  [1763976067.1860] manager: (br-ex): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/5)
Nov 24 09:21:07 compute-1 NetworkManager[48870]: <info>  [1763976067.1861] audit: op="connection-add" uuid="95cfb4ea-b324-4465-8750-11bbf20cc936" name="br-ex-port" pid=51644 uid=0 result="success"
Nov 24 09:21:07 compute-1 NetworkManager[48870]: <info>  [1763976067.1873] manager: (eth1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/6)
Nov 24 09:21:07 compute-1 NetworkManager[48870]: <info>  [1763976067.1874] audit: op="connection-add" uuid="77db1bad-0624-4996-a4d3-ef8dfa37fc78" name="eth1-port" pid=51644 uid=0 result="success"
Nov 24 09:21:07 compute-1 NetworkManager[48870]: <info>  [1763976067.1885] manager: (vlan20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/7)
Nov 24 09:21:07 compute-1 NetworkManager[48870]: <info>  [1763976067.1886] audit: op="connection-add" uuid="7d7fc543-b162-40ec-b741-b4c932a38070" name="vlan20-port" pid=51644 uid=0 result="success"
Nov 24 09:21:07 compute-1 NetworkManager[48870]: <info>  [1763976067.1897] manager: (vlan21): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/8)
Nov 24 09:21:07 compute-1 NetworkManager[48870]: <info>  [1763976067.1898] audit: op="connection-add" uuid="07d677b0-58d6-4f26-a17d-bb6fe216da22" name="vlan21-port" pid=51644 uid=0 result="success"
Nov 24 09:21:07 compute-1 NetworkManager[48870]: <info>  [1763976067.1910] manager: (vlan22): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/9)
Nov 24 09:21:07 compute-1 NetworkManager[48870]: <info>  [1763976067.1911] audit: op="connection-add" uuid="38328766-b825-4c80-8aaa-43c40d6b880d" name="vlan22-port" pid=51644 uid=0 result="success"
Nov 24 09:21:07 compute-1 NetworkManager[48870]: <info>  [1763976067.1922] manager: (vlan23): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/10)
Nov 24 09:21:07 compute-1 NetworkManager[48870]: <info>  [1763976067.1923] audit: op="connection-add" uuid="7484f959-1f97-4645-ae70-84a4a9412fd4" name="vlan23-port" pid=51644 uid=0 result="success"
Nov 24 09:21:07 compute-1 NetworkManager[48870]: <info>  [1763976067.1942] audit: op="connection-update" uuid="5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03" name="System eth0" args="ipv6.dhcp-timeout,ipv6.method,ipv6.addr-gen-mode,802-3-ethernet.mtu,connection.timestamp,connection.autoconnect-priority,ipv4.dhcp-client-id,ipv4.dhcp-timeout" pid=51644 uid=0 result="success"
Nov 24 09:21:07 compute-1 NetworkManager[48870]: <info>  [1763976067.1959] manager: (br-ex): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/11)
Nov 24 09:21:07 compute-1 NetworkManager[48870]: <info>  [1763976067.1960] audit: op="connection-add" uuid="378995cd-982e-444e-b7a0-5d63ee4845e3" name="br-ex-if" pid=51644 uid=0 result="success"
Nov 24 09:21:07 compute-1 NetworkManager[48870]: <info>  [1763976067.1997] audit: op="connection-update" uuid="eed6ff3f-ed68-533f-b181-f50564eca501" name="ci-private-network" args="ipv6.routing-rules,ipv6.routes,ipv6.dns,ipv6.method,ipv6.addr-gen-mode,ipv6.addresses,connection.port-type,connection.master,connection.controller,connection.slave-type,connection.timestamp,ovs-external-ids.data,ipv4.never-default,ipv4.routing-rules,ipv4.dns,ipv4.method,ipv4.addresses,ipv4.routes,ovs-interface.type" pid=51644 uid=0 result="success"
Nov 24 09:21:07 compute-1 NetworkManager[48870]: <info>  [1763976067.2012] manager: (vlan20): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/12)
Nov 24 09:21:07 compute-1 NetworkManager[48870]: <info>  [1763976067.2014] audit: op="connection-add" uuid="1d905685-a79b-4db1-b617-8a5901f95b97" name="vlan20-if" pid=51644 uid=0 result="success"
Nov 24 09:21:07 compute-1 NetworkManager[48870]: <info>  [1763976067.2029] manager: (vlan21): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/13)
Nov 24 09:21:07 compute-1 NetworkManager[48870]: <info>  [1763976067.2032] audit: op="connection-add" uuid="b3e069b4-e07e-4c8c-934d-0f85b6caf1ac" name="vlan21-if" pid=51644 uid=0 result="success"
Nov 24 09:21:07 compute-1 NetworkManager[48870]: <info>  [1763976067.2048] manager: (vlan22): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/14)
Nov 24 09:21:07 compute-1 NetworkManager[48870]: <info>  [1763976067.2049] audit: op="connection-add" uuid="4d469ff9-403d-47f3-8256-03ede699020a" name="vlan22-if" pid=51644 uid=0 result="success"
Nov 24 09:21:07 compute-1 NetworkManager[48870]: <info>  [1763976067.2065] manager: (vlan23): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/15)
Nov 24 09:21:07 compute-1 NetworkManager[48870]: <info>  [1763976067.2066] audit: op="connection-add" uuid="b0b77b95-bf5f-405b-b2cb-4411bf049b86" name="vlan23-if" pid=51644 uid=0 result="success"
Nov 24 09:21:07 compute-1 NetworkManager[48870]: <info>  [1763976067.2077] audit: op="connection-delete" uuid="06cf09d6-5a4c-316f-86b1-330e0eaa7366" name="Wired connection 1" pid=51644 uid=0 result="success"
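[Editor's note] With use_nmstate=true, os-net-config applies the layout through a NetworkManager checkpoint (a rollback safety net) and a batch of connection-adds: an OVS bridge br-ex, its uplink port eth1, and internal VLAN interfaces vlan20–vlan23, after which the now-superfluous 'Wired connection 1' profile is deleted. A hypothetical reconstruction of the shape of /etc/os-net-config/config.yaml, inferred only from the device names above; addressing, MTU, and per-port options are not visible in the log and are omitted:

```yaml
# Hypothetical: reconstructed from the NetworkManager audit trail only.
network_config:
  - type: ovs_bridge
    name: br-ex
    members:
      - type: interface
        name: eth1
      - type: vlan
        vlan_id: 20
      - type: vlan
        vlan_id: 21
      - type: vlan
        vlan_id: 22
      - type: vlan
        vlan_id: 23
```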
Nov 24 09:21:07 compute-1 NetworkManager[48870]: <info>  [1763976067.2089] device (br-ex)[Open vSwitch Bridge]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 24 09:21:07 compute-1 NetworkManager[48870]: <info>  [1763976067.2099] device (br-ex)[Open vSwitch Bridge]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 24 09:21:07 compute-1 NetworkManager[48870]: <info>  [1763976067.2102] device (br-ex)[Open vSwitch Bridge]: Activation: starting connection 'br-ex-br' (d35f6803-0e92-4bfd-97f1-ccee68d7d040)
Nov 24 09:21:07 compute-1 NetworkManager[48870]: <info>  [1763976067.2103] audit: op="connection-activate" uuid="d35f6803-0e92-4bfd-97f1-ccee68d7d040" name="br-ex-br" pid=51644 uid=0 result="success"
Nov 24 09:21:07 compute-1 NetworkManager[48870]: <info>  [1763976067.2104] device (br-ex)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 24 09:21:07 compute-1 NetworkManager[48870]: <info>  [1763976067.2110] device (br-ex)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 24 09:21:07 compute-1 NetworkManager[48870]: <info>  [1763976067.2114] device (br-ex)[Open vSwitch Port]: Activation: starting connection 'br-ex-port' (95cfb4ea-b324-4465-8750-11bbf20cc936)
Nov 24 09:21:07 compute-1 NetworkManager[48870]: <info>  [1763976067.2116] device (eth1)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 24 09:21:07 compute-1 NetworkManager[48870]: <info>  [1763976067.2121] device (eth1)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 24 09:21:07 compute-1 NetworkManager[48870]: <info>  [1763976067.2124] device (eth1)[Open vSwitch Port]: Activation: starting connection 'eth1-port' (77db1bad-0624-4996-a4d3-ef8dfa37fc78)
Nov 24 09:21:07 compute-1 NetworkManager[48870]: <info>  [1763976067.2126] device (vlan20)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 24 09:21:07 compute-1 NetworkManager[48870]: <info>  [1763976067.2132] device (vlan20)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 24 09:21:07 compute-1 NetworkManager[48870]: <info>  [1763976067.2135] device (vlan20)[Open vSwitch Port]: Activation: starting connection 'vlan20-port' (7d7fc543-b162-40ec-b741-b4c932a38070)
Nov 24 09:21:07 compute-1 NetworkManager[48870]: <info>  [1763976067.2137] device (vlan21)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 24 09:21:07 compute-1 NetworkManager[48870]: <info>  [1763976067.2142] device (vlan21)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 24 09:21:07 compute-1 NetworkManager[48870]: <info>  [1763976067.2146] device (vlan21)[Open vSwitch Port]: Activation: starting connection 'vlan21-port' (07d677b0-58d6-4f26-a17d-bb6fe216da22)
Nov 24 09:21:07 compute-1 NetworkManager[48870]: <info>  [1763976067.2148] device (vlan22)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 24 09:21:07 compute-1 NetworkManager[48870]: <info>  [1763976067.2153] device (vlan22)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 24 09:21:07 compute-1 NetworkManager[48870]: <info>  [1763976067.2157] device (vlan22)[Open vSwitch Port]: Activation: starting connection 'vlan22-port' (38328766-b825-4c80-8aaa-43c40d6b880d)
Nov 24 09:21:07 compute-1 NetworkManager[48870]: <info>  [1763976067.2159] device (vlan23)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 24 09:21:07 compute-1 NetworkManager[48870]: <info>  [1763976067.2165] device (vlan23)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 24 09:21:07 compute-1 NetworkManager[48870]: <info>  [1763976067.2169] device (vlan23)[Open vSwitch Port]: Activation: starting connection 'vlan23-port' (7484f959-1f97-4645-ae70-84a4a9412fd4)
Nov 24 09:21:07 compute-1 NetworkManager[48870]: <info>  [1763976067.2176] device (br-ex)[Open vSwitch Bridge]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 24 09:21:07 compute-1 NetworkManager[48870]: <info>  [1763976067.2180] device (br-ex)[Open vSwitch Bridge]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 24 09:21:07 compute-1 NetworkManager[48870]: <info>  [1763976067.2182] device (br-ex)[Open vSwitch Bridge]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 24 09:21:07 compute-1 NetworkManager[48870]: <info>  [1763976067.2188] device (br-ex)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 24 09:21:07 compute-1 NetworkManager[48870]: <info>  [1763976067.2193] device (br-ex)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 24 09:21:07 compute-1 NetworkManager[48870]: <info>  [1763976067.2196] device (br-ex)[Open vSwitch Interface]: Activation: starting connection 'br-ex-if' (378995cd-982e-444e-b7a0-5d63ee4845e3)
Nov 24 09:21:07 compute-1 NetworkManager[48870]: <info>  [1763976067.2197] device (br-ex)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 24 09:21:07 compute-1 NetworkManager[48870]: <info>  [1763976067.2200] device (br-ex)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 24 09:21:07 compute-1 NetworkManager[48870]: <info>  [1763976067.2201] device (br-ex)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 24 09:21:07 compute-1 NetworkManager[48870]: <info>  [1763976067.2202] device (br-ex)[Open vSwitch Port]: Activation: connection 'br-ex-port' attached as port, continuing activation
Nov 24 09:21:07 compute-1 NetworkManager[48870]: <info>  [1763976067.2203] device (eth1): state change: activated -> deactivating (reason 'new-activation', managed-type: 'full')
Nov 24 09:21:07 compute-1 NetworkManager[48870]: <info>  [1763976067.2213] device (eth1): disconnecting for new activation request.
Nov 24 09:21:07 compute-1 NetworkManager[48870]: <info>  [1763976067.2214] device (eth1)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 24 09:21:07 compute-1 NetworkManager[48870]: <info>  [1763976067.2217] device (eth1)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 24 09:21:07 compute-1 NetworkManager[48870]: <info>  [1763976067.2218] device (eth1)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 24 09:21:07 compute-1 NetworkManager[48870]: <info>  [1763976067.2220] device (eth1)[Open vSwitch Port]: Activation: connection 'eth1-port' attached as port, continuing activation
Nov 24 09:21:07 compute-1 NetworkManager[48870]: <info>  [1763976067.2222] device (vlan20)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 24 09:21:07 compute-1 NetworkManager[48870]: <info>  [1763976067.2226] device (vlan20)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 24 09:21:07 compute-1 NetworkManager[48870]: <info>  [1763976067.2229] device (vlan20)[Open vSwitch Interface]: Activation: starting connection 'vlan20-if' (1d905685-a79b-4db1-b617-8a5901f95b97)
Nov 24 09:21:07 compute-1 NetworkManager[48870]: <info>  [1763976067.2230] device (vlan20)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 24 09:21:07 compute-1 NetworkManager[48870]: <info>  [1763976067.2233] device (vlan20)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 24 09:21:07 compute-1 NetworkManager[48870]: <info>  [1763976067.2234] device (vlan20)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 24 09:21:07 compute-1 NetworkManager[48870]: <info>  [1763976067.2235] device (vlan20)[Open vSwitch Port]: Activation: connection 'vlan20-port' attached as port, continuing activation
Nov 24 09:21:07 compute-1 NetworkManager[48870]: <info>  [1763976067.2237] device (vlan21)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 24 09:21:07 compute-1 NetworkManager[48870]: <info>  [1763976067.2242] device (vlan21)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 24 09:21:07 compute-1 NetworkManager[48870]: <info>  [1763976067.2246] device (vlan21)[Open vSwitch Interface]: Activation: starting connection 'vlan21-if' (b3e069b4-e07e-4c8c-934d-0f85b6caf1ac)
Nov 24 09:21:07 compute-1 NetworkManager[48870]: <info>  [1763976067.2246] device (vlan21)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 24 09:21:07 compute-1 NetworkManager[48870]: <info>  [1763976067.2250] device (vlan21)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 24 09:21:07 compute-1 NetworkManager[48870]: <info>  [1763976067.2252] device (vlan21)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 24 09:21:07 compute-1 NetworkManager[48870]: <info>  [1763976067.2253] device (vlan21)[Open vSwitch Port]: Activation: connection 'vlan21-port' attached as port, continuing activation
Nov 24 09:21:07 compute-1 NetworkManager[48870]: <info>  [1763976067.2257] device (vlan22)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 24 09:21:07 compute-1 NetworkManager[48870]: <info>  [1763976067.2262] device (vlan22)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 24 09:21:07 compute-1 NetworkManager[48870]: <info>  [1763976067.2267] device (vlan22)[Open vSwitch Interface]: Activation: starting connection 'vlan22-if' (4d469ff9-403d-47f3-8256-03ede699020a)
Nov 24 09:21:07 compute-1 NetworkManager[48870]: <info>  [1763976067.2268] device (vlan22)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 24 09:21:07 compute-1 NetworkManager[48870]: <info>  [1763976067.2270] device (vlan22)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 24 09:21:07 compute-1 NetworkManager[48870]: <info>  [1763976067.2273] device (vlan22)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 24 09:21:07 compute-1 NetworkManager[48870]: <info>  [1763976067.2274] device (vlan22)[Open vSwitch Port]: Activation: connection 'vlan22-port' attached as port, continuing activation
Nov 24 09:21:07 compute-1 NetworkManager[48870]: <info>  [1763976067.2277] device (vlan23)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 24 09:21:07 compute-1 NetworkManager[48870]: <info>  [1763976067.2281] device (vlan23)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 24 09:21:07 compute-1 NetworkManager[48870]: <info>  [1763976067.2284] device (vlan23)[Open vSwitch Interface]: Activation: starting connection 'vlan23-if' (b0b77b95-bf5f-405b-b2cb-4411bf049b86)
Nov 24 09:21:07 compute-1 NetworkManager[48870]: <info>  [1763976067.2285] device (vlan23)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 24 09:21:07 compute-1 NetworkManager[48870]: <info>  [1763976067.2288] device (vlan23)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 24 09:21:07 compute-1 NetworkManager[48870]: <info>  [1763976067.2290] device (vlan23)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 24 09:21:07 compute-1 NetworkManager[48870]: <info>  [1763976067.2292] device (vlan23)[Open vSwitch Port]: Activation: connection 'vlan23-port' attached as port, continuing activation
Nov 24 09:21:07 compute-1 NetworkManager[48870]: <info>  [1763976067.2293] device (br-ex)[Open vSwitch Bridge]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 24 09:21:07 compute-1 NetworkManager[48870]: <info>  [1763976067.2306] audit: op="device-reapply" interface="eth0" ifindex=2 args="ipv6.method,ipv6.addr-gen-mode,802-3-ethernet.mtu,connection.autoconnect-priority,ipv4.dhcp-client-id,ipv4.dhcp-timeout" pid=51644 uid=0 result="success"
Nov 24 09:21:07 compute-1 NetworkManager[48870]: <info>  [1763976067.2308] device (br-ex)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 24 09:21:07 compute-1 NetworkManager[48870]: <info>  [1763976067.2311] device (br-ex)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 24 09:21:07 compute-1 NetworkManager[48870]: <info>  [1763976067.2313] device (br-ex)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 24 09:21:07 compute-1 NetworkManager[48870]: <info>  [1763976067.2320] device (br-ex)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 24 09:21:07 compute-1 NetworkManager[48870]: <info>  [1763976067.2323] device (eth1)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 24 09:21:07 compute-1 NetworkManager[48870]: <info>  [1763976067.2327] device (vlan20)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 24 09:21:07 compute-1 NetworkManager[48870]: <info>  [1763976067.2331] device (vlan20)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 24 09:21:07 compute-1 NetworkManager[48870]: <info>  [1763976067.2332] device (vlan20)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 24 09:21:07 compute-1 NetworkManager[48870]: <info>  [1763976067.2337] device (vlan20)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 24 09:21:07 compute-1 NetworkManager[48870]: <info>  [1763976067.2341] device (vlan21)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 24 09:21:07 compute-1 kernel: ovs-system: entered promiscuous mode
Nov 24 09:21:07 compute-1 NetworkManager[48870]: <info>  [1763976067.2343] device (vlan21)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 24 09:21:07 compute-1 NetworkManager[48870]: <info>  [1763976067.2345] device (vlan21)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 24 09:21:07 compute-1 NetworkManager[48870]: <info>  [1763976067.2350] device (vlan21)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 24 09:21:07 compute-1 NetworkManager[48870]: <info>  [1763976067.2355] device (vlan22)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 24 09:21:07 compute-1 systemd-udevd[51648]: Network interface NamePolicy= disabled on kernel command line.
Nov 24 09:21:07 compute-1 kernel: Timeout policy base is empty
Nov 24 09:21:07 compute-1 NetworkManager[48870]: <info>  [1763976067.2358] device (vlan22)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 24 09:21:07 compute-1 NetworkManager[48870]: <info>  [1763976067.2360] device (vlan22)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 24 09:21:07 compute-1 NetworkManager[48870]: <info>  [1763976067.2366] device (vlan22)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 24 09:21:07 compute-1 NetworkManager[48870]: <info>  [1763976067.2370] device (vlan23)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 24 09:21:07 compute-1 NetworkManager[48870]: <info>  [1763976067.2373] device (vlan23)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 24 09:21:07 compute-1 NetworkManager[48870]: <info>  [1763976067.2375] device (vlan23)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 24 09:21:07 compute-1 NetworkManager[48870]: <info>  [1763976067.2379] device (vlan23)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 24 09:21:07 compute-1 NetworkManager[48870]: <info>  [1763976067.2383] dhcp4 (eth0): canceled DHCP transaction
Nov 24 09:21:07 compute-1 NetworkManager[48870]: <info>  [1763976067.2383] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Nov 24 09:21:07 compute-1 NetworkManager[48870]: <info>  [1763976067.2384] dhcp4 (eth0): state changed no lease
Nov 24 09:21:07 compute-1 NetworkManager[48870]: <info>  [1763976067.2386] dhcp4 (eth0): activation: beginning transaction (no timeout)
Nov 24 09:21:07 compute-1 NetworkManager[48870]: <info>  [1763976067.2396] device (br-ex)[Open vSwitch Interface]: Activation: connection 'br-ex-if' attached as port, continuing activation
Nov 24 09:21:07 compute-1 NetworkManager[48870]: <info>  [1763976067.2400] audit: op="device-reapply" interface="eth1" ifindex=3 pid=51644 uid=0 result="fail" reason="Device is not activated"
Nov 24 09:21:07 compute-1 NetworkManager[48870]: <info>  [1763976067.2406] device (vlan20)[Open vSwitch Interface]: Activation: connection 'vlan20-if' attached as port, continuing activation
Nov 24 09:21:07 compute-1 NetworkManager[48870]: <info>  [1763976067.2414] device (vlan21)[Open vSwitch Interface]: Activation: connection 'vlan21-if' attached as port, continuing activation
Nov 24 09:21:07 compute-1 NetworkManager[48870]: <info>  [1763976067.2421] device (eth1): disconnecting for new activation request.
Nov 24 09:21:07 compute-1 NetworkManager[48870]: <info>  [1763976067.2422] audit: op="connection-activate" uuid="eed6ff3f-ed68-533f-b181-f50564eca501" name="ci-private-network" pid=51644 uid=0 result="success"
Nov 24 09:21:07 compute-1 NetworkManager[48870]: <info>  [1763976067.2423] device (vlan22)[Open vSwitch Interface]: Activation: connection 'vlan22-if' attached as port, continuing activation
Nov 24 09:21:07 compute-1 NetworkManager[48870]: <info>  [1763976067.2426] dhcp4 (eth0): state changed new lease, address=38.129.56.228
Nov 24 09:21:07 compute-1 NetworkManager[48870]: <info>  [1763976067.2429] device (vlan23)[Open vSwitch Interface]: Activation: connection 'vlan23-if' attached as port, continuing activation
Nov 24 09:21:07 compute-1 systemd[1]: Starting Network Manager Script Dispatcher Service...
Nov 24 09:21:07 compute-1 NetworkManager[48870]: <info>  [1763976067.2483] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51644 uid=0 result="success"
Nov 24 09:21:07 compute-1 systemd[1]: Started Network Manager Script Dispatcher Service.
Nov 24 09:21:07 compute-1 NetworkManager[48870]: <info>  [1763976067.2556] device (eth1): state change: deactivating -> disconnected (reason 'new-activation', managed-type: 'full')
Nov 24 09:21:07 compute-1 kernel: br-ex: entered promiscuous mode
Nov 24 09:21:07 compute-1 NetworkManager[48870]: <info>  [1763976067.2722] device (eth1): Activation: starting connection 'ci-private-network' (eed6ff3f-ed68-533f-b181-f50564eca501)
Nov 24 09:21:07 compute-1 NetworkManager[48870]: <info>  [1763976067.2726] device (br-ex)[Open vSwitch Bridge]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 24 09:21:07 compute-1 NetworkManager[48870]: <info>  [1763976067.2733] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 24 09:21:07 compute-1 NetworkManager[48870]: <info>  [1763976067.2735] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 24 09:21:07 compute-1 NetworkManager[48870]: <info>  [1763976067.2740] device (br-ex)[Open vSwitch Bridge]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 24 09:21:07 compute-1 NetworkManager[48870]: <info>  [1763976067.2743] device (br-ex)[Open vSwitch Bridge]: Activation: successful, device activated.
Nov 24 09:21:07 compute-1 NetworkManager[48870]: <info>  [1763976067.2750] device (br-ex)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 24 09:21:07 compute-1 NetworkManager[48870]: <info>  [1763976067.2754] device (eth1)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 24 09:21:07 compute-1 NetworkManager[48870]: <info>  [1763976067.2755] device (vlan20)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 24 09:21:07 compute-1 NetworkManager[48870]: <info>  [1763976067.2756] device (vlan21)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 24 09:21:07 compute-1 NetworkManager[48870]: <info>  [1763976067.2757] device (vlan22)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 24 09:21:07 compute-1 NetworkManager[48870]: <info>  [1763976067.2758] device (vlan23)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 24 09:21:07 compute-1 NetworkManager[48870]: <info>  [1763976067.2768] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 24 09:21:07 compute-1 NetworkManager[48870]: <info>  [1763976067.2773] device (br-ex)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 24 09:21:07 compute-1 NetworkManager[48870]: <info>  [1763976067.2779] device (br-ex)[Open vSwitch Port]: Activation: successful, device activated.
Nov 24 09:21:07 compute-1 NetworkManager[48870]: <info>  [1763976067.2781] device (eth1)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 24 09:21:07 compute-1 NetworkManager[48870]: <info>  [1763976067.2784] device (eth1)[Open vSwitch Port]: Activation: successful, device activated.
Nov 24 09:21:07 compute-1 NetworkManager[48870]: <info>  [1763976067.2787] device (vlan20)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 24 09:21:07 compute-1 NetworkManager[48870]: <info>  [1763976067.2790] device (vlan20)[Open vSwitch Port]: Activation: successful, device activated.
Nov 24 09:21:07 compute-1 NetworkManager[48870]: <info>  [1763976067.2793] device (vlan21)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 24 09:21:07 compute-1 NetworkManager[48870]: <info>  [1763976067.2797] device (vlan21)[Open vSwitch Port]: Activation: successful, device activated.
Nov 24 09:21:07 compute-1 NetworkManager[48870]: <info>  [1763976067.2799] device (vlan22)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 24 09:21:07 compute-1 NetworkManager[48870]: <info>  [1763976067.2802] device (vlan22)[Open vSwitch Port]: Activation: successful, device activated.
Nov 24 09:21:07 compute-1 NetworkManager[48870]: <info>  [1763976067.2805] device (vlan23)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 24 09:21:07 compute-1 NetworkManager[48870]: <info>  [1763976067.2808] device (vlan23)[Open vSwitch Port]: Activation: successful, device activated.
Nov 24 09:21:07 compute-1 NetworkManager[48870]: <info>  [1763976067.2811] device (eth1): Activation: connection 'ci-private-network' attached as port, continuing activation
Nov 24 09:21:07 compute-1 NetworkManager[48870]: <info>  [1763976067.2822] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 24 09:21:07 compute-1 NetworkManager[48870]: <info>  [1763976067.2830] device (br-ex)[Open vSwitch Interface]: carrier: link connected
Nov 24 09:21:07 compute-1 kernel: vlan22: entered promiscuous mode
Nov 24 09:21:07 compute-1 NetworkManager[48870]: <info>  [1763976067.2844] device (br-ex)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 24 09:21:07 compute-1 NetworkManager[48870]: <info>  [1763976067.2852] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 24 09:21:07 compute-1 NetworkManager[48870]: <info>  [1763976067.2856] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 24 09:21:07 compute-1 NetworkManager[48870]: <info>  [1763976067.2861] device (eth1): Activation: successful, device activated.
Nov 24 09:21:07 compute-1 NetworkManager[48870]: <info>  [1763976067.2867] device (br-ex)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 24 09:21:07 compute-1 NetworkManager[48870]: <info>  [1763976067.2869] device (br-ex)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 24 09:21:07 compute-1 NetworkManager[48870]: <info>  [1763976067.2873] device (br-ex)[Open vSwitch Interface]: Activation: successful, device activated.
Nov 24 09:21:07 compute-1 kernel: vlan20: entered promiscuous mode
Nov 24 09:21:07 compute-1 systemd-udevd[51649]: Network interface NamePolicy= disabled on kernel command line.
Nov 24 09:21:07 compute-1 kernel: vlan23: entered promiscuous mode
Nov 24 09:21:07 compute-1 systemd-udevd[51647]: Network interface NamePolicy= disabled on kernel command line.
Nov 24 09:21:07 compute-1 NetworkManager[48870]: <info>  [1763976067.2989] device (vlan22)[Open vSwitch Interface]: carrier: link connected
Nov 24 09:21:07 compute-1 NetworkManager[48870]: <info>  [1763976067.3005] device (vlan22)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 24 09:21:07 compute-1 NetworkManager[48870]: <info>  [1763976067.3018] device (vlan20)[Open vSwitch Interface]: carrier: link connected
Nov 24 09:21:07 compute-1 kernel: vlan21: entered promiscuous mode
Nov 24 09:21:07 compute-1 NetworkManager[48870]: <info>  [1763976067.3031] device (vlan20)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 24 09:21:07 compute-1 NetworkManager[48870]: <info>  [1763976067.3048] device (vlan22)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 24 09:21:07 compute-1 NetworkManager[48870]: <info>  [1763976067.3050] device (vlan22)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 24 09:21:07 compute-1 NetworkManager[48870]: <info>  [1763976067.3057] device (vlan22)[Open vSwitch Interface]: Activation: successful, device activated.
Nov 24 09:21:07 compute-1 kernel: virtio_net virtio5 eth1: entered promiscuous mode
Nov 24 09:21:07 compute-1 NetworkManager[48870]: <info>  [1763976067.3070] device (vlan23)[Open vSwitch Interface]: carrier: link connected
Nov 24 09:21:07 compute-1 NetworkManager[48870]: <info>  [1763976067.3083] device (vlan23)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 24 09:21:07 compute-1 NetworkManager[48870]: <info>  [1763976067.3116] device (vlan20)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 24 09:21:07 compute-1 NetworkManager[48870]: <info>  [1763976067.3117] device (vlan23)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 24 09:21:07 compute-1 NetworkManager[48870]: <info>  [1763976067.3121] device (vlan20)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 24 09:21:07 compute-1 NetworkManager[48870]: <info>  [1763976067.3127] device (vlan20)[Open vSwitch Interface]: Activation: successful, device activated.
Nov 24 09:21:07 compute-1 NetworkManager[48870]: <info>  [1763976067.3131] device (vlan23)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 24 09:21:07 compute-1 NetworkManager[48870]: <info>  [1763976067.3138] device (vlan23)[Open vSwitch Interface]: Activation: successful, device activated.
Nov 24 09:21:07 compute-1 NetworkManager[48870]: <info>  [1763976067.3148] device (vlan21)[Open vSwitch Interface]: carrier: link connected
Nov 24 09:21:07 compute-1 NetworkManager[48870]: <info>  [1763976067.3157] device (vlan21)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 24 09:21:07 compute-1 NetworkManager[48870]: <info>  [1763976067.3193] device (vlan21)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 24 09:21:07 compute-1 NetworkManager[48870]: <info>  [1763976067.3195] device (vlan21)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 24 09:21:07 compute-1 NetworkManager[48870]: <info>  [1763976067.3203] device (vlan21)[Open vSwitch Interface]: Activation: successful, device activated.
Nov 24 09:21:08 compute-1 NetworkManager[48870]: <info>  [1763976068.4497] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51644 uid=0 result="success"
Nov 24 09:21:08 compute-1 NetworkManager[48870]: <info>  [1763976068.6027] checkpoint[0x556aaa670950]: destroy /org/freedesktop/NetworkManager/Checkpoint/1
Nov 24 09:21:08 compute-1 NetworkManager[48870]: <info>  [1763976068.6030] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51644 uid=0 result="success"
Nov 24 09:21:08 compute-1 sudo[52002]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oqpaembkglhnvwdgmtmoztvziurgqfro ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976068.371851-846-260402498690101/AnsiballZ_async_status.py'
Nov 24 09:21:08 compute-1 sudo[52002]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:21:08 compute-1 NetworkManager[48870]: <info>  [1763976068.8968] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51644 uid=0 result="success"
Nov 24 09:21:08 compute-1 NetworkManager[48870]: <info>  [1763976068.8976] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51644 uid=0 result="success"
Nov 24 09:21:09 compute-1 python3.9[52004]: ansible-ansible.legacy.async_status Invoked with jid=j48334358143.51638 mode=status _async_dir=/root/.ansible_async
Nov 24 09:21:09 compute-1 sudo[52002]: pam_unix(sudo:session): session closed for user root
Nov 24 09:21:09 compute-1 NetworkManager[48870]: <info>  [1763976069.0658] audit: op="networking-control" arg="global-dns-configuration" pid=51644 uid=0 result="success"
Nov 24 09:21:09 compute-1 NetworkManager[48870]: <info>  [1763976069.0684] config: signal: SET_VALUES,values,values-intern,global-dns-config (/etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf)
Nov 24 09:21:09 compute-1 NetworkManager[48870]: <info>  [1763976069.0710] audit: op="networking-control" arg="global-dns-configuration" pid=51644 uid=0 result="success"
Nov 24 09:21:09 compute-1 NetworkManager[48870]: <info>  [1763976069.0736] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51644 uid=0 result="success"
Nov 24 09:21:09 compute-1 NetworkManager[48870]: <info>  [1763976069.2418] checkpoint[0x556aaa670a20]: destroy /org/freedesktop/NetworkManager/Checkpoint/2
Nov 24 09:21:09 compute-1 NetworkManager[48870]: <info>  [1763976069.2423] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51644 uid=0 result="success"
Nov 24 09:21:09 compute-1 ansible-async_wrapper.py[51642]: Module complete (51642)
Nov 24 09:21:10 compute-1 ansible-async_wrapper.py[51641]: Done in kid B.
Nov 24 09:21:12 compute-1 sudo[52106]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-chgorpaxvhnvlzqrukgdruylrzkxxopu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976068.371851-846-260402498690101/AnsiballZ_async_status.py'
Nov 24 09:21:12 compute-1 sudo[52106]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:21:12 compute-1 python3.9[52108]: ansible-ansible.legacy.async_status Invoked with jid=j48334358143.51638 mode=status _async_dir=/root/.ansible_async
Nov 24 09:21:12 compute-1 sudo[52106]: pam_unix(sudo:session): session closed for user root
Nov 24 09:21:12 compute-1 sudo[52206]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ztcvpdzdbcbsszsxxvzmnwhykmyfxbaq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976068.371851-846-260402498690101/AnsiballZ_async_status.py'
Nov 24 09:21:12 compute-1 sudo[52206]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:21:12 compute-1 python3.9[52208]: ansible-ansible.legacy.async_status Invoked with jid=j48334358143.51638 mode=cleanup _async_dir=/root/.ansible_async
Nov 24 09:21:12 compute-1 sudo[52206]: pam_unix(sudo:session): session closed for user root
Nov 24 09:21:13 compute-1 sudo[52358]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jjwtixzyalonodclvkuhefdlzoysmkdj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976073.2285266-927-218480229849270/AnsiballZ_stat.py'
Nov 24 09:21:13 compute-1 sudo[52358]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:21:13 compute-1 python3.9[52360]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 09:21:13 compute-1 sudo[52358]: pam_unix(sudo:session): session closed for user root
Nov 24 09:21:13 compute-1 sudo[52481]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nhqhtxcczdcvcwuknajgjkbkgfjvwohz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976073.2285266-927-218480229849270/AnsiballZ_copy.py'
Nov 24 09:21:14 compute-1 sudo[52481]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:21:14 compute-1 python3.9[52483]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/os-net-config.returncode mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1763976073.2285266-927-218480229849270/.source.returncode _original_basename=.1654v2hd follow=False checksum=b6589fc6ab0dc82cf12099d1c2d40ab994e8410c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:21:14 compute-1 sudo[52481]: pam_unix(sudo:session): session closed for user root
Nov 24 09:21:14 compute-1 sudo[52633]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-clvfhjxsbbjmcdiiurxhzqmbncamzpfa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976074.5970228-975-185306397415548/AnsiballZ_stat.py'
Nov 24 09:21:14 compute-1 sudo[52633]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:21:15 compute-1 python3.9[52635]: ansible-ansible.legacy.stat Invoked with path=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 09:21:15 compute-1 sudo[52633]: pam_unix(sudo:session): session closed for user root
Nov 24 09:21:15 compute-1 sudo[52756]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rxsyrbwnebdqvoamdhgqzumjcrfmmqne ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976074.5970228-975-185306397415548/AnsiballZ_copy.py'
Nov 24 09:21:15 compute-1 sudo[52756]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:21:15 compute-1 python3.9[52758]: ansible-ansible.legacy.copy Invoked with dest=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1763976074.5970228-975-185306397415548/.source.cfg _original_basename=.310nocc9 follow=False checksum=f3c5952a9cd4c6c31b314b25eb897168971cc86e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:21:15 compute-1 sudo[52756]: pam_unix(sudo:session): session closed for user root
Nov 24 09:21:16 compute-1 sudo[52909]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lynqpxtrclkmehkhcnbvnzmisqwwxlxb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976075.8158755-1020-242549639585674/AnsiballZ_systemd.py'
Nov 24 09:21:16 compute-1 sudo[52909]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:21:16 compute-1 python3.9[52911]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 24 09:21:16 compute-1 systemd[1]: Reloading Network Manager...
Nov 24 09:21:16 compute-1 NetworkManager[48870]: <info>  [1763976076.4419] audit: op="reload" arg="0" pid=52915 uid=0 result="success"
Nov 24 09:21:16 compute-1 NetworkManager[48870]: <info>  [1763976076.4424] config: signal: SIGHUP,config-files,values,values-user,no-auto-default (/etc/NetworkManager/NetworkManager.conf, /usr/lib/NetworkManager/conf.d/00-server.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf, /var/lib/NetworkManager/NetworkManager-intern.conf)
Nov 24 09:21:16 compute-1 systemd[1]: Reloaded Network Manager.
Nov 24 09:21:16 compute-1 sudo[52909]: pam_unix(sudo:session): session closed for user root
Nov 24 09:21:16 compute-1 sshd-session[44871]: Connection closed by 192.168.122.30 port 42454
Nov 24 09:21:16 compute-1 sshd-session[44868]: pam_unix(sshd:session): session closed for user zuul
Nov 24 09:21:16 compute-1 systemd-logind[823]: Session 11 logged out. Waiting for processes to exit.
Nov 24 09:21:16 compute-1 systemd[1]: session-11.scope: Deactivated successfully.
Nov 24 09:21:16 compute-1 systemd[1]: session-11.scope: Consumed 46.611s CPU time.
Nov 24 09:21:16 compute-1 systemd-logind[823]: Removed session 11.
Nov 24 09:21:17 compute-1 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Nov 24 09:21:22 compute-1 sshd-session[52948]: Accepted publickey for zuul from 192.168.122.30 port 33908 ssh2: ECDSA SHA256:MeSde0OmmlmFVnLWx/OKNxgeUUFhxUB3MA0eUyH5QEE
Nov 24 09:21:22 compute-1 systemd-logind[823]: New session 12 of user zuul.
Nov 24 09:21:22 compute-1 systemd[1]: Started Session 12 of User zuul.
Nov 24 09:21:22 compute-1 sshd-session[52948]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 24 09:21:23 compute-1 python3.9[53101]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 24 09:21:24 compute-1 python3.9[53255]: ansible-ansible.builtin.setup Invoked with filter=['ansible_default_ipv4'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 24 09:21:25 compute-1 python3.9[53449]: ansible-ansible.legacy.command Invoked with _raw_params=hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 09:21:26 compute-1 sshd-session[52951]: Connection closed by 192.168.122.30 port 33908
Nov 24 09:21:26 compute-1 sshd-session[52948]: pam_unix(sshd:session): session closed for user zuul
Nov 24 09:21:26 compute-1 systemd[1]: session-12.scope: Deactivated successfully.
Nov 24 09:21:26 compute-1 systemd[1]: session-12.scope: Consumed 2.237s CPU time.
Nov 24 09:21:26 compute-1 systemd-logind[823]: Session 12 logged out. Waiting for processes to exit.
Nov 24 09:21:26 compute-1 systemd-logind[823]: Removed session 12.
Nov 24 09:21:26 compute-1 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Nov 24 09:21:31 compute-1 sshd-session[53477]: Accepted publickey for zuul from 192.168.122.30 port 57270 ssh2: ECDSA SHA256:MeSde0OmmlmFVnLWx/OKNxgeUUFhxUB3MA0eUyH5QEE
Nov 24 09:21:31 compute-1 systemd-logind[823]: New session 13 of user zuul.
Nov 24 09:21:31 compute-1 systemd[1]: Started Session 13 of User zuul.
Nov 24 09:21:31 compute-1 sshd-session[53477]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 24 09:21:32 compute-1 python3.9[53631]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 24 09:21:33 compute-1 python3.9[53785]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 24 09:21:34 compute-1 sudo[53939]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ewkucvmsojpsfkgtmrfdcnqfojxjogks ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976094.1805198-81-132523499089640/AnsiballZ_setup.py'
Nov 24 09:21:34 compute-1 sudo[53939]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:21:34 compute-1 python3.9[53941]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 24 09:21:34 compute-1 sudo[53939]: pam_unix(sudo:session): session closed for user root
Nov 24 09:21:35 compute-1 sudo[54024]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-srfyrovwrqdhazyzgqqcsozlaobijgwx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976094.1805198-81-132523499089640/AnsiballZ_dnf.py'
Nov 24 09:21:35 compute-1 sudo[54024]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:21:35 compute-1 python3.9[54026]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 24 09:21:36 compute-1 sudo[54024]: pam_unix(sudo:session): session closed for user root
Nov 24 09:21:37 compute-1 sudo[54177]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mkhyduobamymwyurhseyifmumsgnpcrp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976097.3447804-117-32618900924966/AnsiballZ_setup.py'
Nov 24 09:21:37 compute-1 sudo[54177]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:21:37 compute-1 python3.9[54179]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 24 09:21:38 compute-1 sudo[54177]: pam_unix(sudo:session): session closed for user root
Nov 24 09:21:38 compute-1 sudo[54373]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kkviveahrhoocsxkyhhabncpwvlirsbl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976098.5474792-150-263382930116561/AnsiballZ_file.py'
Nov 24 09:21:38 compute-1 sudo[54373]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:21:39 compute-1 python3.9[54375]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:21:39 compute-1 sudo[54373]: pam_unix(sudo:session): session closed for user root
Nov 24 09:21:39 compute-1 sudo[54525]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kngcqerkdgrsfqbxzgyfogtkfmlqddbz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976099.3902607-174-98321616731216/AnsiballZ_command.py'
Nov 24 09:21:39 compute-1 sudo[54525]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:21:39 compute-1 python3.9[54527]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 09:21:39 compute-1 podman[54528]: 2025-11-24 09:21:39.983598565 +0000 UTC m=+0.043513895 system refresh
Nov 24 09:21:40 compute-1 sudo[54525]: pam_unix(sudo:session): session closed for user root
Nov 24 09:21:40 compute-1 sudo[54688]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nxxupmgcxiosoubuqgspsigzvsrpyoux ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976100.317121-198-35324544005521/AnsiballZ_stat.py'
Nov 24 09:21:40 compute-1 sudo[54688]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:21:40 compute-1 python3.9[54690]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 09:21:40 compute-1 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 24 09:21:40 compute-1 sudo[54688]: pam_unix(sudo:session): session closed for user root
Nov 24 09:21:41 compute-1 sudo[54811]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xcosnugvubypgojshnbxlwkhnbygojoh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976100.317121-198-35324544005521/AnsiballZ_copy.py'
Nov 24 09:21:41 compute-1 sudo[54811]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:21:41 compute-1 python3.9[54813]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/networks/podman.json group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763976100.317121-198-35324544005521/.source.json follow=False _original_basename=podman_network_config.j2 checksum=35abbe77809912ec8de56cd1324b6ed1d7c68760 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:21:41 compute-1 sudo[54811]: pam_unix(sudo:session): session closed for user root
Nov 24 09:21:42 compute-1 sudo[54963]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ifjssnypqeywhjkyafpubfqkphzpsmra ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976102.617085-244-26992810290010/AnsiballZ_stat.py'
Nov 24 09:21:42 compute-1 sudo[54963]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:21:43 compute-1 python3.9[54965]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 09:21:43 compute-1 sudo[54963]: pam_unix(sudo:session): session closed for user root
Nov 24 09:21:43 compute-1 sudo[55086]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zpfdgllkavykkkfcsxstrxfwaogzrxlb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976102.617085-244-26992810290010/AnsiballZ_copy.py'
Nov 24 09:21:43 compute-1 sudo[55086]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:21:43 compute-1 python3.9[55088]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1763976102.617085-244-26992810290010/.source.conf follow=False _original_basename=registries.conf.j2 checksum=d119d0981ddb964361aab9d45fb39837ba29c925 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 24 09:21:43 compute-1 sudo[55086]: pam_unix(sudo:session): session closed for user root
Nov 24 09:21:44 compute-1 sudo[55238]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cazioymdnwkvwunduozzqmeyziuyttqj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976103.984695-291-13938415005892/AnsiballZ_ini_file.py'
Nov 24 09:21:44 compute-1 sudo[55238]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:21:44 compute-1 python3.9[55240]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 24 09:21:44 compute-1 sudo[55238]: pam_unix(sudo:session): session closed for user root
Nov 24 09:21:44 compute-1 sudo[55390]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-byehrqfcjrkrwrvkkdgbcjppfqevdulz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976104.7575161-291-257687680929266/AnsiballZ_ini_file.py'
Nov 24 09:21:44 compute-1 sudo[55390]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:21:45 compute-1 python3.9[55392]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 24 09:21:45 compute-1 sudo[55390]: pam_unix(sudo:session): session closed for user root
Nov 24 09:21:45 compute-1 sudo[55542]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hzshdnvkxqvsgopsxbqszvwistsqdryf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976105.3237925-291-81369278855084/AnsiballZ_ini_file.py'
Nov 24 09:21:45 compute-1 sudo[55542]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:21:45 compute-1 python3.9[55544]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 24 09:21:45 compute-1 sudo[55542]: pam_unix(sudo:session): session closed for user root
Nov 24 09:21:46 compute-1 sudo[55694]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dskpwhtwgzsushwbetkekrtcycpqnigh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976105.9349442-291-71803715847631/AnsiballZ_ini_file.py'
Nov 24 09:21:46 compute-1 sudo[55694]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:21:46 compute-1 python3.9[55696]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 24 09:21:46 compute-1 sudo[55694]: pam_unix(sudo:session): session closed for user root
Nov 24 09:21:47 compute-1 sudo[55846]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aqwaleejouwxfcxrcwqdijibuezbdcgg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976106.834855-384-610402273053/AnsiballZ_dnf.py'
Nov 24 09:21:47 compute-1 sudo[55846]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:21:47 compute-1 python3.9[55848]: ansible-ansible.legacy.dnf Invoked with name=['openssh-server'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 24 09:21:48 compute-1 sudo[55846]: pam_unix(sudo:session): session closed for user root
Nov 24 09:21:49 compute-1 sudo[55999]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uvnyxfqsuaffvnnmfffgzlhtvjdlezoj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976109.2015557-417-107660671612044/AnsiballZ_setup.py'
Nov 24 09:21:49 compute-1 sudo[55999]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:21:49 compute-1 python3.9[56001]: ansible-setup Invoked with gather_subset=['!all', '!min', 'distribution', 'distribution_major_version', 'distribution_version', 'os_family'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 24 09:21:49 compute-1 sudo[55999]: pam_unix(sudo:session): session closed for user root
Nov 24 09:21:50 compute-1 sudo[56153]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uacyynwgoqvcdapvliioqluqosktwjpk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976109.9627-441-52468836177956/AnsiballZ_stat.py'
Nov 24 09:21:50 compute-1 sudo[56153]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:21:50 compute-1 python3.9[56155]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 09:21:50 compute-1 sudo[56153]: pam_unix(sudo:session): session closed for user root
Nov 24 09:21:50 compute-1 sudo[56305]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eyqhglzwwnwpenrgymjiayjgtahwymlg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976110.7055564-468-206672905058111/AnsiballZ_stat.py'
Nov 24 09:21:50 compute-1 sudo[56305]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:21:51 compute-1 python3.9[56307]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 09:21:51 compute-1 sudo[56305]: pam_unix(sudo:session): session closed for user root
Nov 24 09:21:51 compute-1 sudo[56457]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vgwidaiqcabxohjgcykiyzjykwhojosj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976111.5906692-498-261554191382890/AnsiballZ_command.py'
Nov 24 09:21:51 compute-1 sudo[56457]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:21:52 compute-1 python3.9[56459]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-system-running _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 09:21:52 compute-1 sudo[56457]: pam_unix(sudo:session): session closed for user root
Nov 24 09:21:52 compute-1 sudo[56610]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vslsgzvgjlvzydinukhtjrihdefopuyl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976112.522725-528-130330299387552/AnsiballZ_service_facts.py'
Nov 24 09:21:52 compute-1 sudo[56610]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:21:53 compute-1 python3.9[56612]: ansible-service_facts Invoked
Nov 24 09:21:53 compute-1 network[56629]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 24 09:21:53 compute-1 network[56630]: 'network-scripts' will be removed from distribution in near future.
Nov 24 09:21:53 compute-1 network[56631]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 24 09:21:56 compute-1 sudo[56610]: pam_unix(sudo:session): session closed for user root
Nov 24 09:21:58 compute-1 sudo[56914]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oymgubqpuxpeqjraenoujuskoloedjdk ; /bin/bash /home/zuul/.ansible/tmp/ansible-tmp-1763976117.9725876-573-265083543821491/AnsiballZ_timesync_provider.sh /home/zuul/.ansible/tmp/ansible-tmp-1763976117.9725876-573-265083543821491/args'
Nov 24 09:21:58 compute-1 sudo[56914]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:21:58 compute-1 sudo[56914]: pam_unix(sudo:session): session closed for user root
Nov 24 09:21:59 compute-1 sudo[57081]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wpwpsfcpcpwkhwrbwutfpcfyckimqsdb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976118.9992716-606-199952331043755/AnsiballZ_dnf.py'
Nov 24 09:21:59 compute-1 sudo[57081]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:21:59 compute-1 python3.9[57083]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 24 09:22:00 compute-1 sudo[57081]: pam_unix(sudo:session): session closed for user root
Nov 24 09:22:01 compute-1 sudo[57234]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iuyydwmojtcizagwbhwogqsbxajhjrsy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976121.3598006-645-169651053393506/AnsiballZ_package_facts.py'
Nov 24 09:22:01 compute-1 sudo[57234]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:22:02 compute-1 python3.9[57236]: ansible-package_facts Invoked with manager=['auto'] strategy=first
Nov 24 09:22:02 compute-1 sudo[57234]: pam_unix(sudo:session): session closed for user root
Nov 24 09:22:03 compute-1 sudo[57386]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hsmyunwvxenoptvfeytcyuqsnypuxifv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976123.1217928-675-217674195329042/AnsiballZ_stat.py'
Nov 24 09:22:03 compute-1 sudo[57386]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:22:03 compute-1 python3.9[57388]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 09:22:03 compute-1 sudo[57386]: pam_unix(sudo:session): session closed for user root
Nov 24 09:22:03 compute-1 sudo[57511]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ngyfmdoilbsmmbcqlnszgkbsjscdtbni ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976123.1217928-675-217674195329042/AnsiballZ_copy.py'
Nov 24 09:22:03 compute-1 sudo[57511]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:22:04 compute-1 python3.9[57513]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/chrony.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1763976123.1217928-675-217674195329042/.source.conf follow=False _original_basename=chrony.conf.j2 checksum=cfb003e56d02d0d2c65555452eb1a05073fecdad force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:22:04 compute-1 sudo[57511]: pam_unix(sudo:session): session closed for user root
Nov 24 09:22:04 compute-1 sudo[57665]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nbgdtpdgojwzgudtfwvicdmxpupspfqu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976124.6239252-721-270838777346671/AnsiballZ_stat.py'
Nov 24 09:22:04 compute-1 sudo[57665]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:22:05 compute-1 python3.9[57667]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/chronyd follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 09:22:05 compute-1 sudo[57665]: pam_unix(sudo:session): session closed for user root
Nov 24 09:22:05 compute-1 sudo[57790]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ywsyjsolfyammbpepkxzbylcmogsdwav ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976124.6239252-721-270838777346671/AnsiballZ_copy.py'
Nov 24 09:22:05 compute-1 sudo[57790]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:22:05 compute-1 python3.9[57792]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/sysconfig/chronyd mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1763976124.6239252-721-270838777346671/.source follow=False _original_basename=chronyd.sysconfig.j2 checksum=dd196b1ff1f915b23eebc37ec77405b5dd3df76c force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:22:05 compute-1 sudo[57790]: pam_unix(sudo:session): session closed for user root
Nov 24 09:22:07 compute-1 sudo[57944]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-avxxvnezsjmaxudoweulefaszgcaapwm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976126.8298607-784-121935616853494/AnsiballZ_lineinfile.py'
Nov 24 09:22:07 compute-1 sudo[57944]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:22:07 compute-1 python3.9[57946]: ansible-lineinfile Invoked with backup=True create=True dest=/etc/sysconfig/network line=PEERNTP=no mode=0644 regexp=^PEERNTP= state=present path=/etc/sysconfig/network encoding=utf-8 backrefs=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:22:07 compute-1 sudo[57944]: pam_unix(sudo:session): session closed for user root
Nov 24 09:22:08 compute-1 sudo[58098]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gtezbjqqkyzguhfjildxamhrsbjsjleq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976128.4299178-828-177563498180754/AnsiballZ_setup.py'
Nov 24 09:22:08 compute-1 sudo[58098]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:22:08 compute-1 python3.9[58100]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 24 09:22:09 compute-1 sudo[58098]: pam_unix(sudo:session): session closed for user root
Nov 24 09:22:09 compute-1 sudo[58182]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-guucbrlfcbjawfgsqjmktdcnbjayklli ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976128.4299178-828-177563498180754/AnsiballZ_systemd.py'
Nov 24 09:22:09 compute-1 sudo[58182]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:22:09 compute-1 python3.9[58184]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 24 09:22:10 compute-1 sudo[58182]: pam_unix(sudo:session): session closed for user root
Nov 24 09:22:11 compute-1 sudo[58336]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jqrzpspuogfjkphpueswdmlhjyeimjwg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976130.934687-877-275617868890607/AnsiballZ_setup.py'
Nov 24 09:22:11 compute-1 sudo[58336]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:22:11 compute-1 python3.9[58338]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 24 09:22:11 compute-1 sudo[58336]: pam_unix(sudo:session): session closed for user root
Nov 24 09:22:11 compute-1 sudo[58420]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wmwuyngkahwnojflslvtcvwuiojgyyyn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976130.934687-877-275617868890607/AnsiballZ_systemd.py'
Nov 24 09:22:11 compute-1 sudo[58420]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:22:12 compute-1 python3.9[58422]: ansible-ansible.legacy.systemd Invoked with name=chronyd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 24 09:22:12 compute-1 chronyd[831]: chronyd exiting
Nov 24 09:22:12 compute-1 systemd[1]: Stopping NTP client/server...
Nov 24 09:22:12 compute-1 systemd[1]: chronyd.service: Deactivated successfully.
Nov 24 09:22:12 compute-1 systemd[1]: Stopped NTP client/server.
Nov 24 09:22:12 compute-1 systemd[1]: Starting NTP client/server...
Nov 24 09:22:12 compute-1 chronyd[58430]: chronyd version 4.8 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +NTS +SECHASH +IPV6 +DEBUG)
Nov 24 09:22:12 compute-1 chronyd[58430]: Frequency -23.792 +/- 0.073 ppm read from /var/lib/chrony/drift
Nov 24 09:22:12 compute-1 chronyd[58430]: Loaded seccomp filter (level 2)
Nov 24 09:22:12 compute-1 systemd[1]: Started NTP client/server.
Nov 24 09:22:12 compute-1 sudo[58420]: pam_unix(sudo:session): session closed for user root
Nov 24 09:22:12 compute-1 sshd-session[53481]: Connection closed by 192.168.122.30 port 57270
Nov 24 09:22:12 compute-1 sshd-session[53477]: pam_unix(sshd:session): session closed for user zuul
Nov 24 09:22:12 compute-1 systemd[1]: session-13.scope: Deactivated successfully.
Nov 24 09:22:12 compute-1 systemd[1]: session-13.scope: Consumed 24.028s CPU time.
Nov 24 09:22:12 compute-1 systemd-logind[823]: Session 13 logged out. Waiting for processes to exit.
Nov 24 09:22:12 compute-1 systemd-logind[823]: Removed session 13.
Nov 24 09:22:17 compute-1 sshd-session[58456]: Accepted publickey for zuul from 192.168.122.30 port 57942 ssh2: ECDSA SHA256:MeSde0OmmlmFVnLWx/OKNxgeUUFhxUB3MA0eUyH5QEE
Nov 24 09:22:17 compute-1 systemd-logind[823]: New session 14 of user zuul.
Nov 24 09:22:17 compute-1 systemd[1]: Started Session 14 of User zuul.
Nov 24 09:22:17 compute-1 sshd-session[58456]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 24 09:22:18 compute-1 sudo[58609]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yumdfpogsafuanzvrghmtjsdafvvklpt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976137.9072132-27-258095400858179/AnsiballZ_file.py'
Nov 24 09:22:18 compute-1 sudo[58609]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:22:18 compute-1 python3.9[58611]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:22:18 compute-1 sudo[58609]: pam_unix(sudo:session): session closed for user root
Nov 24 09:22:19 compute-1 sudo[58761]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zfluhsqlvdytqjtkdgzrdfcoggjdgwjs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976138.7907178-63-226891339846700/AnsiballZ_stat.py'
Nov 24 09:22:19 compute-1 sudo[58761]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:22:19 compute-1 python3.9[58763]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/ceph-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 09:22:19 compute-1 sudo[58761]: pam_unix(sudo:session): session closed for user root
Nov 24 09:22:19 compute-1 sudo[58884]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hchsrmsethgvuqznxoiwycsyvgdxiclo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976138.7907178-63-226891339846700/AnsiballZ_copy.py'
Nov 24 09:22:19 compute-1 sudo[58884]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:22:20 compute-1 python3.9[58886]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/ceph-networks.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1763976138.7907178-63-226891339846700/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=729ea8396013e3343245d6e934e0dcef55029ad2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:22:20 compute-1 sudo[58884]: pam_unix(sudo:session): session closed for user root
Nov 24 09:22:20 compute-1 sshd-session[58459]: Connection closed by 192.168.122.30 port 57942
Nov 24 09:22:20 compute-1 sshd-session[58456]: pam_unix(sshd:session): session closed for user zuul
Nov 24 09:22:20 compute-1 systemd[1]: session-14.scope: Deactivated successfully.
Nov 24 09:22:20 compute-1 systemd[1]: session-14.scope: Consumed 1.484s CPU time.
Nov 24 09:22:20 compute-1 systemd-logind[823]: Session 14 logged out. Waiting for processes to exit.
Nov 24 09:22:20 compute-1 systemd-logind[823]: Removed session 14.
Nov 24 09:22:25 compute-1 sshd-session[58911]: Accepted publickey for zuul from 192.168.122.30 port 57946 ssh2: ECDSA SHA256:MeSde0OmmlmFVnLWx/OKNxgeUUFhxUB3MA0eUyH5QEE
Nov 24 09:22:25 compute-1 systemd-logind[823]: New session 15 of user zuul.
Nov 24 09:22:25 compute-1 systemd[1]: Started Session 15 of User zuul.
Nov 24 09:22:25 compute-1 sshd-session[58911]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 24 09:22:27 compute-1 python3.9[59064]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 24 09:22:27 compute-1 sudo[59218]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aogbgeycavvesyusiwbzmjjcjbylmiwp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976147.5191689-60-127523442255094/AnsiballZ_file.py'
Nov 24 09:22:27 compute-1 sudo[59218]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:22:28 compute-1 python3.9[59220]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:22:28 compute-1 sudo[59218]: pam_unix(sudo:session): session closed for user root
Nov 24 09:22:28 compute-1 sudo[59393]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cumkqijrgjwttiyorzqybznccznkhimk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976148.4121735-84-172823111442343/AnsiballZ_stat.py'
Nov 24 09:22:28 compute-1 sudo[59393]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:22:29 compute-1 python3.9[59395]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 09:22:29 compute-1 sudo[59393]: pam_unix(sudo:session): session closed for user root
Nov 24 09:22:29 compute-1 sudo[59516]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-knckvuamejlhritprxanvyklzgympjtg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976148.4121735-84-172823111442343/AnsiballZ_copy.py'
Nov 24 09:22:29 compute-1 sudo[59516]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:22:29 compute-1 python3.9[59518]: ansible-ansible.legacy.copy Invoked with dest=/root/.config/containers/auth.json group=zuul mode=0660 owner=zuul src=/home/zuul/.ansible/tmp/ansible-tmp-1763976148.4121735-84-172823111442343/.source.json _original_basename=.lcmvl22e follow=False checksum=bf21a9e8fbc5a3846fb05b4fa0859e0917b2202f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:22:29 compute-1 sudo[59516]: pam_unix(sudo:session): session closed for user root
Nov 24 09:22:30 compute-1 sudo[59668]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pvqpukudrtkwmtaiwqptlzbtuwihbcnx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976150.2118373-153-80396389662123/AnsiballZ_stat.py'
Nov 24 09:22:30 compute-1 sudo[59668]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:22:30 compute-1 python3.9[59670]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 09:22:30 compute-1 sudo[59668]: pam_unix(sudo:session): session closed for user root
Nov 24 09:22:30 compute-1 sudo[59791]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nnpwyzgypxodedxwekwusqwmqbufhoqa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976150.2118373-153-80396389662123/AnsiballZ_copy.py'
Nov 24 09:22:30 compute-1 sudo[59791]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:22:31 compute-1 python3.9[59793]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysconfig/podman_drop_in mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1763976150.2118373-153-80396389662123/.source _original_basename=.4vx01pe0 follow=False checksum=125299ce8dea7711a76292961206447f0043248b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:22:31 compute-1 sudo[59791]: pam_unix(sudo:session): session closed for user root
Nov 24 09:22:31 compute-1 sudo[59943]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-midhaqfibommznzoxccughtruyoenppc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976151.4940946-201-186399013919116/AnsiballZ_file.py'
Nov 24 09:22:31 compute-1 sudo[59943]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:22:31 compute-1 python3.9[59945]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 24 09:22:31 compute-1 sudo[59943]: pam_unix(sudo:session): session closed for user root
Nov 24 09:22:32 compute-1 sudo[60095]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fdgmajazmsbhidbiypshqapztsvieeak ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976152.293694-225-175621927301560/AnsiballZ_stat.py'
Nov 24 09:22:32 compute-1 sudo[60095]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:22:32 compute-1 python3.9[60097]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 09:22:32 compute-1 sudo[60095]: pam_unix(sudo:session): session closed for user root
Nov 24 09:22:33 compute-1 sudo[60218]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vrjmeoklxwcsnzdqckchdwvweayyjtdp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976152.293694-225-175621927301560/AnsiballZ_copy.py'
Nov 24 09:22:33 compute-1 sudo[60218]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:22:33 compute-1 python3.9[60220]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-container-shutdown group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1763976152.293694-225-175621927301560/.source _original_basename=edpm-container-shutdown follow=False checksum=632c3792eb3dce4288b33ae7b265b71950d69f13 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 24 09:22:33 compute-1 sudo[60218]: pam_unix(sudo:session): session closed for user root
Nov 24 09:22:33 compute-1 sudo[60370]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kqiowimxffokjwxbhysxamqtqyfenlyx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976153.4742823-225-237315767649989/AnsiballZ_stat.py'
Nov 24 09:22:33 compute-1 sudo[60370]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:22:34 compute-1 python3.9[60372]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 09:22:34 compute-1 sudo[60370]: pam_unix(sudo:session): session closed for user root
Nov 24 09:22:34 compute-1 sudo[60493]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ycofdcwwulnkzyzrqwfmjyvzagnekher ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976153.4742823-225-237315767649989/AnsiballZ_copy.py'
Nov 24 09:22:34 compute-1 sudo[60493]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:22:34 compute-1 python3.9[60495]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-start-podman-container group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1763976153.4742823-225-237315767649989/.source _original_basename=edpm-start-podman-container follow=False checksum=b963c569d75a655c0ccae95d9bb4a2a9a4df27d1 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 24 09:22:34 compute-1 sudo[60493]: pam_unix(sudo:session): session closed for user root
Nov 24 09:22:35 compute-1 sudo[60645]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dagbqgjpttfklqlihhbsxwguswvsoldg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976154.8591008-312-69835723398667/AnsiballZ_file.py'
Nov 24 09:22:35 compute-1 sudo[60645]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:22:35 compute-1 python3.9[60647]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:22:35 compute-1 sudo[60645]: pam_unix(sudo:session): session closed for user root
Nov 24 09:22:35 compute-1 sudo[60797]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ojxjscxkbircqjvrferihqpminuevjwh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976155.6749961-336-136570983211301/AnsiballZ_stat.py'
Nov 24 09:22:35 compute-1 sudo[60797]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:22:36 compute-1 python3.9[60799]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 09:22:36 compute-1 sudo[60797]: pam_unix(sudo:session): session closed for user root
Nov 24 09:22:36 compute-1 sudo[60920]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-myjayauifagqwnwwrwzfnhkkdehcprov ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976155.6749961-336-136570983211301/AnsiballZ_copy.py'
Nov 24 09:22:36 compute-1 sudo[60920]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:22:36 compute-1 python3.9[60922]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm-container-shutdown.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763976155.6749961-336-136570983211301/.source.service _original_basename=edpm-container-shutdown-service follow=False checksum=6336835cb0f888670cc99de31e19c8c071444d33 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:22:36 compute-1 sudo[60920]: pam_unix(sudo:session): session closed for user root
Nov 24 09:22:37 compute-1 sudo[61072]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qtsdictpaymokuerotocadivwobhzmvs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976157.065727-381-232516502848711/AnsiballZ_stat.py'
Nov 24 09:22:37 compute-1 sudo[61072]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:22:37 compute-1 python3.9[61074]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 09:22:37 compute-1 sudo[61072]: pam_unix(sudo:session): session closed for user root
Nov 24 09:22:37 compute-1 sudo[61195]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zcjlivqqporjtlitiwqjvohsokqekvlq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976157.065727-381-232516502848711/AnsiballZ_copy.py'
Nov 24 09:22:37 compute-1 sudo[61195]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:22:38 compute-1 python3.9[61197]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763976157.065727-381-232516502848711/.source.preset _original_basename=91-edpm-container-shutdown-preset follow=False checksum=b275e4375287528cb63464dd32f622c4f142a915 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:22:38 compute-1 sudo[61195]: pam_unix(sudo:session): session closed for user root
Nov 24 09:22:38 compute-1 sudo[61347]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qerlqfaaywzsjaulmgezkdjwwywffwdt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976158.3967679-426-204140463281026/AnsiballZ_systemd.py'
Nov 24 09:22:38 compute-1 sudo[61347]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:22:39 compute-1 python3.9[61349]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 24 09:22:39 compute-1 systemd[1]: Reloading.
Nov 24 09:22:39 compute-1 systemd-sysv-generator[61380]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 09:22:39 compute-1 systemd-rc-local-generator[61377]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 09:22:39 compute-1 systemd[1]: Reloading.
Nov 24 09:22:39 compute-1 systemd-rc-local-generator[61414]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 09:22:39 compute-1 systemd-sysv-generator[61417]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 09:22:39 compute-1 systemd[1]: Starting EDPM Container Shutdown...
Nov 24 09:22:39 compute-1 systemd[1]: Finished EDPM Container Shutdown.
Nov 24 09:22:39 compute-1 sudo[61347]: pam_unix(sudo:session): session closed for user root
Nov 24 09:22:40 compute-1 sudo[61575]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vxtqjgoegotsrrsarwmyxucrjkuntyhe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976160.0381753-450-26281208721621/AnsiballZ_stat.py'
Nov 24 09:22:40 compute-1 sudo[61575]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:22:40 compute-1 python3.9[61577]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 09:22:40 compute-1 sudo[61575]: pam_unix(sudo:session): session closed for user root
Nov 24 09:22:40 compute-1 sudo[61698]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rkovwcfzawdzswnawtumzqajrltpqsze ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976160.0381753-450-26281208721621/AnsiballZ_copy.py'
Nov 24 09:22:40 compute-1 sudo[61698]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:22:41 compute-1 python3.9[61700]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/netns-placeholder.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763976160.0381753-450-26281208721621/.source.service _original_basename=netns-placeholder-service follow=False checksum=b61b1b5918c20c877b8b226fbf34ff89a082d972 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:22:41 compute-1 sudo[61698]: pam_unix(sudo:session): session closed for user root
Nov 24 09:22:41 compute-1 sudo[61850]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nalnbcybecwpnpzcjirwwzwcylytuayz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976161.586404-495-53002319546640/AnsiballZ_stat.py'
Nov 24 09:22:41 compute-1 sudo[61850]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:22:42 compute-1 python3.9[61852]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 09:22:42 compute-1 sudo[61850]: pam_unix(sudo:session): session closed for user root
Nov 24 09:22:42 compute-1 sudo[61973]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-orvhwwxohdgmfyouseoaekkvxuraevke ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976161.586404-495-53002319546640/AnsiballZ_copy.py'
Nov 24 09:22:42 compute-1 sudo[61973]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:22:42 compute-1 python3.9[61975]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-netns-placeholder.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763976161.586404-495-53002319546640/.source.preset _original_basename=91-netns-placeholder-preset follow=False checksum=28b7b9aa893525d134a1eeda8a0a48fb25b736b9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:22:42 compute-1 sudo[61973]: pam_unix(sudo:session): session closed for user root
Nov 24 09:22:43 compute-1 sudo[62125]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ljsfjavcxuwgundtgcvrbtrmodkdrbvt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976162.9686131-540-153547618026288/AnsiballZ_systemd.py'
Nov 24 09:22:43 compute-1 sudo[62125]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:22:43 compute-1 python3.9[62127]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 24 09:22:43 compute-1 systemd[1]: Reloading.
Nov 24 09:22:43 compute-1 systemd-sysv-generator[62159]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 09:22:43 compute-1 systemd-rc-local-generator[62155]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 09:22:43 compute-1 systemd[1]: Reloading.
Nov 24 09:22:43 compute-1 systemd-sysv-generator[62195]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 09:22:43 compute-1 systemd-rc-local-generator[62192]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 09:22:43 compute-1 systemd[1]: Starting Create netns directory...
Nov 24 09:22:43 compute-1 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Nov 24 09:22:43 compute-1 systemd[1]: netns-placeholder.service: Deactivated successfully.
Nov 24 09:22:43 compute-1 systemd[1]: Finished Create netns directory.
Nov 24 09:22:44 compute-1 sudo[62125]: pam_unix(sudo:session): session closed for user root
Nov 24 09:22:45 compute-1 python3.9[62354]: ansible-ansible.builtin.service_facts Invoked
Nov 24 09:22:45 compute-1 network[62371]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 24 09:22:45 compute-1 network[62372]: 'network-scripts' will be removed from distribution in near future.
Nov 24 09:22:45 compute-1 network[62373]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 24 09:22:50 compute-1 sudo[62633]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qhcexhsiwjkvxbbubxapeofidcxcrjrx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976169.7314339-588-263575619064677/AnsiballZ_systemd.py'
Nov 24 09:22:50 compute-1 sudo[62633]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:22:50 compute-1 python3.9[62635]: ansible-ansible.builtin.systemd Invoked with enabled=False name=iptables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 24 09:22:50 compute-1 systemd[1]: Reloading.
Nov 24 09:22:50 compute-1 systemd-sysv-generator[62670]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 09:22:50 compute-1 systemd-rc-local-generator[62666]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 09:22:50 compute-1 systemd[1]: Stopping IPv4 firewall with iptables...
Nov 24 09:22:50 compute-1 iptables.init[62676]: iptables: Setting chains to policy ACCEPT: raw mangle filter nat [  OK  ]
Nov 24 09:22:50 compute-1 iptables.init[62676]: iptables: Flushing firewall rules: [  OK  ]
Nov 24 09:22:50 compute-1 systemd[1]: iptables.service: Deactivated successfully.
Nov 24 09:22:50 compute-1 systemd[1]: Stopped IPv4 firewall with iptables.
Nov 24 09:22:50 compute-1 sudo[62633]: pam_unix(sudo:session): session closed for user root
Nov 24 09:22:51 compute-1 sudo[62870]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nbupwdefvcsjphkcjoqqnwgqfwaxngjr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976171.0697365-588-249224517659116/AnsiballZ_systemd.py'
Nov 24 09:22:51 compute-1 sudo[62870]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:22:51 compute-1 python3.9[62872]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ip6tables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 24 09:22:51 compute-1 sudo[62870]: pam_unix(sudo:session): session closed for user root
Nov 24 09:22:52 compute-1 sudo[63024]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-juetzlhclvsndyhgokbvmaxysqwwzrxd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976172.3426762-636-56029069126131/AnsiballZ_systemd.py'
Nov 24 09:22:52 compute-1 sudo[63024]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:22:52 compute-1 python3.9[63026]: ansible-ansible.builtin.systemd Invoked with enabled=True name=nftables state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 24 09:22:53 compute-1 systemd[1]: Reloading.
Nov 24 09:22:53 compute-1 systemd-rc-local-generator[63055]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 09:22:53 compute-1 systemd-sysv-generator[63058]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 09:22:53 compute-1 systemd[1]: Starting Netfilter Tables...
Nov 24 09:22:53 compute-1 systemd[1]: Finished Netfilter Tables.
Nov 24 09:22:53 compute-1 sudo[63024]: pam_unix(sudo:session): session closed for user root
Nov 24 09:22:54 compute-1 sudo[63215]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nrmbqleawvnqyuwimcdwspvaltmaqngi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976174.382563-660-80479457940906/AnsiballZ_command.py'
Nov 24 09:22:54 compute-1 sudo[63215]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:22:54 compute-1 python3.9[63217]: ansible-ansible.legacy.command Invoked with _raw_params=nft flush ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 09:22:55 compute-1 sudo[63215]: pam_unix(sudo:session): session closed for user root
Nov 24 09:22:56 compute-1 sudo[63368]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zyfajkaccutsafvureaxiwswkfnsnnew ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976175.7773805-702-137520813282911/AnsiballZ_stat.py'
Nov 24 09:22:56 compute-1 sudo[63368]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:22:56 compute-1 python3.9[63370]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 09:22:56 compute-1 sudo[63368]: pam_unix(sudo:session): session closed for user root
Nov 24 09:22:56 compute-1 sudo[63493]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ajgnrbdxnhlersuzifblzvoeckuqudca ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976175.7773805-702-137520813282911/AnsiballZ_copy.py'
Nov 24 09:22:56 compute-1 sudo[63493]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:22:56 compute-1 python3.9[63495]: ansible-ansible.legacy.copy Invoked with dest=/etc/ssh/sshd_config mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1763976175.7773805-702-137520813282911/.source validate=/usr/sbin/sshd -T -f %s follow=False _original_basename=sshd_config_block.j2 checksum=6c79f4cb960ad444688fde322eeacb8402e22d79 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:22:56 compute-1 sudo[63493]: pam_unix(sudo:session): session closed for user root
Nov 24 09:22:57 compute-1 sudo[63646]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yaffbtsrzkkkokbtbfzuhudnkhxylsig ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976177.187801-747-234842312977536/AnsiballZ_systemd.py'
Nov 24 09:22:57 compute-1 sudo[63646]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:22:57 compute-1 python3.9[63648]: ansible-ansible.builtin.systemd Invoked with name=sshd state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 24 09:22:57 compute-1 systemd[1]: Reloading OpenSSH server daemon...
Nov 24 09:22:57 compute-1 sshd[1006]: Received SIGHUP; restarting.
Nov 24 09:22:57 compute-1 systemd[1]: Reloaded OpenSSH server daemon.
Nov 24 09:22:57 compute-1 sshd[1006]: Server listening on 0.0.0.0 port 22.
Nov 24 09:22:57 compute-1 sshd[1006]: Server listening on :: port 22.
Nov 24 09:22:57 compute-1 sudo[63646]: pam_unix(sudo:session): session closed for user root
Nov 24 09:22:59 compute-1 sudo[63802]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-anjklebfqgcrnjvgpgrayqxvcwaazixk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976178.9428568-771-231089964810706/AnsiballZ_file.py'
Nov 24 09:22:59 compute-1 sudo[63802]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:22:59 compute-1 python3.9[63804]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:22:59 compute-1 sudo[63802]: pam_unix(sudo:session): session closed for user root
Nov 24 09:22:59 compute-1 sudo[63954]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lbqsobtudsqhkwhiobjayuvwzlnytjqf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976179.7181048-795-102538838295799/AnsiballZ_stat.py'
Nov 24 09:22:59 compute-1 sudo[63954]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:23:00 compute-1 python3.9[63956]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/sshd-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 09:23:00 compute-1 sudo[63954]: pam_unix(sudo:session): session closed for user root
Nov 24 09:23:00 compute-1 sudo[64077]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bgmvohnuuzupmxisvwgsvztexbpwayns ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976179.7181048-795-102538838295799/AnsiballZ_copy.py'
Nov 24 09:23:00 compute-1 sudo[64077]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:23:00 compute-1 python3.9[64079]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/sshd-networks.yaml group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763976179.7181048-795-102538838295799/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=0bfc8440fd8f39002ab90252479fb794f51b5ae8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:23:00 compute-1 sudo[64077]: pam_unix(sudo:session): session closed for user root
Nov 24 09:23:01 compute-1 sudo[64229]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fhlayzdtkjwkxifklbpgqcekiwvqacyv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976181.3241677-849-203084630400841/AnsiballZ_timezone.py'
Nov 24 09:23:01 compute-1 sudo[64229]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:23:01 compute-1 python3.9[64231]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Nov 24 09:23:01 compute-1 systemd[1]: Starting Time & Date Service...
Nov 24 09:23:02 compute-1 systemd[1]: Started Time & Date Service.
Nov 24 09:23:02 compute-1 sudo[64229]: pam_unix(sudo:session): session closed for user root
Nov 24 09:23:02 compute-1 sudo[64385]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-utvthrcgwkjpxldkiojicfcbslcwqitn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976182.3962097-876-246617537630591/AnsiballZ_file.py'
Nov 24 09:23:02 compute-1 sudo[64385]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:23:02 compute-1 python3.9[64387]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:23:02 compute-1 sudo[64385]: pam_unix(sudo:session): session closed for user root
Nov 24 09:23:03 compute-1 sudo[64537]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ysyeqtbnyekqzskqzlhnziussgjxnnve ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976183.057765-900-243609680268420/AnsiballZ_stat.py'
Nov 24 09:23:03 compute-1 sudo[64537]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:23:03 compute-1 python3.9[64539]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 09:23:03 compute-1 sudo[64537]: pam_unix(sudo:session): session closed for user root
Nov 24 09:23:03 compute-1 sudo[64660]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vzaxbkdddsouhncflcjosmckbisrzlvj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976183.057765-900-243609680268420/AnsiballZ_copy.py'
Nov 24 09:23:03 compute-1 sudo[64660]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:23:04 compute-1 python3.9[64662]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1763976183.057765-900-243609680268420/.source.yaml follow=False _original_basename=base-rules.yaml.j2 checksum=450456afcafded6d4bdecceec7a02e806eebd8b3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:23:04 compute-1 sudo[64660]: pam_unix(sudo:session): session closed for user root
Nov 24 09:23:04 compute-1 sudo[64812]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-faguydxrpvtihgttmtmluagefguudzow ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976184.3838298-945-146699640676737/AnsiballZ_stat.py'
Nov 24 09:23:04 compute-1 sudo[64812]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:23:04 compute-1 python3.9[64814]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 09:23:04 compute-1 sudo[64812]: pam_unix(sudo:session): session closed for user root
Nov 24 09:23:05 compute-1 sudo[64935]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xnqlcycdolfffkovyswmozwhcfmqquud ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976184.3838298-945-146699640676737/AnsiballZ_copy.py'
Nov 24 09:23:05 compute-1 sudo[64935]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:23:05 compute-1 python3.9[64937]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1763976184.3838298-945-146699640676737/.source.yaml _original_basename=.fscg11hm follow=False checksum=97d170e1550eee4afc0af065b78cda302a97674c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:23:05 compute-1 sudo[64935]: pam_unix(sudo:session): session closed for user root
Nov 24 09:23:05 compute-1 sudo[65087]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sofvioyaxpllukupgjfncxdxvvolsfth ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976185.672321-990-261392881013558/AnsiballZ_stat.py'
Nov 24 09:23:05 compute-1 sudo[65087]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:23:06 compute-1 python3.9[65089]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 09:23:06 compute-1 sudo[65087]: pam_unix(sudo:session): session closed for user root
Nov 24 09:23:06 compute-1 sudo[65210]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-punszdhgahczunnxskjlhzgjrinzgsla ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976185.672321-990-261392881013558/AnsiballZ_copy.py'
Nov 24 09:23:06 compute-1 sudo[65210]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:23:06 compute-1 python3.9[65212]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/iptables.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763976185.672321-990-261392881013558/.source.nft _original_basename=iptables.nft follow=False checksum=3e02df08f1f3ab4a513e94056dbd390e3d38fe30 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:23:06 compute-1 sudo[65210]: pam_unix(sudo:session): session closed for user root
Nov 24 09:23:07 compute-1 sudo[65362]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ukqvyghdhwpgbjzlynslahwkbgkoiwcy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976186.9752982-1036-184017385854350/AnsiballZ_command.py'
Nov 24 09:23:07 compute-1 sudo[65362]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:23:07 compute-1 python3.9[65364]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/iptables.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 09:23:07 compute-1 sudo[65362]: pam_unix(sudo:session): session closed for user root
Nov 24 09:23:07 compute-1 sudo[65515]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xbeemoufcffdzyxzjtfvsiygsnubvqug ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976187.7022445-1060-56702840974186/AnsiballZ_command.py'
Nov 24 09:23:07 compute-1 sudo[65515]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:23:08 compute-1 python3.9[65517]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 09:23:08 compute-1 sudo[65515]: pam_unix(sudo:session): session closed for user root
Nov 24 09:23:08 compute-1 sudo[65668]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ouzbfvtyyonepjplgsczlqascngvqays ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1763976188.409141-1083-266013616571130/AnsiballZ_edpm_nftables_from_files.py'
Nov 24 09:23:08 compute-1 sudo[65668]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:23:08 compute-1 python3[65670]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Nov 24 09:23:09 compute-1 sudo[65668]: pam_unix(sudo:session): session closed for user root
Nov 24 09:23:09 compute-1 sudo[65820]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zvjemtrozprazldrirzmmoeqmyunawqs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976189.2337089-1107-184560609197846/AnsiballZ_stat.py'
Nov 24 09:23:09 compute-1 sudo[65820]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:23:09 compute-1 python3.9[65822]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 09:23:09 compute-1 sudo[65820]: pam_unix(sudo:session): session closed for user root
Nov 24 09:23:09 compute-1 sudo[65943]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ijamtiutdiqhbyzohrsgxwyklzwvklcf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976189.2337089-1107-184560609197846/AnsiballZ_copy.py'
Nov 24 09:23:09 compute-1 sudo[65943]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:23:10 compute-1 python3.9[65945]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763976189.2337089-1107-184560609197846/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:23:10 compute-1 sudo[65943]: pam_unix(sudo:session): session closed for user root
Nov 24 09:23:10 compute-1 sudo[66095]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mhqstvpeznpjadruuprdzeoxaeqtvohg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976190.5981045-1152-204616539127448/AnsiballZ_stat.py'
Nov 24 09:23:10 compute-1 sudo[66095]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:23:11 compute-1 python3.9[66097]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 09:23:11 compute-1 sudo[66095]: pam_unix(sudo:session): session closed for user root
Nov 24 09:23:11 compute-1 sudo[66218]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-duycnscuyuceojyshisclnwuvqyqhvhw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976190.5981045-1152-204616539127448/AnsiballZ_copy.py'
Nov 24 09:23:11 compute-1 sudo[66218]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:23:11 compute-1 python3.9[66220]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763976190.5981045-1152-204616539127448/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:23:11 compute-1 sudo[66218]: pam_unix(sudo:session): session closed for user root
Nov 24 09:23:12 compute-1 sudo[66370]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-psgrrzuvhehpburenjdcejytzftlwqch ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976192.020344-1197-36554317248963/AnsiballZ_stat.py'
Nov 24 09:23:12 compute-1 sudo[66370]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:23:12 compute-1 python3.9[66372]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 09:23:12 compute-1 sudo[66370]: pam_unix(sudo:session): session closed for user root
Nov 24 09:23:12 compute-1 sudo[66493]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ahdngbygglkvtkeeeyrnhiwdjvbojalx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976192.020344-1197-36554317248963/AnsiballZ_copy.py'
Nov 24 09:23:12 compute-1 sudo[66493]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:23:13 compute-1 python3.9[66495]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763976192.020344-1197-36554317248963/.source.nft follow=False _original_basename=flush-chain.j2 checksum=d16337256a56373421842284fe09e4e6c7df417e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:23:13 compute-1 sudo[66493]: pam_unix(sudo:session): session closed for user root
Nov 24 09:23:13 compute-1 sudo[66645]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xokytrnsoeogtepedbazrwffpbjdvezg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976193.4498825-1242-273305705793837/AnsiballZ_stat.py'
Nov 24 09:23:13 compute-1 sudo[66645]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:23:13 compute-1 python3.9[66647]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 09:23:13 compute-1 sudo[66645]: pam_unix(sudo:session): session closed for user root
Nov 24 09:23:14 compute-1 sudo[66768]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pckhxyocjuqmubglvtveufysgdnevdhd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976193.4498825-1242-273305705793837/AnsiballZ_copy.py'
Nov 24 09:23:14 compute-1 sudo[66768]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:23:14 compute-1 python3.9[66770]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763976193.4498825-1242-273305705793837/.source.nft follow=False _original_basename=chains.j2 checksum=2079f3b60590a165d1d502e763170876fc8e2984 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:23:14 compute-1 sudo[66768]: pam_unix(sudo:session): session closed for user root
Nov 24 09:23:15 compute-1 sudo[66920]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ouypggetxafryzjrspxfwqhdkmfbzwpl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976194.8697999-1287-259037734698884/AnsiballZ_stat.py'
Nov 24 09:23:15 compute-1 sudo[66920]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:23:15 compute-1 python3.9[66922]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 09:23:15 compute-1 sudo[66920]: pam_unix(sudo:session): session closed for user root
Nov 24 09:23:15 compute-1 sudo[67043]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nidckanhavxtnmslrtysijsjoausmxpb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976194.8697999-1287-259037734698884/AnsiballZ_copy.py'
Nov 24 09:23:15 compute-1 sudo[67043]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:23:15 compute-1 python3.9[67045]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763976194.8697999-1287-259037734698884/.source.nft follow=False _original_basename=ruleset.j2 checksum=693377dc03e5b6b24713cb537b18b88774724e35 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:23:15 compute-1 sudo[67043]: pam_unix(sudo:session): session closed for user root
Nov 24 09:23:16 compute-1 sudo[67195]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sjkkouzqzwmycmtuzgwpffzqcalypltz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976196.3564942-1332-220321215093548/AnsiballZ_file.py'
Nov 24 09:23:16 compute-1 sudo[67195]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:23:16 compute-1 python3.9[67197]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:23:16 compute-1 sudo[67195]: pam_unix(sudo:session): session closed for user root
Nov 24 09:23:17 compute-1 sudo[67347]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iuuxulicjwolxpimdoserwtvremsohgd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976197.0918274-1356-157055752534266/AnsiballZ_command.py'
Nov 24 09:23:17 compute-1 sudo[67347]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:23:17 compute-1 python3.9[67349]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 09:23:17 compute-1 sudo[67347]: pam_unix(sudo:session): session closed for user root
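The validation step above concatenates the five EDPM nftables fragments in dependency order (chain definitions first, jump rules last) and feeds them to nft in check-only mode, so a syntax or ordering error fails the play before anything touches the live ruleset. The command, as logged in _raw_params:

    set -o pipefail
    cat /etc/nftables/edpm-chains.nft \
        /etc/nftables/edpm-flushes.nft \
        /etc/nftables/edpm-rules.nft \
        /etc/nftables/edpm-update-jumps.nft \
        /etc/nftables/edpm-jumps.nft | nft -c -f -    # -c: parse and check only, nothing is committed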
Nov 24 09:23:18 compute-1 sudo[67506]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qpraxhuygjqnealxcvkavrjxbzkoicqz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976197.9367595-1380-238576996590428/AnsiballZ_blockinfile.py'
Nov 24 09:23:18 compute-1 sudo[67506]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:23:18 compute-1 python3.9[67508]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                            include "/etc/nftables/edpm-chains.nft"
                                            include "/etc/nftables/edpm-rules.nft"
                                            include "/etc/nftables/edpm-jumps.nft"
                                             path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:23:18 compute-1 sudo[67506]: pam_unix(sudo:session): session closed for user root
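The blockinfile task then wires those fragments into the persistent ruleset by inserting an Ansible-managed include block into /etc/sysconfig/nftables.conf, validating each candidate file with `nft -c -f %s` before it replaces the original. Reconstructed from the logged block and marker parameters, the managed section reads:

    # BEGIN ANSIBLE MANAGED BLOCK
    include "/etc/nftables/iptables.nft"
    include "/etc/nftables/edpm-chains.nft"
    include "/etc/nftables/edpm-rules.nft"
    include "/etc/nftables/edpm-jumps.nft"
    # END ANSIBLE MANAGED BLOCK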
Nov 24 09:23:19 compute-1 sudo[67659]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gvlbtwfekoqeydgnrbeikffmjrrlezim ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976198.9321432-1407-96413061192960/AnsiballZ_file.py'
Nov 24 09:23:19 compute-1 sudo[67659]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:23:19 compute-1 python3.9[67661]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages1G state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:23:19 compute-1 sudo[67659]: pam_unix(sudo:session): session closed for user root
Nov 24 09:23:19 compute-1 sudo[67811]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-toqlgcwtrdleqqrtzoonumicviuvvnoi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976199.5748677-1407-19973674892809/AnsiballZ_file.py'
Nov 24 09:23:19 compute-1 sudo[67811]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:23:20 compute-1 python3.9[67813]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages2M state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:23:20 compute-1 sudo[67811]: pam_unix(sudo:session): session closed for user root
Nov 24 09:23:20 compute-1 sudo[67963]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pziusfybxlybfftwavxvodkdhnskmskp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976200.3944635-1452-53936771090585/AnsiballZ_mount.py'
Nov 24 09:23:20 compute-1 sudo[67963]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:23:21 compute-1 python3.9[67965]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=1G path=/dev/hugepages1G src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Nov 24 09:23:21 compute-1 sudo[67963]: pam_unix(sudo:session): session closed for user root
Nov 24 09:23:21 compute-1 sudo[68116]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zmixvfzzaetleqykdgcerafjvajhsksb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976201.239082-1452-29461368750393/AnsiballZ_mount.py'
Nov 24 09:23:21 compute-1 sudo[68116]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:23:21 compute-1 python3.9[68118]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=2M path=/dev/hugepages2M src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Nov 24 09:23:21 compute-1 sudo[68116]: pam_unix(sudo:session): session closed for user root
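Both hugepage mounts use ansible.posix.mount with state=mounted, which mounts the filesystem immediately and, with boot=True, persists it. A manual equivalent, with the fstab lines shown as an assumption about how the module records the logged options:

    mount -t hugetlbfs -o pagesize=1G none /dev/hugepages1G
    mount -t hugetlbfs -o pagesize=2M none /dev/hugepages2M
    # persisted in /etc/fstab roughly as:
    #   none /dev/hugepages1G hugetlbfs pagesize=1G 0 0
    #   none /dev/hugepages2M hugetlbfs pagesize=2M 0 0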
Nov 24 09:23:22 compute-1 sshd-session[58914]: Connection closed by 192.168.122.30 port 57946
Nov 24 09:23:22 compute-1 sshd-session[58911]: pam_unix(sshd:session): session closed for user zuul
Nov 24 09:23:22 compute-1 systemd[1]: session-15.scope: Deactivated successfully.
Nov 24 09:23:22 compute-1 systemd[1]: session-15.scope: Consumed 32.205s CPU time.
Nov 24 09:23:22 compute-1 systemd-logind[823]: Session 15 logged out. Waiting for processes to exit.
Nov 24 09:23:22 compute-1 systemd-logind[823]: Removed session 15.
Nov 24 09:23:28 compute-1 sshd-session[68144]: Accepted publickey for zuul from 192.168.122.30 port 40344 ssh2: ECDSA SHA256:MeSde0OmmlmFVnLWx/OKNxgeUUFhxUB3MA0eUyH5QEE
Nov 24 09:23:28 compute-1 systemd-logind[823]: New session 16 of user zuul.
Nov 24 09:23:28 compute-1 systemd[1]: Started Session 16 of User zuul.
Nov 24 09:23:28 compute-1 sshd-session[68144]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 24 09:23:28 compute-1 sudo[68297]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pogjzkjioqzbcscndysoezcxdxgpkvtx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976208.324706-19-262016553014635/AnsiballZ_tempfile.py'
Nov 24 09:23:28 compute-1 sudo[68297]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:23:28 compute-1 python3.9[68299]: ansible-ansible.builtin.tempfile Invoked with state=file prefix=ansible. suffix= path=None
Nov 24 09:23:29 compute-1 sudo[68297]: pam_unix(sudo:session): session closed for user root
Nov 24 09:23:29 compute-1 sudo[68449]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cjcvjxgavggnftpclkogvvaohpydotus ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976209.2851286-55-280634106168878/AnsiballZ_stat.py'
Nov 24 09:23:29 compute-1 sudo[68449]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:23:29 compute-1 python3.9[68451]: ansible-ansible.builtin.stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 09:23:29 compute-1 sudo[68449]: pam_unix(sudo:session): session closed for user root
Nov 24 09:23:30 compute-1 sudo[68601]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tdzppdtvsrnacntlktpriwgxwwujmost ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976210.2671769-85-111234643518995/AnsiballZ_setup.py'
Nov 24 09:23:30 compute-1 sudo[68601]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:23:31 compute-1 python3.9[68603]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'ssh_host_key_rsa_public', 'ssh_host_key_ed25519_public', 'ssh_host_key_ecdsa_public'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 24 09:23:31 compute-1 sudo[68601]: pam_unix(sudo:session): session closed for user root
Nov 24 09:23:31 compute-1 sudo[68753]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rcfyxalamohvqzyavrgiehxjvwdakudw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976211.5323436-110-57014417527245/AnsiballZ_blockinfile.py'
Nov 24 09:23:31 compute-1 sudo[68753]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:23:32 compute-1 systemd[1]: systemd-timedated.service: Deactivated successfully.
Nov 24 09:23:32 compute-1 python3.9[68755]: ansible-ansible.builtin.blockinfile Invoked with block=compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDnPh2FYKCqB5Rxe2d73LAea+vmvipLFksP43GM8QFNtdkL9UXsBFKIlbvhCArQ0+q5/EXcOy13rEWVabeuzYdek35bvnCWnqrlaoEFqEV7Y7SDrutMHxHvnLthse/1jj4AvtjvQXG0bKruDgtz2CBksRaKWTEHPZHLOYOwWLGogWVazacOPagjlMQ9UdpYvwfqgKnjMpl6sHCvQC7C0kTNvrYrrhUZqReUWyggx/XcC/YJvSYvMW1wNRhYmypPzEXu8QXt0ywHvCucILZcZqBE1/lKAUCLqDEkB/xpMnKiZ/EmDtyv8AP7H231WeEoaU4BziaD2jSd/H6lr2JJwpKBlrGkti8gQpJHtDytAtbVtrLD5fW+1GkobqN/2GXjNnvzuLB36OhT4nysfJ6BPP3sgaaZ2RJSzP5hI3jfFVn/NYjbaRIoo+tOB50PJeIPj6c5uMX+Qcb2V6EOUwogIRhtwN7A1XHh8dQPCUVYCUmNIq1K7NZ3Hxf+BqhVsSj6SK0=
                                            compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAINu5/fR7YXhb91kwrOd7U+mnimdcm+o61ru6zTYmFIZO
                                            compute-0.ctlplane.example.com,192.168.122.100,compute-0* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBJFgzeIWa1Ve+dIxs7Pjz8TnBGpgkm/KAIeb7PoVU+QfPqP68TrTBJjwgq/5DOilENFVsFmr+3WdERS0uMWfxXo=
                                            compute-1.ctlplane.example.com,192.168.122.101,compute-1* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCyBn9mTS8EhHsIKYO0tLgGtKOo5KK33vyjqFzXOs43ZcW8GNKmSQ7DXnq80OCGGkDE9aL5uVEQ82MaYpYE8rZVZGrTF1heqhLe2ModNgcaUA+dBOzScRYEm5JAsj6ajcAc7fiPseazHiC80XQlEo+bwF6XHf/i9t7MHMqQCKdM+qnsEd6JeYe+Zy6X7Web4mN4mbvDaHxjBAdxuR0g0bKoYRjFeeNQyQQ/2Fpsa/i/ZqFVU59TrQ1vm9wLk9wJQd7mBQsdxizekzHGMkE5Ub8VdN43iscVyKKhZWeUOyEK2HASt+n/fHjIsFD65a4GLiHFuJ8DJ4CrWFrwt1RIXLkNFOImjH5kiMO55d/Qogf5F33Mkto3ntPQP/tShtBEDIzc9JCE7vYLFjk/bMSUcK9/u41E8suBkZBHnzXC8+eB6XCoYYNxA+cowaSg5+YCSxL6yON9u34LV+i3jZosNYNivLHjOmOsyGEs/Az6NLkHYzxYCHY042etu9Py2/lONrk=
                                            compute-1.ctlplane.example.com,192.168.122.101,compute-1* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDX1cMQF3siye3qNUS07EBS+iX+poG1/aIqFR51WsltV
                                            compute-1.ctlplane.example.com,192.168.122.101,compute-1* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBFy78zaPxoZwc0f5pE0EdJcb6EwSlQGeMhelmYFBlrBeD2fH3vCrxrTbbmmM9DSQFtIo8sNV7/s7CV9dvbvMOzQ=
                                            compute-2.ctlplane.example.com,192.168.122.102,compute-2* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCYj9G0Ft/Psyl/13EAEebfB7qR7surocLwWTVKKcclTBPrKIFnHkxuGFUee1a6DQGup+ENEdhJN2MOXFv/jskxJUsoILDHuvx17jHKFvMSR7ycfe+1umEqgfKCHGxlLXobZjj7t2PzAveNkTk+zeX8pqLH1q86LI01fH0n3jdSksqEXvxbiDLMspPTM3alGxNI4pztPvN3i+0qfCPD5SL9dhFsP4C8IVTBWAM4g7Qd6LyKhx+MVoEVecLL6jsM8z+zArVsZKFcZOKFpl0MTeWdpNR0b4u0ILO59y38D/dVoM45NRDpIi7HyoS7TsD0XpP+3zP8hGo4M35QU+a9YRmdCaUChLmqjfUprjnQrusAuQfP406rQ3JlgWs3YAwF0IPhvHv57pPWm3xGwKPFpO0Jguw5cQdZZvYk4tS9JvlCz5+Yyfm3+9T+k1KLfcZ+zlvOYKz+BXNiPfk1bF9ML7/KEIyJjGf32o5nEp0H1sH24wrSIroXa+woila4KBTffe8=
                                            compute-2.ctlplane.example.com,192.168.122.102,compute-2* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFQe/vdPzZywzEntIohbfJ9grfNBp30Atbg8qy8BeQ3c
                                            compute-2.ctlplane.example.com,192.168.122.102,compute-2* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPhaUxRkg9RrudtznCKCcwWhf1hoSfCyCfTHlGI62beVEpMD4en9bzfcuYnvB/Qm3vgzgUVMpS53KCL9bmqBfT8=
                                             create=True mode=0644 path=/tmp/ansible.lkmmo6bm state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:23:32 compute-1 sudo[68753]: pam_unix(sudo:session): session closed for user root
Nov 24 09:23:32 compute-1 sudo[68907]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hrgmrwhohscwrrcohuxtkxxpzjciqeas ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976212.482459-134-132215436521171/AnsiballZ_command.py'
Nov 24 09:23:32 compute-1 sudo[68907]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:23:33 compute-1 python3.9[68909]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible.lkmmo6bm' > /etc/ssh/ssh_known_hosts _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 09:23:33 compute-1 sudo[68907]: pam_unix(sudo:session): session closed for user root
Nov 24 09:23:33 compute-1 sudo[69061]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hrumradhqppldjkjbgdrgouwevyrqrvr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976213.3564603-158-75359656224481/AnsiballZ_file.py'
Nov 24 09:23:33 compute-1 sudo[69061]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:23:33 compute-1 python3.9[69063]: ansible-ansible.builtin.file Invoked with path=/tmp/ansible.lkmmo6bm state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:23:34 compute-1 sudo[69061]: pam_unix(sudo:session): session closed for user root
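Session 16 distributes SSH host keys in three logged steps: a root-owned temp file is created, blockinfile fills it with the rsa/ed25519/ecdsa keys of all three compute nodes, and the result is copied over the system-wide known-hosts file before the temp file is deleted. Using a shell redirect rather than mv keeps the existing target file's inode and attributes:

    # blockinfile has already written the host keys into /tmp/ansible.lkmmo6bm
    cat '/tmp/ansible.lkmmo6bm' > /etc/ssh/ssh_known_hosts
    rm -f /tmp/ansible.lkmmo6bm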
Nov 24 09:23:34 compute-1 sshd-session[68147]: Connection closed by 192.168.122.30 port 40344
Nov 24 09:23:34 compute-1 sshd-session[68144]: pam_unix(sshd:session): session closed for user zuul
Nov 24 09:23:34 compute-1 systemd[1]: session-16.scope: Deactivated successfully.
Nov 24 09:23:34 compute-1 systemd[1]: session-16.scope: Consumed 3.297s CPU time.
Nov 24 09:23:34 compute-1 systemd-logind[823]: Session 16 logged out. Waiting for processes to exit.
Nov 24 09:23:34 compute-1 systemd-logind[823]: Removed session 16.
Nov 24 09:23:40 compute-1 sshd-session[69088]: Accepted publickey for zuul from 192.168.122.30 port 50094 ssh2: ECDSA SHA256:MeSde0OmmlmFVnLWx/OKNxgeUUFhxUB3MA0eUyH5QEE
Nov 24 09:23:40 compute-1 systemd-logind[823]: New session 17 of user zuul.
Nov 24 09:23:40 compute-1 systemd[1]: Started Session 17 of User zuul.
Nov 24 09:23:40 compute-1 sshd-session[69088]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 24 09:23:41 compute-1 python3.9[69241]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 24 09:23:42 compute-1 sudo[69395]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bnfmplwfkcltgdotlalybnmqbsyqvbnh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976222.1785924-57-11059537945133/AnsiballZ_systemd.py'
Nov 24 09:23:42 compute-1 sudo[69395]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:23:43 compute-1 python3.9[69397]: ansible-ansible.builtin.systemd Invoked with enabled=True name=sshd daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Nov 24 09:23:43 compute-1 sudo[69395]: pam_unix(sudo:session): session closed for user root
Nov 24 09:23:43 compute-1 sudo[69549]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wffxgkgwcoaydlhjaztckrkeygkmzknd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976223.416758-81-150955865667924/AnsiballZ_systemd.py'
Nov 24 09:23:43 compute-1 sudo[69549]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:23:43 compute-1 python3.9[69551]: ansible-ansible.builtin.systemd Invoked with name=sshd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 24 09:23:43 compute-1 sudo[69549]: pam_unix(sudo:session): session closed for user root
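Enablement and run state are handled as two separate systemd module calls, the equivalent of:

    systemctl enable sshd
    systemctl start sshd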
Nov 24 09:23:44 compute-1 sudo[69702]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-efwirgvwmjljhyuyvsxgxxvxyxsuemaq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976224.4002428-108-95887789265357/AnsiballZ_command.py'
Nov 24 09:23:44 compute-1 sudo[69702]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:23:44 compute-1 python3.9[69704]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 09:23:45 compute-1 sudo[69702]: pam_unix(sudo:session): session closed for user root
Nov 24 09:23:45 compute-1 sudo[69855]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rqbwttlpaeudycqqfelbhofakugjgjpe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976225.3059118-132-265594589195313/AnsiballZ_stat.py'
Nov 24 09:23:45 compute-1 sudo[69855]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:23:45 compute-1 python3.9[69857]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 09:23:45 compute-1 sudo[69855]: pam_unix(sudo:session): session closed for user root
Nov 24 09:23:46 compute-1 sudo[70009]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-utmjysubgdmhdlxhjilfyazlgqpghpfg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976226.228357-156-76729990256018/AnsiballZ_command.py'
Nov 24 09:23:46 compute-1 sudo[70009]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:23:46 compute-1 python3.9[70011]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 09:23:46 compute-1 sudo[70009]: pam_unix(sudo:session): session closed for user root
Nov 24 09:23:47 compute-1 sudo[70164]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cxxqdrjdtjmllvmwqtzicwtrqeblafbc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976227.079283-180-75598536548033/AnsiballZ_file.py'
Nov 24 09:23:47 compute-1 sudo[70164]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:23:47 compute-1 python3.9[70166]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:23:47 compute-1 sudo[70164]: pam_unix(sudo:session): session closed for user root
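This is the consumer side of the edpm-rules.nft.changed marker touched at 09:23:16: the chain definitions are loaded unconditionally, the stat at 09:23:45 presumably gates the reload on the marker's existence, and the marker is removed once the flush/rules/update-jumps stream has been committed. A sketch of the pattern, with the conditional inferred rather than logged:

    nft -f /etc/nftables/edpm-chains.nft              # ensure the EDPM chains exist
    if [ -e /etc/nftables/edpm-rules.nft.changed ]; then
        set -o pipefail
        cat /etc/nftables/edpm-flushes.nft \
            /etc/nftables/edpm-rules.nft \
            /etc/nftables/edpm-update-jumps.nft | nft -f -    # commit for real this time
        rm -f /etc/nftables/edpm-rules.nft.changed
    fi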
Nov 24 09:23:48 compute-1 sshd-session[69091]: Connection closed by 192.168.122.30 port 50094
Nov 24 09:23:48 compute-1 sshd-session[69088]: pam_unix(sshd:session): session closed for user zuul
Nov 24 09:23:48 compute-1 systemd[1]: session-17.scope: Deactivated successfully.
Nov 24 09:23:48 compute-1 systemd[1]: session-17.scope: Consumed 4.082s CPU time.
Nov 24 09:23:48 compute-1 systemd-logind[823]: Session 17 logged out. Waiting for processes to exit.
Nov 24 09:23:48 compute-1 systemd-logind[823]: Removed session 17.
Nov 24 09:23:52 compute-1 sshd-session[70192]: Accepted publickey for zuul from 192.168.122.30 port 49132 ssh2: ECDSA SHA256:MeSde0OmmlmFVnLWx/OKNxgeUUFhxUB3MA0eUyH5QEE
Nov 24 09:23:52 compute-1 systemd-logind[823]: New session 18 of user zuul.
Nov 24 09:23:53 compute-1 systemd[1]: Started Session 18 of User zuul.
Nov 24 09:23:53 compute-1 sshd-session[70192]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 24 09:23:54 compute-1 python3.9[70345]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 24 09:23:54 compute-1 sudo[70499]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qtmafnigmqxzqrpbojzohcbknvdcpqpq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976234.6964586-63-39335916522246/AnsiballZ_setup.py'
Nov 24 09:23:54 compute-1 sudo[70499]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:23:55 compute-1 python3.9[70501]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 24 09:23:55 compute-1 sudo[70499]: pam_unix(sudo:session): session closed for user root
Nov 24 09:23:55 compute-1 sudo[70583]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-enwazpzduwavcpdsuozmizrnvizsdvxo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976234.6964586-63-39335916522246/AnsiballZ_dnf.py'
Nov 24 09:23:55 compute-1 sudo[70583]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:23:56 compute-1 python3.9[70585]: ansible-ansible.legacy.dnf Invoked with name=['yum-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Nov 24 09:23:57 compute-1 sudo[70583]: pam_unix(sudo:session): session closed for user root
Nov 24 09:23:58 compute-1 python3.9[70736]: ansible-ansible.legacy.command Invoked with _raw_params=needs-restarting -r _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 09:23:59 compute-1 python3.9[70887]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/reboot_required/'] patterns=[] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
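yum-utils is installed for its needs-restarting helper: `needs-restarting -r` exits 0 when no reboot is pending and 1 when core packages such as the kernel have been updated, and the find checks /var/lib/openstack/reboot_required/ for marker files. How the two signals are combined is not visible in the log; a plausible sketch:

    if ! needs-restarting -r >/dev/null; then
        echo "updated kernel or core libraries: reboot required"
    fi
    # any file in this directory is treated as a pending-reboot marker
    find /var/lib/openstack/reboot_required/ -type f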
Nov 24 09:24:00 compute-1 python3.9[71037]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 09:24:00 compute-1 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 24 09:24:00 compute-1 python3.9[71188]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/config follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 09:24:01 compute-1 sshd-session[70195]: Connection closed by 192.168.122.30 port 49132
Nov 24 09:24:01 compute-1 sshd-session[70192]: pam_unix(sshd:session): session closed for user zuul
Nov 24 09:24:01 compute-1 systemd[1]: session-18.scope: Deactivated successfully.
Nov 24 09:24:01 compute-1 systemd[1]: session-18.scope: Consumed 5.468s CPU time.
Nov 24 09:24:01 compute-1 systemd-logind[823]: Session 18 logged out. Waiting for processes to exit.
Nov 24 09:24:01 compute-1 systemd-logind[823]: Removed session 18.
Nov 24 09:24:10 compute-1 sshd-session[71213]: Accepted publickey for zuul from 38.129.56.127 port 49808 ssh2: RSA SHA256:UBnduE29/r4JICQE22jchpBfdroBtCYqENielfKVzAM
Nov 24 09:24:10 compute-1 systemd-logind[823]: New session 19 of user zuul.
Nov 24 09:24:10 compute-1 systemd[1]: Started Session 19 of User zuul.
Nov 24 09:24:10 compute-1 sshd-session[71213]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 24 09:24:10 compute-1 sudo[71289]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vplcrciriklpdoprybekcmavtfmfbqia ; /usr/bin/python3'
Nov 24 09:24:10 compute-1 sudo[71289]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:24:11 compute-1 useradd[71293]: new group: name=ceph-admin, GID=42478
Nov 24 09:24:11 compute-1 useradd[71293]: new user: name=ceph-admin, UID=42477, GID=42478, home=/home/ceph-admin, shell=/bin/bash, from=none
Nov 24 09:24:11 compute-1 sudo[71289]: pam_unix(sudo:session): session closed for user root
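The first raw become task creates the ceph-admin service account that cephadm uses for the rest of the run. From the useradd log entries (one useradd call with a matching default group in the log, shown here as two explicit steps), the equivalent is:

    groupadd -g 42478 ceph-admin
    useradd -u 42477 -g 42478 -m -d /home/ceph-admin -s /bin/bash ceph-admin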
Nov 24 09:24:11 compute-1 sudo[71375]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xnnvputtqgvdggqfzcpouonzqqzfqejn ; /usr/bin/python3'
Nov 24 09:24:11 compute-1 sudo[71375]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:24:11 compute-1 sudo[71375]: pam_unix(sudo:session): session closed for user root
Nov 24 09:24:12 compute-1 sudo[71448]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qxovxxywjexnxttypfcidlxfyuvrwzkp ; /usr/bin/python3'
Nov 24 09:24:12 compute-1 sudo[71448]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:24:12 compute-1 sudo[71448]: pam_unix(sudo:session): session closed for user root
Nov 24 09:24:12 compute-1 sudo[71498]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ipkepxuzfizcshyuqhwydtyjutvkrtlj ; /usr/bin/python3'
Nov 24 09:24:12 compute-1 sudo[71498]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:24:12 compute-1 sudo[71498]: pam_unix(sudo:session): session closed for user root
Nov 24 09:24:13 compute-1 sudo[71524]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gmargapafwouwczlczmysfjhjaiqiwai ; /usr/bin/python3'
Nov 24 09:24:13 compute-1 sudo[71524]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:24:13 compute-1 sudo[71524]: pam_unix(sudo:session): session closed for user root
Nov 24 09:24:13 compute-1 sudo[71550]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kvmydycxeisrkyaotltmrfrfhhyxnerw ; /usr/bin/python3'
Nov 24 09:24:13 compute-1 sudo[71550]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:24:13 compute-1 sudo[71550]: pam_unix(sudo:session): session closed for user root
Nov 24 09:24:14 compute-1 sudo[71576]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sijesqmpecvcpdaldvhnrhnwkfzxctyn ; /usr/bin/python3'
Nov 24 09:24:14 compute-1 sudo[71576]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:24:14 compute-1 sudo[71576]: pam_unix(sudo:session): session closed for user root
Nov 24 09:24:14 compute-1 sudo[71654]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kfxzhuhokneitcdbsuqmpzayjgjpwqzm ; /usr/bin/python3'
Nov 24 09:24:14 compute-1 sudo[71654]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:24:14 compute-1 sudo[71654]: pam_unix(sudo:session): session closed for user root
Nov 24 09:24:14 compute-1 sudo[71727]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-purnevfkrynxlzyyttkugfkkgscfpgat ; /usr/bin/python3'
Nov 24 09:24:14 compute-1 sudo[71727]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:24:15 compute-1 sudo[71727]: pam_unix(sudo:session): session closed for user root
Nov 24 09:24:15 compute-1 sudo[71829]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cyxncellxolgtthiaealewynxnnvcukj ; /usr/bin/python3'
Nov 24 09:24:15 compute-1 sudo[71829]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:24:15 compute-1 sudo[71829]: pam_unix(sudo:session): session closed for user root
Nov 24 09:24:15 compute-1 sudo[71902]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-slwirlycwxkriunhnnjwpdtwczeubglx ; /usr/bin/python3'
Nov 24 09:24:16 compute-1 sudo[71902]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:24:16 compute-1 sudo[71902]: pam_unix(sudo:session): session closed for user root
Nov 24 09:24:16 compute-1 sudo[71952]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-khuyvjgucqiccawdfwmczmyieijsubam ; /usr/bin/python3'
Nov 24 09:24:16 compute-1 sudo[71952]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:24:17 compute-1 python3[71954]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 24 09:24:17 compute-1 sudo[71952]: pam_unix(sudo:session): session closed for user root
Nov 24 09:24:18 compute-1 sudo[72047]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-apusfdnynhzduewwwmcmitrorvqyoqqd ; /usr/bin/python3'
Nov 24 09:24:18 compute-1 sudo[72047]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:24:19 compute-1 python3[72049]: ansible-ansible.legacy.dnf Invoked with name=['util-linux', 'lvm2', 'jq', 'podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Nov 24 09:24:20 compute-1 sudo[72047]: pam_unix(sudo:session): session closed for user root
Nov 24 09:24:20 compute-1 sudo[72074]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-giieuuyerjurhvrwhccamjoznppccebe ; /usr/bin/python3'
Nov 24 09:24:20 compute-1 sudo[72074]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:24:20 compute-1 python3[72076]: ansible-ansible.builtin.stat Invoked with path=/dev/loop3 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Nov 24 09:24:20 compute-1 sudo[72074]: pam_unix(sudo:session): session closed for user root
Nov 24 09:24:21 compute-1 sudo[72100]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pkrwxtffumtssepbhxibfweowhsvqmph ; /usr/bin/python3'
Nov 24 09:24:21 compute-1 sudo[72100]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:24:21 compute-1 python3[72102]: ansible-ansible.legacy.command Invoked with _raw_params=dd if=/dev/zero of=/var/lib/ceph-osd-0.img bs=1 count=0 seek=20G
                                          losetup /dev/loop3 /var/lib/ceph-osd-0.img
                                          lsblk _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 09:24:21 compute-1 kernel: loop: module loaded
Nov 24 09:24:21 compute-1 kernel: loop3: detected capacity change from 0 to 41943040
Nov 24 09:24:21 compute-1 sudo[72100]: pam_unix(sudo:session): session closed for user root
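The dd invocation writes zero bytes and only seeks, so /var/lib/ceph-osd-0.img is created as a 20 GiB sparse file that consumes almost no disk until the OSD writes to it; 20 GiB is exactly the 41943040 512-byte sectors the kernel reports for loop3. As logged:

    dd if=/dev/zero of=/var/lib/ceph-osd-0.img bs=1 count=0 seek=20G   # sparse 20 GiB backing file
    losetup /dev/loop3 /var/lib/ceph-osd-0.img
    lsblk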
Nov 24 09:24:21 compute-1 sudo[72135]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ctqpqzumrvumtcntdjbrlbadcmimuvsx ; /usr/bin/python3'
Nov 24 09:24:21 compute-1 sudo[72135]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:24:21 compute-1 python3[72137]: ansible-ansible.legacy.command Invoked with _raw_params=pvcreate /dev/loop3
                                          vgcreate ceph_vg0 /dev/loop3
                                          lvcreate -n ceph_lv0 -l +100%FREE ceph_vg0
                                          lvs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 09:24:21 compute-1 chronyd[58430]: Selected source 23.133.168.246 (pool.ntp.org)
Nov 24 09:24:21 compute-1 lvm[72140]: PV /dev/loop3 not used.
Nov 24 09:24:21 compute-1 lvm[72149]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 24 09:24:21 compute-1 sudo[72135]: pam_unix(sudo:session): session closed for user root
Nov 24 09:24:21 compute-1 systemd[1]: Started /usr/sbin/lvm vgchange -aay --autoactivation event ceph_vg0.
Nov 24 09:24:22 compute-1 lvm[72151]:   1 logical volume(s) in volume group "ceph_vg0" now active
Nov 24 09:24:22 compute-1 systemd[1]: lvm-activate-ceph_vg0.service: Deactivated successfully.
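The loop device is then carved into a dedicated volume group with a single logical volume for the OSD; `-l +100%FREE` allocates every remaining extent, and the lvm-activate-ceph_vg0 service above is event-driven autoactivation from pvscan, not a playbook step. As logged:

    pvcreate /dev/loop3
    vgcreate ceph_vg0 /dev/loop3
    lvcreate -n ceph_lv0 -l +100%FREE ceph_vg0
    lvs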
Nov 24 09:24:22 compute-1 sudo[72227]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jjysycyewwpbetsbalidbmkvbvbfixzl ; /usr/bin/python3'
Nov 24 09:24:22 compute-1 sudo[72227]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:24:22 compute-1 python3[72229]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/ceph-osd-losetup-0.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 24 09:24:22 compute-1 sudo[72227]: pam_unix(sudo:session): session closed for user root
Nov 24 09:24:22 compute-1 sudo[72300]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ljfyapvlyothpmvudkgpwvvqssfrpgsa ; /usr/bin/python3'
Nov 24 09:24:22 compute-1 sudo[72300]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:24:22 compute-1 python3[72302]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1763976262.2802742-36787-166273025161474/source dest=/etc/systemd/system/ceph-osd-losetup-0.service mode=0644 force=True follow=False _original_basename=ceph-osd-losetup.service.j2 checksum=427b1db064a970126b729b07acf99fa7d0eecb9c backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:24:22 compute-1 sudo[72300]: pam_unix(sudo:session): session closed for user root
Nov 24 09:24:23 compute-1 sudo[72350]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vbxnhabhzrpuiksfkgsplstrydnzjhqw ; /usr/bin/python3'
Nov 24 09:24:23 compute-1 sudo[72350]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:24:23 compute-1 python3[72352]: ansible-ansible.builtin.systemd Invoked with state=started enabled=True name=ceph-osd-losetup-0.service daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 24 09:24:23 compute-1 systemd[1]: Reloading.
Nov 24 09:24:23 compute-1 systemd-rc-local-generator[72380]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 09:24:23 compute-1 systemd-sysv-generator[72385]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 09:24:24 compute-1 systemd[1]: Starting Ceph OSD losetup...
Nov 24 09:24:24 compute-1 bash[72392]: /dev/loop3: [64513]:4194934 (/var/lib/ceph-osd-0.img)
Nov 24 09:24:24 compute-1 systemd[1]: Finished Ceph OSD losetup.
Nov 24 09:24:24 compute-1 lvm[72393]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 24 09:24:24 compute-1 lvm[72393]: VG ceph_vg0 finished
Nov 24 09:24:24 compute-1 sudo[72350]: pam_unix(sudo:session): session closed for user root
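Loop device mappings do not survive a reboot, so a oneshot unit re-attaches the backing file at boot. The unit body itself is not in the log; the following is a hypothetical reconstruction consistent with its "Ceph OSD losetup" description and the losetup listing printed by bash[72392]:

    # /etc/systemd/system/ceph-osd-losetup-0.service -- hypothetical sketch;
    # the real file was templated from ceph-osd-losetup.service.j2
    [Unit]
    Description=Ceph OSD losetup

    [Service]
    Type=oneshot
    RemainAfterExit=yes
    # re-attach the backing file if needed, then print the mapping
    ExecStart=/bin/bash -c '/sbin/losetup /dev/loop3 /var/lib/ceph-osd-0.img || true; /sbin/losetup /dev/loop3'

    [Install]
    WantedBy=multi-user.target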
Nov 24 09:24:26 compute-1 python3[72417]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 24 09:25:50 compute-1 sshd-session[72461]: Accepted publickey for ceph-admin from 192.168.122.100 port 55750 ssh2: RSA SHA256:d901dNHY28a6fGfVJZBiZ/6DokdrVSFZakqDQ7cQMIA
Nov 24 09:25:50 compute-1 systemd[1]: Created slice User Slice of UID 42477.
Nov 24 09:25:50 compute-1 systemd[1]: Starting User Runtime Directory /run/user/42477...
Nov 24 09:25:50 compute-1 systemd-logind[823]: New session 20 of user ceph-admin.
Nov 24 09:25:50 compute-1 systemd[1]: Finished User Runtime Directory /run/user/42477.
Nov 24 09:25:50 compute-1 systemd[1]: Starting User Manager for UID 42477...
Nov 24 09:25:50 compute-1 systemd[72465]: pam_unix(systemd-user:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Nov 24 09:25:51 compute-1 systemd[72465]: Queued start job for default target Main User Target.
Nov 24 09:25:51 compute-1 systemd[72465]: Created slice User Application Slice.
Nov 24 09:25:51 compute-1 systemd[72465]: Started Mark boot as successful after the user session has run 2 minutes.
Nov 24 09:25:51 compute-1 systemd[72465]: Started Daily Cleanup of User's Temporary Directories.
Nov 24 09:25:51 compute-1 systemd[72465]: Reached target Paths.
Nov 24 09:25:51 compute-1 systemd[72465]: Reached target Timers.
Nov 24 09:25:51 compute-1 systemd[72465]: Starting D-Bus User Message Bus Socket...
Nov 24 09:25:51 compute-1 sshd-session[72478]: Accepted publickey for ceph-admin from 192.168.122.100 port 55758 ssh2: RSA SHA256:d901dNHY28a6fGfVJZBiZ/6DokdrVSFZakqDQ7cQMIA
Nov 24 09:25:51 compute-1 systemd[72465]: Starting Create User's Volatile Files and Directories...
Nov 24 09:25:51 compute-1 systemd-logind[823]: New session 22 of user ceph-admin.
Nov 24 09:25:51 compute-1 systemd[72465]: Listening on D-Bus User Message Bus Socket.
Nov 24 09:25:51 compute-1 systemd[72465]: Finished Create User's Volatile Files and Directories.
Nov 24 09:25:51 compute-1 systemd[72465]: Reached target Sockets.
Nov 24 09:25:51 compute-1 systemd[72465]: Reached target Basic System.
Nov 24 09:25:51 compute-1 systemd[72465]: Reached target Main User Target.
Nov 24 09:25:51 compute-1 systemd[72465]: Startup finished in 121ms.
Nov 24 09:25:51 compute-1 systemd[1]: Started User Manager for UID 42477.
Nov 24 09:25:51 compute-1 systemd[1]: Started Session 20 of User ceph-admin.
Nov 24 09:25:51 compute-1 systemd[1]: Started Session 22 of User ceph-admin.
Nov 24 09:25:51 compute-1 sshd-session[72461]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Nov 24 09:25:51 compute-1 sshd-session[72478]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Nov 24 09:25:51 compute-1 sudo[72485]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 09:25:51 compute-1 sudo[72485]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:25:51 compute-1 sudo[72485]: pam_unix(sudo:session): session closed for user root
Nov 24 09:25:51 compute-1 sshd-session[72510]: Accepted publickey for ceph-admin from 192.168.122.100 port 55774 ssh2: RSA SHA256:d901dNHY28a6fGfVJZBiZ/6DokdrVSFZakqDQ7cQMIA
Nov 24 09:25:51 compute-1 systemd-logind[823]: New session 23 of user ceph-admin.
Nov 24 09:25:51 compute-1 systemd[1]: Started Session 23 of User ceph-admin.
Nov 24 09:25:51 compute-1 sshd-session[72510]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Nov 24 09:25:51 compute-1 sudo[72514]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 check-host --expect-hostname compute-1
Nov 24 09:25:51 compute-1 sudo[72514]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:25:51 compute-1 sudo[72514]: pam_unix(sudo:session): session closed for user root
Nov 24 09:25:51 compute-1 sshd-session[72539]: Accepted publickey for ceph-admin from 192.168.122.100 port 55784 ssh2: RSA SHA256:d901dNHY28a6fGfVJZBiZ/6DokdrVSFZakqDQ7cQMIA
Nov 24 09:25:51 compute-1 systemd-logind[823]: New session 24 of user ceph-admin.
Nov 24 09:25:51 compute-1 systemd[1]: Started Session 24 of User ceph-admin.
Nov 24 09:25:51 compute-1 sshd-session[72539]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Nov 24 09:25:51 compute-1 sudo[72543]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36
Nov 24 09:25:51 compute-1 sudo[72543]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:25:51 compute-1 sudo[72543]: pam_unix(sudo:session): session closed for user root
Nov 24 09:25:52 compute-1 sshd-session[72568]: Accepted publickey for ceph-admin from 192.168.122.100 port 55788 ssh2: RSA SHA256:d901dNHY28a6fGfVJZBiZ/6DokdrVSFZakqDQ7cQMIA
Nov 24 09:25:52 compute-1 systemd-logind[823]: New session 25 of user ceph-admin.
Nov 24 09:25:52 compute-1 systemd[1]: Started Session 25 of User ceph-admin.
Nov 24 09:25:52 compute-1 sshd-session[72568]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Nov 24 09:25:52 compute-1 sudo[72572]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64
Nov 24 09:25:52 compute-1 sudo[72572]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:25:52 compute-1 sudo[72572]: pam_unix(sudo:session): session closed for user root
Nov 24 09:25:52 compute-1 sshd-session[72597]: Accepted publickey for ceph-admin from 192.168.122.100 port 55794 ssh2: RSA SHA256:d901dNHY28a6fGfVJZBiZ/6DokdrVSFZakqDQ7cQMIA
Nov 24 09:25:52 compute-1 systemd-logind[823]: New session 26 of user ceph-admin.
Nov 24 09:25:52 compute-1 systemd[1]: Started Session 26 of User ceph-admin.
Nov 24 09:25:52 compute-1 sshd-session[72597]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Nov 24 09:25:52 compute-1 sudo[72601]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-84a084c3-61a7-5de7-8207-1f88efa59a64/var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64
Nov 24 09:25:52 compute-1 sudo[72601]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:25:52 compute-1 sudo[72601]: pam_unix(sudo:session): session closed for user root
Nov 24 09:25:52 compute-1 sshd-session[72626]: Accepted publickey for ceph-admin from 192.168.122.100 port 55808 ssh2: RSA SHA256:d901dNHY28a6fGfVJZBiZ/6DokdrVSFZakqDQ7cQMIA
Nov 24 09:25:52 compute-1 systemd-logind[823]: New session 27 of user ceph-admin.
Nov 24 09:25:52 compute-1 systemd[1]: Started Session 27 of User ceph-admin.
Nov 24 09:25:52 compute-1 sshd-session[72626]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Nov 24 09:25:52 compute-1 sudo[72630]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-84a084c3-61a7-5de7-8207-1f88efa59a64/var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36.new
Nov 24 09:25:52 compute-1 sudo[72630]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:25:52 compute-1 sudo[72630]: pam_unix(sudo:session): session closed for user root
Nov 24 09:25:53 compute-1 sshd-session[72655]: Accepted publickey for ceph-admin from 192.168.122.100 port 55814 ssh2: RSA SHA256:d901dNHY28a6fGfVJZBiZ/6DokdrVSFZakqDQ7cQMIA
Nov 24 09:25:53 compute-1 systemd-logind[823]: New session 28 of user ceph-admin.
Nov 24 09:25:53 compute-1 systemd[1]: Started Session 28 of User ceph-admin.
Nov 24 09:25:53 compute-1 sshd-session[72655]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Nov 24 09:25:53 compute-1 sudo[72659]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-84a084c3-61a7-5de7-8207-1f88efa59a64
Nov 24 09:25:53 compute-1 sudo[72659]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:25:53 compute-1 sudo[72659]: pam_unix(sudo:session): session closed for user root
Nov 24 09:25:53 compute-1 sshd-session[72684]: Accepted publickey for ceph-admin from 192.168.122.100 port 55826 ssh2: RSA SHA256:d901dNHY28a6fGfVJZBiZ/6DokdrVSFZakqDQ7cQMIA
Nov 24 09:25:53 compute-1 systemd-logind[823]: New session 29 of user ceph-admin.
Nov 24 09:25:53 compute-1 systemd[1]: Started Session 29 of User ceph-admin.
Nov 24 09:25:53 compute-1 sshd-session[72684]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Nov 24 09:25:53 compute-1 sudo[72688]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-84a084c3-61a7-5de7-8207-1f88efa59a64/var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36.new
Nov 24 09:25:53 compute-1 sudo[72688]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:25:53 compute-1 sudo[72688]: pam_unix(sudo:session): session closed for user root
Nov 24 09:25:53 compute-1 sshd-session[72713]: Accepted publickey for ceph-admin from 192.168.122.100 port 55836 ssh2: RSA SHA256:d901dNHY28a6fGfVJZBiZ/6DokdrVSFZakqDQ7cQMIA
Nov 24 09:25:53 compute-1 systemd-logind[823]: New session 30 of user ceph-admin.
Nov 24 09:25:53 compute-1 systemd[1]: Started Session 30 of User ceph-admin.
Nov 24 09:25:53 compute-1 sshd-session[72713]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Nov 24 09:25:54 compute-1 sshd-session[72740]: Accepted publickey for ceph-admin from 192.168.122.100 port 55844 ssh2: RSA SHA256:d901dNHY28a6fGfVJZBiZ/6DokdrVSFZakqDQ7cQMIA
Nov 24 09:25:54 compute-1 systemd-logind[823]: New session 31 of user ceph-admin.
Nov 24 09:25:54 compute-1 systemd[1]: Started Session 31 of User ceph-admin.
Nov 24 09:25:54 compute-1 sshd-session[72740]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Nov 24 09:25:54 compute-1 sudo[72744]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-84a084c3-61a7-5de7-8207-1f88efa59a64/var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36.new /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36
Nov 24 09:25:54 compute-1 sudo[72744]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:25:54 compute-1 sudo[72744]: pam_unix(sudo:session): session closed for user root
Nov 24 09:25:55 compute-1 sshd-session[72769]: Accepted publickey for ceph-admin from 192.168.122.100 port 55852 ssh2: RSA SHA256:d901dNHY28a6fGfVJZBiZ/6DokdrVSFZakqDQ7cQMIA
Nov 24 09:25:55 compute-1 systemd-logind[823]: New session 32 of user ceph-admin.
Nov 24 09:25:55 compute-1 systemd[1]: Started Session 32 of User ceph-admin.
Nov 24 09:25:55 compute-1 sshd-session[72769]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Nov 24 09:25:55 compute-1 sudo[72773]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 check-host --expect-hostname compute-1
Nov 24 09:25:55 compute-1 sudo[72773]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:25:55 compute-1 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 24 09:25:55 compute-1 sudo[72773]: pam_unix(sudo:session): session closed for user root
Nov 24 09:25:55 compute-1 sudo[72819]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 09:25:55 compute-1 sudo[72819]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:25:55 compute-1 sudo[72819]: pam_unix(sudo:session): session closed for user root
Nov 24 09:25:55 compute-1 sudo[72844]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 check-host
Nov 24 09:25:55 compute-1 sudo[72844]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:25:55 compute-1 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 24 09:25:55 compute-1 sudo[72844]: pam_unix(sudo:session): session closed for user root
Nov 24 09:25:55 compute-1 sudo[72891]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 09:25:55 compute-1 sudo[72891]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:25:55 compute-1 sudo[72891]: pam_unix(sudo:session): session closed for user root
Nov 24 09:25:55 compute-1 sudo[72916]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ls
Nov 24 09:25:55 compute-1 sudo[72916]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:25:56 compute-1 sudo[72916]: pam_unix(sudo:session): session closed for user root
Nov 24 09:25:56 compute-1 sudo[72976]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 09:25:56 compute-1 sudo[72976]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:25:56 compute-1 sudo[72976]: pam_unix(sudo:session): session closed for user root
Nov 24 09:25:56 compute-1 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 24 09:25:56 compute-1 sudo[73001]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Nov 24 09:25:56 compute-1 sudo[73001]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:25:56 compute-1 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 24 09:25:56 compute-1 systemd[1]: proc-sys-fs-binfmt_misc.automount: Got automount request for /proc/sys/fs/binfmt_misc, triggered by 73039 (sysctl)
Nov 24 09:25:56 compute-1 systemd[1]: Mounting Arbitrary Executable File Formats File System...
Nov 24 09:25:56 compute-1 systemd[1]: Mounted Arbitrary Executable File Formats File System.
Nov 24 09:25:56 compute-1 sudo[73001]: pam_unix(sudo:session): session closed for user root
Nov 24 09:25:57 compute-1 sudo[73061]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 09:25:57 compute-1 sudo[73061]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:25:57 compute-1 sudo[73061]: pam_unix(sudo:session): session closed for user root
Nov 24 09:25:57 compute-1 sudo[73086]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 list-networks
Nov 24 09:25:57 compute-1 sudo[73086]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:25:57 compute-1 sudo[73086]: pam_unix(sudo:session): session closed for user root
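The sudo audit trail above is cephadm's per-host refresh loop: the orchestrator resolves python3 with `which`, then drives the copied cephadm binary under /var/lib/ceph/<fsid>/ through check-host, ls, gather-facts, and list-networks. A minimal sketch of re-running the gather-facts step out of band, assuming the binary path, fsid, and timeout from the log and that gather-facts prints one JSON fact dump on stdout:

```python
# Hypothetical re-run of the gather-facts call logged above. The binary path,
# fsid, and --timeout value are copied from the log; treat the rest as an
# illustration, not an official interface.
import json
import subprocess

FSID = "84a084c3-61a7-5de7-8207-1f88efa59a64"
CEPHADM = ("/var/lib/ceph/" + FSID + "/cephadm."
           "1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36")

out = subprocess.run(
    ["sudo", "/bin/python3", CEPHADM, "--timeout", "895", "gather-facts"],
    check=True, capture_output=True, text=True,
).stdout

facts = json.loads(out)    # gather-facts emits a single JSON object
print(sorted(facts)[:10])  # peek at the first few fact keys
```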
Nov 24 09:25:57 compute-1 sudo[73130]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 09:25:57 compute-1 sudo[73130]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:25:57 compute-1 sudo[73130]: pam_unix(sudo:session): session closed for user root
Nov 24 09:25:57 compute-1 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 24 09:25:57 compute-1 sudo[73155]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 84a084c3-61a7-5de7-8207-1f88efa59a64 -- inventory --format=json-pretty --filter-for-batch
Nov 24 09:25:57 compute-1 sudo[73155]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:25:57 compute-1 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 24 09:25:57 compute-1 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 24 09:25:59 compute-1 systemd[1]: var-lib-containers-storage-overlay-compat2894534498-lower\x2dmapped.mount: Deactivated successfully.
Nov 24 09:26:13 compute-1 podman[73217]: 2025-11-24 09:26:13.76347605 +0000 UTC m=+15.977551177 container create 86422483aa9765a8ef931e3c87031d2a9c9d7f842a81b0ad094bb3b5d96fb500 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_euler, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 24 09:26:13 compute-1 systemd[1]: var-lib-containers-storage-overlay-volatile\x2dcheck1531498207-merged.mount: Deactivated successfully.
Nov 24 09:26:13 compute-1 systemd[1]: Created slice Virtual Machine and Container Slice.
Nov 24 09:26:13 compute-1 systemd[1]: Started libpod-conmon-86422483aa9765a8ef931e3c87031d2a9c9d7f842a81b0ad094bb3b5d96fb500.scope.
Nov 24 09:26:13 compute-1 podman[73217]: 2025-11-24 09:26:13.749037736 +0000 UTC m=+15.963112893 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 09:26:13 compute-1 systemd[1]: Started libcrun container.
Nov 24 09:26:13 compute-1 podman[73217]: 2025-11-24 09:26:13.854716401 +0000 UTC m=+16.068791548 container init 86422483aa9765a8ef931e3c87031d2a9c9d7f842a81b0ad094bb3b5d96fb500 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_euler, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 09:26:13 compute-1 podman[73217]: 2025-11-24 09:26:13.866533278 +0000 UTC m=+16.080608405 container start 86422483aa9765a8ef931e3c87031d2a9c9d7f842a81b0ad094bb3b5d96fb500 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_euler, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.40.1)
Nov 24 09:26:13 compute-1 podman[73217]: 2025-11-24 09:26:13.869740598 +0000 UTC m=+16.083815745 container attach 86422483aa9765a8ef931e3c87031d2a9c9d7f842a81b0ad094bb3b5d96fb500 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_euler, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1)
Nov 24 09:26:13 compute-1 reverent_euler[73280]: 167 167
Nov 24 09:26:13 compute-1 systemd[1]: libpod-86422483aa9765a8ef931e3c87031d2a9c9d7f842a81b0ad094bb3b5d96fb500.scope: Deactivated successfully.
Nov 24 09:26:13 compute-1 podman[73217]: 2025-11-24 09:26:13.87296978 +0000 UTC m=+16.087044907 container died 86422483aa9765a8ef931e3c87031d2a9c9d7f842a81b0ad094bb3b5d96fb500 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_euler, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_REF=squid, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 09:26:13 compute-1 systemd[1]: var-lib-containers-storage-overlay-3e00e2893bc3f25bf411230888aca3fcc44a7ba62c69e0bf1259f1dbeddf4fc7-merged.mount: Deactivated successfully.
Nov 24 09:26:13 compute-1 podman[73217]: 2025-11-24 09:26:13.908211645 +0000 UTC m=+16.122286772 container remove 86422483aa9765a8ef931e3c87031d2a9c9d7f842a81b0ad094bb3b5d96fb500 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_euler, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325)
Nov 24 09:26:13 compute-1 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 24 09:26:13 compute-1 systemd[1]: libpod-conmon-86422483aa9765a8ef931e3c87031d2a9c9d7f842a81b0ad094bb3b5d96fb500.scope: Deactivated successfully.
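The throwaway container reverent_euler above exists only to print "167 167": before deploying anything, cephadm asks the image which uid/gid its Ceph daemons run as (167:167 in upstream Ceph images), so host directories can be chowned to match. A hypothetical equivalent probe, using the image digest pulled in the log:

```python
# Hypothetical stand-in for the "167 167" probe: ask the image which uid/gid
# owns /var/lib/ceph so host paths can be chowned to match. The exact path
# cephadm stats may differ; the digest is the one pulled in the log.
import subprocess

IMAGE = ("quay.io/ceph/ceph@sha256:"
         "7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec")

uid_gid = subprocess.run(
    ["sudo", "podman", "run", "--rm", IMAGE,
     "stat", "-c", "%u %g", "/var/lib/ceph"],
    check=True, capture_output=True, text=True,
).stdout.split()
print(uid_gid)  # expected: ['167', '167']
```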
Nov 24 09:26:14 compute-1 podman[73306]: 2025-11-24 09:26:14.079602819 +0000 UTC m=+0.042952869 container create 501c9508cc468cb706ecd43b7409e1abf530767e5730b0dd6c9129dfc9c0d687 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_sanderson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Nov 24 09:26:14 compute-1 systemd[1]: Started libpod-conmon-501c9508cc468cb706ecd43b7409e1abf530767e5730b0dd6c9129dfc9c0d687.scope.
Nov 24 09:26:14 compute-1 systemd[1]: Started libcrun container.
Nov 24 09:26:14 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/77a674ff89c625a52d8946547f52f06c2b425868a20e7ffd089dfe5be19b5007/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 09:26:14 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/77a674ff89c625a52d8946547f52f06c2b425868a20e7ffd089dfe5be19b5007/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 09:26:14 compute-1 podman[73306]: 2025-11-24 09:26:14.14850673 +0000 UTC m=+0.111856790 container init 501c9508cc468cb706ecd43b7409e1abf530767e5730b0dd6c9129dfc9c0d687 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_sanderson, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 24 09:26:14 compute-1 podman[73306]: 2025-11-24 09:26:14.154308906 +0000 UTC m=+0.117658936 container start 501c9508cc468cb706ecd43b7409e1abf530767e5730b0dd6c9129dfc9c0d687 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_sanderson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 24 09:26:14 compute-1 podman[73306]: 2025-11-24 09:26:14.157094766 +0000 UTC m=+0.120444806 container attach 501c9508cc468cb706ecd43b7409e1abf530767e5730b0dd6c9129dfc9c0d687 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_sanderson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 24 09:26:14 compute-1 podman[73306]: 2025-11-24 09:26:14.064176552 +0000 UTC m=+0.027526612 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 09:26:14 compute-1 objective_sanderson[73322]: [
Nov 24 09:26:14 compute-1 objective_sanderson[73322]:     {
Nov 24 09:26:14 compute-1 objective_sanderson[73322]:         "available": false,
Nov 24 09:26:14 compute-1 objective_sanderson[73322]:         "being_replaced": false,
Nov 24 09:26:14 compute-1 objective_sanderson[73322]:         "ceph_device_lvm": false,
Nov 24 09:26:14 compute-1 objective_sanderson[73322]:         "device_id": "QEMU_DVD-ROM_QM00001",
Nov 24 09:26:14 compute-1 objective_sanderson[73322]:         "lsm_data": {},
Nov 24 09:26:14 compute-1 objective_sanderson[73322]:         "lvs": [],
Nov 24 09:26:14 compute-1 objective_sanderson[73322]:         "path": "/dev/sr0",
Nov 24 09:26:14 compute-1 objective_sanderson[73322]:         "rejected_reasons": [
Nov 24 09:26:14 compute-1 objective_sanderson[73322]:             "Insufficient space (<5GB)",
Nov 24 09:26:14 compute-1 objective_sanderson[73322]:             "Has a FileSystem"
Nov 24 09:26:14 compute-1 objective_sanderson[73322]:         ],
Nov 24 09:26:14 compute-1 objective_sanderson[73322]:         "sys_api": {
Nov 24 09:26:14 compute-1 objective_sanderson[73322]:             "actuators": null,
Nov 24 09:26:14 compute-1 objective_sanderson[73322]:             "device_nodes": [
Nov 24 09:26:14 compute-1 objective_sanderson[73322]:                 "sr0"
Nov 24 09:26:14 compute-1 objective_sanderson[73322]:             ],
Nov 24 09:26:14 compute-1 objective_sanderson[73322]:             "devname": "sr0",
Nov 24 09:26:14 compute-1 objective_sanderson[73322]:             "human_readable_size": "482.00 KB",
Nov 24 09:26:14 compute-1 objective_sanderson[73322]:             "id_bus": "ata",
Nov 24 09:26:14 compute-1 objective_sanderson[73322]:             "model": "QEMU DVD-ROM",
Nov 24 09:26:14 compute-1 objective_sanderson[73322]:             "nr_requests": "2",
Nov 24 09:26:14 compute-1 objective_sanderson[73322]:             "parent": "/dev/sr0",
Nov 24 09:26:14 compute-1 objective_sanderson[73322]:             "partitions": {},
Nov 24 09:26:14 compute-1 objective_sanderson[73322]:             "path": "/dev/sr0",
Nov 24 09:26:14 compute-1 objective_sanderson[73322]:             "removable": "1",
Nov 24 09:26:14 compute-1 objective_sanderson[73322]:             "rev": "2.5+",
Nov 24 09:26:14 compute-1 objective_sanderson[73322]:             "ro": "0",
Nov 24 09:26:14 compute-1 objective_sanderson[73322]:             "rotational": "1",
Nov 24 09:26:14 compute-1 objective_sanderson[73322]:             "sas_address": "",
Nov 24 09:26:14 compute-1 objective_sanderson[73322]:             "sas_device_handle": "",
Nov 24 09:26:14 compute-1 objective_sanderson[73322]:             "scheduler_mode": "mq-deadline",
Nov 24 09:26:14 compute-1 objective_sanderson[73322]:             "sectors": 0,
Nov 24 09:26:14 compute-1 objective_sanderson[73322]:             "sectorsize": "2048",
Nov 24 09:26:14 compute-1 objective_sanderson[73322]:             "size": 493568.0,
Nov 24 09:26:14 compute-1 objective_sanderson[73322]:             "support_discard": "2048",
Nov 24 09:26:14 compute-1 objective_sanderson[73322]:             "type": "disk",
Nov 24 09:26:14 compute-1 objective_sanderson[73322]:             "vendor": "QEMU"
Nov 24 09:26:14 compute-1 objective_sanderson[73322]:         }
Nov 24 09:26:14 compute-1 objective_sanderson[73322]:     }
Nov 24 09:26:14 compute-1 objective_sanderson[73322]: ]
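The objective_sanderson container is the `ceph-volume inventory --format=json-pretty --filter-for-batch` run started at 09:25:57, and the JSON above is its entire report: the only device the filter sees is /dev/sr0, the QEMU DVD drive (~482 KB), rejected for both insufficient space and an existing filesystem, so no raw device qualifies for the OSD batch. A minimal sketch that splits such a report the same way, using the field names shown above:

```python
# Split a ceph-volume inventory report (the JSON printed above) into usable
# and rejected devices. Field names ("available", "path", "rejected_reasons")
# are taken directly from the log output.
import json

def split_inventory(report_text: str):
    usable, rejected = [], []
    for dev in json.loads(report_text):
        if dev.get("available"):
            usable.append(dev["path"])
        else:
            rejected.append((dev["path"], dev.get("rejected_reasons", [])))
    return usable, rejected

# For the report above:
#   usable   == []
#   rejected == [('/dev/sr0',
#                 ['Insufficient space (<5GB)', 'Has a FileSystem'])]
```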
Nov 24 09:26:14 compute-1 systemd[1]: libpod-501c9508cc468cb706ecd43b7409e1abf530767e5730b0dd6c9129dfc9c0d687.scope: Deactivated successfully.
Nov 24 09:26:14 compute-1 podman[73306]: 2025-11-24 09:26:14.775897158 +0000 UTC m=+0.739247198 container died 501c9508cc468cb706ecd43b7409e1abf530767e5730b0dd6c9129dfc9c0d687 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_sanderson, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, ceph=True)
Nov 24 09:26:14 compute-1 systemd[1]: var-lib-containers-storage-overlay-77a674ff89c625a52d8946547f52f06c2b425868a20e7ffd089dfe5be19b5007-merged.mount: Deactivated successfully.
Nov 24 09:26:14 compute-1 podman[73306]: 2025-11-24 09:26:14.819660097 +0000 UTC m=+0.783010137 container remove 501c9508cc468cb706ecd43b7409e1abf530767e5730b0dd6c9129dfc9c0d687 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_sanderson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 24 09:26:14 compute-1 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 24 09:26:14 compute-1 systemd[1]: libpod-conmon-501c9508cc468cb706ecd43b7409e1abf530767e5730b0dd6c9129dfc9c0d687.scope: Deactivated successfully.
Nov 24 09:26:14 compute-1 sudo[73155]: pam_unix(sudo:session): session closed for user root
Nov 24 09:26:14 compute-1 sudo[74325]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Nov 24 09:26:14 compute-1 sudo[74325]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:26:14 compute-1 sudo[74325]: pam_unix(sudo:session): session closed for user root
Nov 24 09:26:15 compute-1 sudo[74350]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-84a084c3-61a7-5de7-8207-1f88efa59a64/etc/ceph
Nov 24 09:26:15 compute-1 sudo[74350]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:26:15 compute-1 sudo[74350]: pam_unix(sudo:session): session closed for user root
Nov 24 09:26:15 compute-1 sudo[74375]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-84a084c3-61a7-5de7-8207-1f88efa59a64/etc/ceph/ceph.conf.new
Nov 24 09:26:15 compute-1 sudo[74375]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:26:15 compute-1 sudo[74375]: pam_unix(sudo:session): session closed for user root
Nov 24 09:26:15 compute-1 sudo[74400]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-84a084c3-61a7-5de7-8207-1f88efa59a64
Nov 24 09:26:15 compute-1 sudo[74400]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:26:15 compute-1 sudo[74400]: pam_unix(sudo:session): session closed for user root
Nov 24 09:26:15 compute-1 sudo[74425]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-84a084c3-61a7-5de7-8207-1f88efa59a64/etc/ceph/ceph.conf.new
Nov 24 09:26:15 compute-1 sudo[74425]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:26:15 compute-1 sudo[74425]: pam_unix(sudo:session): session closed for user root
Nov 24 09:26:15 compute-1 sudo[74473]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-84a084c3-61a7-5de7-8207-1f88efa59a64/etc/ceph/ceph.conf.new
Nov 24 09:26:15 compute-1 sudo[74473]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:26:15 compute-1 sudo[74473]: pam_unix(sudo:session): session closed for user root
Nov 24 09:26:15 compute-1 sudo[74498]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-84a084c3-61a7-5de7-8207-1f88efa59a64/etc/ceph/ceph.conf.new
Nov 24 09:26:15 compute-1 sudo[74498]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:26:15 compute-1 sudo[74498]: pam_unix(sudo:session): session closed for user root
Nov 24 09:26:15 compute-1 sudo[74523]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-84a084c3-61a7-5de7-8207-1f88efa59a64/etc/ceph/ceph.conf.new /etc/ceph/ceph.conf
Nov 24 09:26:15 compute-1 sudo[74523]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:26:15 compute-1 sudo[74523]: pam_unix(sudo:session): session closed for user root
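The mkdir/touch/chown/chmod/mv chain that just ran is a stage-then-rename write: /etc/ceph/ceph.conf is assembled under /tmp/cephadm-<fsid>/ with ownership and mode fixed while the file is still private, and only the final mv makes it visible at its real path, so readers never observe a half-written config. The same dance repeats below for the /var/lib/ceph/<fsid>/config copy and for the admin keyring. A sketch of the pattern (a hypothetical helper, not cephadm's own code):

```python
# Stage-then-rename, as in the sudo sequence above (hypothetical helper).
# Note: os.replace() is only atomic when the staging file and the target sit
# on the same filesystem, so staging next to the destination is safest.
import os
import tempfile

def publish(path: str, data: bytes, mode: int = 0o644) -> None:
    d = os.path.dirname(path)
    os.makedirs(d, exist_ok=True)                     # mkdir -p
    fd, tmp = tempfile.mkstemp(dir=d, suffix=".new")  # touch ....new
    try:
        with os.fdopen(fd, "wb") as f:
            f.write(data)
            os.fchmod(f.fileno(), mode)               # chmod while private
            os.fchown(f.fileno(), 0, 0)               # chown 0:0 (needs root)
        os.replace(tmp, path)                         # mv ....new -> path
    except BaseException:
        os.unlink(tmp)
        raise
```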
Nov 24 09:26:15 compute-1 sudo[74548]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/config
Nov 24 09:26:15 compute-1 sudo[74548]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:26:15 compute-1 sudo[74548]: pam_unix(sudo:session): session closed for user root
Nov 24 09:26:15 compute-1 sudo[74573]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-84a084c3-61a7-5de7-8207-1f88efa59a64/var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/config
Nov 24 09:26:15 compute-1 sudo[74573]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:26:15 compute-1 sudo[74573]: pam_unix(sudo:session): session closed for user root
Nov 24 09:26:15 compute-1 sudo[74598]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-84a084c3-61a7-5de7-8207-1f88efa59a64/var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/config/ceph.conf.new
Nov 24 09:26:15 compute-1 sudo[74598]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:26:15 compute-1 sudo[74598]: pam_unix(sudo:session): session closed for user root
Nov 24 09:26:15 compute-1 sudo[74623]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-84a084c3-61a7-5de7-8207-1f88efa59a64
Nov 24 09:26:15 compute-1 sudo[74623]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:26:15 compute-1 sudo[74623]: pam_unix(sudo:session): session closed for user root
Nov 24 09:26:15 compute-1 sudo[74648]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-84a084c3-61a7-5de7-8207-1f88efa59a64/var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/config/ceph.conf.new
Nov 24 09:26:15 compute-1 sudo[74648]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:26:15 compute-1 sudo[74648]: pam_unix(sudo:session): session closed for user root
Nov 24 09:26:15 compute-1 sudo[74696]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-84a084c3-61a7-5de7-8207-1f88efa59a64/var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/config/ceph.conf.new
Nov 24 09:26:15 compute-1 sudo[74696]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:26:15 compute-1 sudo[74696]: pam_unix(sudo:session): session closed for user root
Nov 24 09:26:15 compute-1 sudo[74721]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-84a084c3-61a7-5de7-8207-1f88efa59a64/var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/config/ceph.conf.new
Nov 24 09:26:15 compute-1 sudo[74721]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:26:15 compute-1 sudo[74721]: pam_unix(sudo:session): session closed for user root
Nov 24 09:26:15 compute-1 sudo[74746]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-84a084c3-61a7-5de7-8207-1f88efa59a64/var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/config/ceph.conf.new /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/config/ceph.conf
Nov 24 09:26:15 compute-1 sudo[74746]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:26:15 compute-1 sudo[74746]: pam_unix(sudo:session): session closed for user root
Nov 24 09:26:15 compute-1 sudo[74771]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Nov 24 09:26:15 compute-1 sudo[74771]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:26:15 compute-1 sudo[74771]: pam_unix(sudo:session): session closed for user root
Nov 24 09:26:15 compute-1 sudo[74796]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-84a084c3-61a7-5de7-8207-1f88efa59a64/etc/ceph
Nov 24 09:26:16 compute-1 sudo[74796]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:26:16 compute-1 sudo[74796]: pam_unix(sudo:session): session closed for user root
Nov 24 09:26:16 compute-1 sudo[74821]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-84a084c3-61a7-5de7-8207-1f88efa59a64/etc/ceph/ceph.client.admin.keyring.new
Nov 24 09:26:16 compute-1 sudo[74821]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:26:16 compute-1 sudo[74821]: pam_unix(sudo:session): session closed for user root
Nov 24 09:26:16 compute-1 sudo[74846]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-84a084c3-61a7-5de7-8207-1f88efa59a64
Nov 24 09:26:16 compute-1 sudo[74846]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:26:16 compute-1 sudo[74846]: pam_unix(sudo:session): session closed for user root
Nov 24 09:26:16 compute-1 sudo[74871]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-84a084c3-61a7-5de7-8207-1f88efa59a64/etc/ceph/ceph.client.admin.keyring.new
Nov 24 09:26:16 compute-1 sudo[74871]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:26:16 compute-1 sudo[74871]: pam_unix(sudo:session): session closed for user root
Nov 24 09:26:16 compute-1 sudo[74919]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-84a084c3-61a7-5de7-8207-1f88efa59a64/etc/ceph/ceph.client.admin.keyring.new
Nov 24 09:26:16 compute-1 sudo[74919]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:26:16 compute-1 sudo[74919]: pam_unix(sudo:session): session closed for user root
Nov 24 09:26:16 compute-1 sudo[74944]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-84a084c3-61a7-5de7-8207-1f88efa59a64/etc/ceph/ceph.client.admin.keyring.new
Nov 24 09:26:16 compute-1 sudo[74944]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:26:16 compute-1 sudo[74944]: pam_unix(sudo:session): session closed for user root
Nov 24 09:26:16 compute-1 sudo[74969]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-84a084c3-61a7-5de7-8207-1f88efa59a64/etc/ceph/ceph.client.admin.keyring.new /etc/ceph/ceph.client.admin.keyring
Nov 24 09:26:16 compute-1 sudo[74969]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:26:16 compute-1 sudo[74969]: pam_unix(sudo:session): session closed for user root
Nov 24 09:26:16 compute-1 sudo[74994]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/config
Nov 24 09:26:16 compute-1 sudo[74994]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:26:16 compute-1 sudo[74994]: pam_unix(sudo:session): session closed for user root
Nov 24 09:26:16 compute-1 sudo[75019]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-84a084c3-61a7-5de7-8207-1f88efa59a64/var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/config
Nov 24 09:26:16 compute-1 sudo[75019]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:26:16 compute-1 sudo[75019]: pam_unix(sudo:session): session closed for user root
Nov 24 09:26:16 compute-1 sudo[75044]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-84a084c3-61a7-5de7-8207-1f88efa59a64/var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/config/ceph.client.admin.keyring.new
Nov 24 09:26:16 compute-1 sudo[75044]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:26:16 compute-1 sudo[75044]: pam_unix(sudo:session): session closed for user root
Nov 24 09:26:16 compute-1 sudo[75069]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-84a084c3-61a7-5de7-8207-1f88efa59a64
Nov 24 09:26:16 compute-1 sudo[75069]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:26:16 compute-1 sudo[75069]: pam_unix(sudo:session): session closed for user root
Nov 24 09:26:16 compute-1 sudo[75094]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-84a084c3-61a7-5de7-8207-1f88efa59a64/var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/config/ceph.client.admin.keyring.new
Nov 24 09:26:16 compute-1 sudo[75094]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:26:16 compute-1 sudo[75094]: pam_unix(sudo:session): session closed for user root
Nov 24 09:26:16 compute-1 sudo[75142]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-84a084c3-61a7-5de7-8207-1f88efa59a64/var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/config/ceph.client.admin.keyring.new
Nov 24 09:26:16 compute-1 sudo[75142]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:26:16 compute-1 sudo[75142]: pam_unix(sudo:session): session closed for user root
Nov 24 09:26:16 compute-1 sudo[75167]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-84a084c3-61a7-5de7-8207-1f88efa59a64/var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/config/ceph.client.admin.keyring.new
Nov 24 09:26:16 compute-1 sudo[75167]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:26:16 compute-1 sudo[75167]: pam_unix(sudo:session): session closed for user root
Nov 24 09:26:16 compute-1 sudo[75192]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-84a084c3-61a7-5de7-8207-1f88efa59a64/var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/config/ceph.client.admin.keyring.new /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/config/ceph.client.admin.keyring
Nov 24 09:26:16 compute-1 sudo[75192]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:26:16 compute-1 sudo[75192]: pam_unix(sudo:session): session closed for user root
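With the conf and keyring distribution finished, note the one deliberate difference in the chmod steps: both ceph.conf copies are published mode 644, while both ceph.client.admin.keyring copies are tightened to 600 before their mv, so the secret never sits world-readable at its final path. A quick check of the installed files (paths from the log):

```python
# Verify the modes the sequence above should have left behind.
import os
import stat

for path, want in [
    ("/etc/ceph/ceph.conf", 0o644),
    ("/etc/ceph/ceph.client.admin.keyring", 0o600),
]:
    mode = stat.S_IMODE(os.stat(path).st_mode)
    print(path, oct(mode), "ok" if mode == want else "UNEXPECTED")
```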
Nov 24 09:26:16 compute-1 sudo[75217]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 09:26:16 compute-1 sudo[75217]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:26:17 compute-1 sudo[75217]: pam_unix(sudo:session): session closed for user root
Nov 24 09:26:17 compute-1 sudo[75242]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 _orch deploy --fsid 84a084c3-61a7-5de7-8207-1f88efa59a64
Nov 24 09:26:17 compute-1 sudo[75242]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:26:17 compute-1 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 24 09:26:17 compute-1 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 24 09:26:17 compute-1 podman[75306]: 2025-11-24 09:26:17.40172466 +0000 UTC m=+0.037586445 container create 0b0431ffa48b4b6a89975a78682580fd250caaad54580e3cfedf55ea68133614 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_aryabhata, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1)
Nov 24 09:26:17 compute-1 systemd[1]: Started libpod-conmon-0b0431ffa48b4b6a89975a78682580fd250caaad54580e3cfedf55ea68133614.scope.
Nov 24 09:26:17 compute-1 systemd[1]: Started libcrun container.
Nov 24 09:26:17 compute-1 podman[75306]: 2025-11-24 09:26:17.459187163 +0000 UTC m=+0.095048968 container init 0b0431ffa48b4b6a89975a78682580fd250caaad54580e3cfedf55ea68133614 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_aryabhata, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 24 09:26:17 compute-1 podman[75306]: 2025-11-24 09:26:17.465206374 +0000 UTC m=+0.101068169 container start 0b0431ffa48b4b6a89975a78682580fd250caaad54580e3cfedf55ea68133614 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_aryabhata, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2)
Nov 24 09:26:17 compute-1 podman[75306]: 2025-11-24 09:26:17.467933903 +0000 UTC m=+0.103795718 container attach 0b0431ffa48b4b6a89975a78682580fd250caaad54580e3cfedf55ea68133614 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_aryabhata, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 24 09:26:17 compute-1 eager_aryabhata[75322]: 167 167
Nov 24 09:26:17 compute-1 systemd[1]: libpod-0b0431ffa48b4b6a89975a78682580fd250caaad54580e3cfedf55ea68133614.scope: Deactivated successfully.
Nov 24 09:26:17 compute-1 podman[75306]: 2025-11-24 09:26:17.469055671 +0000 UTC m=+0.104917466 container died 0b0431ffa48b4b6a89975a78682580fd250caaad54580e3cfedf55ea68133614 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_aryabhata, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 09:26:17 compute-1 podman[75306]: 2025-11-24 09:26:17.382942389 +0000 UTC m=+0.018804204 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 09:26:17 compute-1 podman[75306]: 2025-11-24 09:26:17.505856055 +0000 UTC m=+0.141717900 container remove 0b0431ffa48b4b6a89975a78682580fd250caaad54580e3cfedf55ea68133614 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_aryabhata, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, ceph=True)
Nov 24 09:26:17 compute-1 systemd[1]: libpod-conmon-0b0431ffa48b4b6a89975a78682580fd250caaad54580e3cfedf55ea68133614.scope: Deactivated successfully.
Nov 24 09:26:17 compute-1 systemd[1]: Reloading.
Nov 24 09:26:17 compute-1 systemd-rc-local-generator[75364]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 09:26:17 compute-1 systemd-sysv-generator[75368]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 09:26:17 compute-1 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 24 09:26:17 compute-1 systemd[1]: Reloading.
Nov 24 09:26:17 compute-1 systemd-rc-local-generator[75401]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 09:26:17 compute-1 systemd-sysv-generator[75406]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 09:26:18 compute-1 systemd[1]: Reached target All Ceph clusters and services.
Nov 24 09:26:18 compute-1 systemd[1]: Reloading.
Nov 24 09:26:18 compute-1 systemd-rc-local-generator[75439]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 09:26:18 compute-1 systemd-sysv-generator[75443]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 09:26:18 compute-1 systemd[1]: Reached target Ceph cluster 84a084c3-61a7-5de7-8207-1f88efa59a64.
Nov 24 09:26:18 compute-1 systemd[1]: Reloading.
Nov 24 09:26:18 compute-1 systemd-rc-local-generator[75481]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 09:26:18 compute-1 systemd-sysv-generator[75484]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 09:26:18 compute-1 systemd[1]: Reloading.
Nov 24 09:26:18 compute-1 systemd-sysv-generator[75522]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 09:26:18 compute-1 systemd-rc-local-generator[75518]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 09:26:18 compute-1 systemd[1]: Created slice Slice /system/ceph-84a084c3-61a7-5de7-8207-1f88efa59a64.
Nov 24 09:26:18 compute-1 systemd[1]: Reached target System Time Set.
Nov 24 09:26:18 compute-1 systemd[1]: Reached target System Time Synchronized.
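The repeated "Reloading." entries are daemon-reloads after `_orch deploy` drops new unit files: ceph.target ("All Ceph clusters and services") sits above a per-cluster ceph-<fsid>.target, and individual daemons run as instances of a templated unit, so the crash agent started next is ceph-84a084c3-61a7-5de7-8207-1f88efa59a64@crash.compute-1.service. One way to list what was just created, assuming the standard cephadm ceph-<fsid>@<daemon>.<id>.service naming:

```python
# List the per-cluster units cephadm just installed (the
# ceph-<fsid>@<daemon>.<id>.service naming is the assumed cephadm layout).
import subprocess

FSID = "84a084c3-61a7-5de7-8207-1f88efa59a64"
subprocess.run(
    ["systemctl", "list-units", "--all", "ceph-" + FSID + "@*"],
    check=True,
)
```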
Nov 24 09:26:18 compute-1 systemd[1]: Starting Ceph crash.compute-1 for 84a084c3-61a7-5de7-8207-1f88efa59a64...
Nov 24 09:26:18 compute-1 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 24 09:26:18 compute-1 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 24 09:26:19 compute-1 podman[75577]: 2025-11-24 09:26:19.135805495 +0000 UTC m=+0.056849569 container create fca3d6a645ca50145f34396c21cf8798c75622ec7e27bb7d7b9d2df471762abc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-crash-compute-1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 09:26:19 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e2414924c115c84265443f467477ad00ad6ae5d6bfa362464a8ef018c5014825/merged/etc/ceph/ceph.client.crash.compute-1.keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 09:26:19 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e2414924c115c84265443f467477ad00ad6ae5d6bfa362464a8ef018c5014825/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 09:26:19 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e2414924c115c84265443f467477ad00ad6ae5d6bfa362464a8ef018c5014825/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 09:26:19 compute-1 podman[75577]: 2025-11-24 09:26:19.197127225 +0000 UTC m=+0.118171329 container init fca3d6a645ca50145f34396c21cf8798c75622ec7e27bb7d7b9d2df471762abc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-crash-compute-1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Nov 24 09:26:19 compute-1 podman[75577]: 2025-11-24 09:26:19.104355475 +0000 UTC m=+0.025399629 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 09:26:19 compute-1 podman[75577]: 2025-11-24 09:26:19.208245554 +0000 UTC m=+0.129289638 container start fca3d6a645ca50145f34396c21cf8798c75622ec7e27bb7d7b9d2df471762abc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-crash-compute-1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_REF=squid)
Nov 24 09:26:19 compute-1 bash[75577]: fca3d6a645ca50145f34396c21cf8798c75622ec7e27bb7d7b9d2df471762abc
Nov 24 09:26:19 compute-1 systemd[1]: Started Ceph crash.compute-1 for 84a084c3-61a7-5de7-8207-1f88efa59a64.
Nov 24 09:26:19 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-crash-compute-1[75592]: INFO:ceph-crash:pinging cluster to exercise our key
Nov 24 09:26:19 compute-1 sudo[75242]: pam_unix(sudo:session): session closed for user root
Nov 24 09:26:19 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-crash-compute-1[75592]: 2025-11-24T09:26:19.383+0000 7fa8c0fb7640 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
Nov 24 09:26:19 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-crash-compute-1[75592]: 2025-11-24T09:26:19.383+0000 7fa8c0fb7640 -1 AuthRegistry(0x7fa8bc069490) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling cephx
Nov 24 09:26:19 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-crash-compute-1[75592]: 2025-11-24T09:26:19.384+0000 7fa8c0fb7640 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
Nov 24 09:26:19 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-crash-compute-1[75592]: 2025-11-24T09:26:19.384+0000 7fa8c0fb7640 -1 AuthRegistry(0x7fa8c0fb5ff0) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling cephx
Nov 24 09:26:19 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-crash-compute-1[75592]: 2025-11-24T09:26:19.386+0000 7fa8ba575640 -1 monclient(hunting): handle_auth_bad_method server allowed_methods [2] but i only support [1]
Nov 24 09:26:19 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-crash-compute-1[75592]: 2025-11-24T09:26:19.386+0000 7fa8c0fb7640 -1 monclient: authenticate NOTE: no keyring found; disabled cephx authentication
Nov 24 09:26:19 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-crash-compute-1[75592]: [errno 13] RADOS permission denied (error connecting to the cluster)
Nov 24 09:26:19 compute-1 sudo[75599]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 09:26:19 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-crash-compute-1[75592]: INFO:ceph-crash:monitoring path /var/lib/ceph/crash, delay 600s
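The crash agent's first act is to ping the cluster to exercise its key, and here that ping fails: no keyring exists at any of the four default admin paths inside the container, so the client disables cephx, the monitor allows only auth method 2 (cephx) while the client can now offer only method 1 (none), and the connection is refused with errno 13 before the agent settles into watching /var/lib/ceph/crash on its 600s delay. Note the container does have its own key (the xfs line above shows /etc/ceph/ceph.client.crash.compute-1.keyring bind-mounted in); the errors come from probing the admin-keyring search paths. A python3-rados sketch of the same ping with the identity named explicitly:

```python
# Reproduce the "exercise our key" ping with an explicit identity instead of
# the default admin-keyring search. Uses the python3-rados API; the keyring
# path is the one bind-mounted into the crash container above.
import rados

cluster = rados.Rados(
    conffile="/etc/ceph/ceph.conf",
    name="client.crash.compute-1",
    conf={"keyring": "/etc/ceph/ceph.client.crash.compute-1.keyring"},
)
cluster.connect()          # raises rados.Error (e.g. errno 13) on a bad key
print(cluster.get_fsid())  # prints the cluster fsid on success
cluster.shutdown()
```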
Nov 24 09:26:19 compute-1 sudo[75599]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:26:19 compute-1 sudo[75599]: pam_unix(sudo:session): session closed for user root
Nov 24 09:26:19 compute-1 sudo[75634]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 84a084c3-61a7-5de7-8207-1f88efa59a64 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Nov 24 09:26:19 compute-1 sudo[75634]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:26:19 compute-1 podman[75698]: 2025-11-24 09:26:19.896013529 +0000 UTC m=+0.051540865 container create e45eea1b0c5479d823be51e52081bfe11a6fe8a21c56f585f1ebd4774698b781 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=kind_lewin, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 24 09:26:19 compute-1 systemd[1]: Started libpod-conmon-e45eea1b0c5479d823be51e52081bfe11a6fe8a21c56f585f1ebd4774698b781.scope.
Nov 24 09:26:19 compute-1 podman[75698]: 2025-11-24 09:26:19.873944675 +0000 UTC m=+0.029472051 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 09:26:19 compute-1 systemd[1]: Started libcrun container.
Nov 24 09:26:19 compute-1 podman[75698]: 2025-11-24 09:26:19.995936779 +0000 UTC m=+0.151464125 container init e45eea1b0c5479d823be51e52081bfe11a6fe8a21c56f585f1ebd4774698b781 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=kind_lewin, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 09:26:20 compute-1 podman[75698]: 2025-11-24 09:26:20.013471349 +0000 UTC m=+0.168998705 container start e45eea1b0c5479d823be51e52081bfe11a6fe8a21c56f585f1ebd4774698b781 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=kind_lewin, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid)
Nov 24 09:26:20 compute-1 podman[75698]: 2025-11-24 09:26:20.017816448 +0000 UTC m=+0.173343834 container attach e45eea1b0c5479d823be51e52081bfe11a6fe8a21c56f585f1ebd4774698b781 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=kind_lewin, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.40.1)
Nov 24 09:26:20 compute-1 kind_lewin[75714]: 167 167
Nov 24 09:26:20 compute-1 systemd[1]: libpod-e45eea1b0c5479d823be51e52081bfe11a6fe8a21c56f585f1ebd4774698b781.scope: Deactivated successfully.
Nov 24 09:26:20 compute-1 podman[75698]: 2025-11-24 09:26:20.024746622 +0000 UTC m=+0.180273968 container died e45eea1b0c5479d823be51e52081bfe11a6fe8a21c56f585f1ebd4774698b781 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=kind_lewin, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 09:26:20 compute-1 systemd[1]: var-lib-containers-storage-overlay-974caae571d38a47cfbcaa396eda3a2c08174dfcdd5db0786b9dfebf4f1aa797-merged.mount: Deactivated successfully.
Nov 24 09:26:20 compute-1 podman[75698]: 2025-11-24 09:26:20.057110896 +0000 UTC m=+0.212638242 container remove e45eea1b0c5479d823be51e52081bfe11a6fe8a21c56f585f1ebd4774698b781 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=kind_lewin, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Nov 24 09:26:20 compute-1 systemd[1]: libpod-conmon-e45eea1b0c5479d823be51e52081bfe11a6fe8a21c56f585f1ebd4774698b781.scope: Deactivated successfully.
Nov 24 09:26:20 compute-1 podman[75739]: 2025-11-24 09:26:20.254507644 +0000 UTC m=+0.051955947 container create 0885db390a813aa887e3f326c869d1778c785c619223ce958e44394f5d02bf7d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_ellis, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325)
Nov 24 09:26:20 compute-1 systemd[1]: Started libpod-conmon-0885db390a813aa887e3f326c869d1778c785c619223ce958e44394f5d02bf7d.scope.
Nov 24 09:26:20 compute-1 systemd[1]: Started libcrun container.
Nov 24 09:26:20 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f2ebb3cceb4bf91794d4f6df9aa4625ea9c84d0826f4737606000ffe44b5e21d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 09:26:20 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f2ebb3cceb4bf91794d4f6df9aa4625ea9c84d0826f4737606000ffe44b5e21d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 09:26:20 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f2ebb3cceb4bf91794d4f6df9aa4625ea9c84d0826f4737606000ffe44b5e21d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 09:26:20 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f2ebb3cceb4bf91794d4f6df9aa4625ea9c84d0826f4737606000ffe44b5e21d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 09:26:20 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f2ebb3cceb4bf91794d4f6df9aa4625ea9c84d0826f4737606000ffe44b5e21d/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 09:26:20 compute-1 podman[75739]: 2025-11-24 09:26:20.2360874 +0000 UTC m=+0.033535723 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 09:26:20 compute-1 podman[75739]: 2025-11-24 09:26:20.34077266 +0000 UTC m=+0.138220993 container init 0885db390a813aa887e3f326c869d1778c785c619223ce958e44394f5d02bf7d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_ellis, CEPH_REF=squid, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325)
Nov 24 09:26:20 compute-1 podman[75739]: 2025-11-24 09:26:20.358519986 +0000 UTC m=+0.155968279 container start 0885db390a813aa887e3f326c869d1778c785c619223ce958e44394f5d02bf7d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_ellis, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 24 09:26:20 compute-1 podman[75739]: 2025-11-24 09:26:20.362198768 +0000 UTC m=+0.159647071 container attach 0885db390a813aa887e3f326c869d1778c785c619223ce958e44394f5d02bf7d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_ellis, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325)
Nov 24 09:26:20 compute-1 adoring_ellis[75756]: --> passed data devices: 0 physical, 1 LVM
Nov 24 09:26:20 compute-1 adoring_ellis[75756]: Running command: /usr/bin/ceph-authtool --gen-print-key
Nov 24 09:26:20 compute-1 adoring_ellis[75756]: Running command: /usr/bin/ceph-authtool --gen-print-key
Nov 24 09:26:20 compute-1 adoring_ellis[75756]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new d66edcc6-663b-43db-9331-33ccbb320884
Nov 24 09:26:21 compute-1 adoring_ellis[75756]: Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-1
Nov 24 09:26:21 compute-1 adoring_ellis[75756]: Running command: /usr/bin/chown -h ceph:ceph /dev/ceph_vg0/ceph_lv0
Nov 24 09:26:21 compute-1 adoring_ellis[75756]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Nov 24 09:26:21 compute-1 lvm[75817]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 24 09:26:21 compute-1 lvm[75817]: VG ceph_vg0 finished
Nov 24 09:26:21 compute-1 adoring_ellis[75756]: Running command: /usr/bin/ln -s /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-1/block
Nov 24 09:26:21 compute-1 adoring_ellis[75756]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-1/activate.monmap
Nov 24 09:26:21 compute-1 adoring_ellis[75756]:  stderr: got monmap epoch 1
Nov 24 09:26:21 compute-1 adoring_ellis[75756]: --> Creating keyring file for osd.1
Nov 24 09:26:21 compute-1 adoring_ellis[75756]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1/keyring
Nov 24 09:26:21 compute-1 adoring_ellis[75756]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1/
Nov 24 09:26:21 compute-1 adoring_ellis[75756]: Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 1 --monmap /var/lib/ceph/osd/ceph-1/activate.monmap --keyfile - --osdspec-affinity default_drive_group --osd-data /var/lib/ceph/osd/ceph-1/ --osd-uuid d66edcc6-663b-43db-9331-33ccbb320884 --setuser ceph --setgroup ceph
Nov 24 09:26:24 compute-1 adoring_ellis[75756]:  stderr: 2025-11-24T09:26:21.902+0000 7f08c6f48740 -1 bluestore(/var/lib/ceph/osd/ceph-1//block) No valid bdev label found
Nov 24 09:26:24 compute-1 adoring_ellis[75756]:  stderr: 2025-11-24T09:26:22.175+0000 7f08c6f48740 -1 bluestore(/var/lib/ceph/osd/ceph-1/) _read_fsid unparsable uuid
Nov 24 09:26:24 compute-1 adoring_ellis[75756]: --> ceph-volume lvm prepare successful for: ceph_vg0/ceph_lv0
Nov 24 09:26:24 compute-1 adoring_ellis[75756]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Nov 24 09:26:24 compute-1 adoring_ellis[75756]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg0/ceph_lv0 --path /var/lib/ceph/osd/ceph-1 --no-mon-config
Nov 24 09:26:25 compute-1 adoring_ellis[75756]: Running command: /usr/bin/ln -snf /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-1/block
Nov 24 09:26:25 compute-1 adoring_ellis[75756]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-1/block
Nov 24 09:26:25 compute-1 adoring_ellis[75756]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Nov 24 09:26:25 compute-1 adoring_ellis[75756]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Nov 24 09:26:25 compute-1 adoring_ellis[75756]: --> ceph-volume lvm activate successful for osd ID: 1
Nov 24 09:26:25 compute-1 adoring_ellis[75756]: --> ceph-volume lvm create successful for: ceph_vg0/ceph_lv0
Nov 24 09:26:25 compute-1 systemd[1]: libpod-0885db390a813aa887e3f326c869d1778c785c619223ce958e44394f5d02bf7d.scope: Deactivated successfully.
Nov 24 09:26:25 compute-1 podman[75739]: 2025-11-24 09:26:25.110048768 +0000 UTC m=+4.907497061 container died 0885db390a813aa887e3f326c869d1778c785c619223ce958e44394f5d02bf7d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_ellis, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 24 09:26:25 compute-1 systemd[1]: libpod-0885db390a813aa887e3f326c869d1778c785c619223ce958e44394f5d02bf7d.scope: Consumed 2.026s CPU time.
Nov 24 09:26:25 compute-1 systemd[1]: var-lib-containers-storage-overlay-f2ebb3cceb4bf91794d4f6df9aa4625ea9c84d0826f4737606000ffe44b5e21d-merged.mount: Deactivated successfully.
Nov 24 09:26:25 compute-1 podman[75739]: 2025-11-24 09:26:25.168584198 +0000 UTC m=+4.966032491 container remove 0885db390a813aa887e3f326c869d1778c785c619223ce958e44394f5d02bf7d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_ellis, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 24 09:26:25 compute-1 systemd[1]: libpod-conmon-0885db390a813aa887e3f326c869d1778c785c619223ce958e44394f5d02bf7d.scope: Deactivated successfully.
Nov 24 09:26:25 compute-1 sudo[75634]: pam_unix(sudo:session): session closed for user root
Nov 24 09:26:25 compute-1 sudo[76749]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 09:26:25 compute-1 sudo[76749]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:26:25 compute-1 sudo[76749]: pam_unix(sudo:session): session closed for user root
Nov 24 09:26:25 compute-1 sudo[76774]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 84a084c3-61a7-5de7-8207-1f88efa59a64 -- lvm list --format json
Nov 24 09:26:25 compute-1 sudo[76774]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:26:25 compute-1 podman[76839]: 2025-11-24 09:26:25.760539817 +0000 UTC m=+0.032533848 container create 974d887d8ec63bb3f576a65c45805ee3c08c1a81617b456bcb3f3433780984e1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_brattain, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 09:26:25 compute-1 systemd[1]: Started libpod-conmon-974d887d8ec63bb3f576a65c45805ee3c08c1a81617b456bcb3f3433780984e1.scope.
Nov 24 09:26:25 compute-1 systemd[1]: Started libcrun container.
Nov 24 09:26:25 compute-1 podman[76839]: 2025-11-24 09:26:25.84072928 +0000 UTC m=+0.112723331 container init 974d887d8ec63bb3f576a65c45805ee3c08c1a81617b456bcb3f3433780984e1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_brattain, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid)
Nov 24 09:26:25 compute-1 podman[76839]: 2025-11-24 09:26:25.746461643 +0000 UTC m=+0.018455714 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 09:26:25 compute-1 podman[76839]: 2025-11-24 09:26:25.852912037 +0000 UTC m=+0.124906078 container start 974d887d8ec63bb3f576a65c45805ee3c08c1a81617b456bcb3f3433780984e1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_brattain, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 09:26:25 compute-1 podman[76839]: 2025-11-24 09:26:25.855965703 +0000 UTC m=+0.127959774 container attach 974d887d8ec63bb3f576a65c45805ee3c08c1a81617b456bcb3f3433780984e1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_brattain, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 24 09:26:25 compute-1 practical_brattain[76855]: 167 167
Nov 24 09:26:25 compute-1 systemd[1]: libpod-974d887d8ec63bb3f576a65c45805ee3c08c1a81617b456bcb3f3433780984e1.scope: Deactivated successfully.
Nov 24 09:26:25 compute-1 podman[76839]: 2025-11-24 09:26:25.861181615 +0000 UTC m=+0.133175656 container died 974d887d8ec63bb3f576a65c45805ee3c08c1a81617b456bcb3f3433780984e1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_brattain, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=squid)
Nov 24 09:26:25 compute-1 systemd[1]: var-lib-containers-storage-overlay-4fb3d7dac866a2dba0e4a64b776bfd371d7924c57ac7b3e8f7bed9f153e797ec-merged.mount: Deactivated successfully.
Nov 24 09:26:25 compute-1 podman[76839]: 2025-11-24 09:26:25.896528782 +0000 UTC m=+0.168522823 container remove 974d887d8ec63bb3f576a65c45805ee3c08c1a81617b456bcb3f3433780984e1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_brattain, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 24 09:26:25 compute-1 systemd[1]: libpod-conmon-974d887d8ec63bb3f576a65c45805ee3c08c1a81617b456bcb3f3433780984e1.scope: Deactivated successfully.
Nov 24 09:26:26 compute-1 podman[76877]: 2025-11-24 09:26:26.103089261 +0000 UTC m=+0.049551326 container create a9a1f0222160b2ea04a5229a190df843b7554ad8f183807ee1589b169e00c824 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_faraday, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS)
Nov 24 09:26:26 compute-1 systemd[1]: Started libpod-conmon-a9a1f0222160b2ea04a5229a190df843b7554ad8f183807ee1589b169e00c824.scope.
Nov 24 09:26:26 compute-1 systemd[1]: Started libcrun container.
Nov 24 09:26:26 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c0586c537b675c3a9c16f2ec338bd1a9e9fb6ae1b9c225cac19b32026ee284d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 09:26:26 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c0586c537b675c3a9c16f2ec338bd1a9e9fb6ae1b9c225cac19b32026ee284d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 09:26:26 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c0586c537b675c3a9c16f2ec338bd1a9e9fb6ae1b9c225cac19b32026ee284d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 09:26:26 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c0586c537b675c3a9c16f2ec338bd1a9e9fb6ae1b9c225cac19b32026ee284d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 09:26:26 compute-1 podman[76877]: 2025-11-24 09:26:26.085215622 +0000 UTC m=+0.031677727 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 09:26:26 compute-1 podman[76877]: 2025-11-24 09:26:26.184977887 +0000 UTC m=+0.131439972 container init a9a1f0222160b2ea04a5229a190df843b7554ad8f183807ee1589b169e00c824 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_faraday, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 09:26:26 compute-1 podman[76877]: 2025-11-24 09:26:26.192745823 +0000 UTC m=+0.139207888 container start a9a1f0222160b2ea04a5229a190df843b7554ad8f183807ee1589b169e00c824 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_faraday, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 24 09:26:26 compute-1 podman[76877]: 2025-11-24 09:26:26.195388219 +0000 UTC m=+0.141850284 container attach a9a1f0222160b2ea04a5229a190df843b7554ad8f183807ee1589b169e00c824 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_faraday, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 09:26:26 compute-1 lucid_faraday[76893]: {
Nov 24 09:26:26 compute-1 lucid_faraday[76893]:     "1": [
Nov 24 09:26:26 compute-1 lucid_faraday[76893]:         {
Nov 24 09:26:26 compute-1 lucid_faraday[76893]:             "devices": [
Nov 24 09:26:26 compute-1 lucid_faraday[76893]:                 "/dev/loop3"
Nov 24 09:26:26 compute-1 lucid_faraday[76893]:             ],
Nov 24 09:26:26 compute-1 lucid_faraday[76893]:             "lv_name": "ceph_lv0",
Nov 24 09:26:26 compute-1 lucid_faraday[76893]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 09:26:26 compute-1 lucid_faraday[76893]:             "lv_size": "21470642176",
Nov 24 09:26:26 compute-1 lucid_faraday[76893]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=hKqi78-1PuH-NO5r-o51i-OjPb-2kPE-hGsfdb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=84a084c3-61a7-5de7-8207-1f88efa59a64,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d66edcc6-663b-43db-9331-33ccbb320884,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Nov 24 09:26:26 compute-1 lucid_faraday[76893]:             "lv_uuid": "hKqi78-1PuH-NO5r-o51i-OjPb-2kPE-hGsfdb",
Nov 24 09:26:26 compute-1 lucid_faraday[76893]:             "name": "ceph_lv0",
Nov 24 09:26:26 compute-1 lucid_faraday[76893]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 09:26:26 compute-1 lucid_faraday[76893]:             "tags": {
Nov 24 09:26:26 compute-1 lucid_faraday[76893]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 24 09:26:26 compute-1 lucid_faraday[76893]:                 "ceph.block_uuid": "hKqi78-1PuH-NO5r-o51i-OjPb-2kPE-hGsfdb",
Nov 24 09:26:26 compute-1 lucid_faraday[76893]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 09:26:26 compute-1 lucid_faraday[76893]:                 "ceph.cluster_fsid": "84a084c3-61a7-5de7-8207-1f88efa59a64",
Nov 24 09:26:26 compute-1 lucid_faraday[76893]:                 "ceph.cluster_name": "ceph",
Nov 24 09:26:26 compute-1 lucid_faraday[76893]:                 "ceph.crush_device_class": "",
Nov 24 09:26:26 compute-1 lucid_faraday[76893]:                 "ceph.encrypted": "0",
Nov 24 09:26:26 compute-1 lucid_faraday[76893]:                 "ceph.osd_fsid": "d66edcc6-663b-43db-9331-33ccbb320884",
Nov 24 09:26:26 compute-1 lucid_faraday[76893]:                 "ceph.osd_id": "1",
Nov 24 09:26:26 compute-1 lucid_faraday[76893]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 09:26:26 compute-1 lucid_faraday[76893]:                 "ceph.type": "block",
Nov 24 09:26:26 compute-1 lucid_faraday[76893]:                 "ceph.vdo": "0",
Nov 24 09:26:26 compute-1 lucid_faraday[76893]:                 "ceph.with_tpm": "0"
Nov 24 09:26:26 compute-1 lucid_faraday[76893]:             },
Nov 24 09:26:26 compute-1 lucid_faraday[76893]:             "type": "block",
Nov 24 09:26:26 compute-1 lucid_faraday[76893]:             "vg_name": "ceph_vg0"
Nov 24 09:26:26 compute-1 lucid_faraday[76893]:         }
Nov 24 09:26:26 compute-1 lucid_faraday[76893]:     ]
Nov 24 09:26:26 compute-1 lucid_faraday[76893]: }
Nov 24 09:26:26 compute-1 systemd[1]: libpod-a9a1f0222160b2ea04a5229a190df843b7554ad8f183807ee1589b169e00c824.scope: Deactivated successfully.
Nov 24 09:26:26 compute-1 podman[76877]: 2025-11-24 09:26:26.525325605 +0000 UTC m=+0.471787680 container died a9a1f0222160b2ea04a5229a190df843b7554ad8f183807ee1589b169e00c824 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_faraday, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 24 09:26:26 compute-1 systemd[1]: var-lib-containers-storage-overlay-7c0586c537b675c3a9c16f2ec338bd1a9e9fb6ae1b9c225cac19b32026ee284d-merged.mount: Deactivated successfully.
Nov 24 09:26:26 compute-1 podman[76877]: 2025-11-24 09:26:26.569248169 +0000 UTC m=+0.515710234 container remove a9a1f0222160b2ea04a5229a190df843b7554ad8f183807ee1589b169e00c824 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_faraday, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default)
Nov 24 09:26:26 compute-1 systemd[1]: libpod-conmon-a9a1f0222160b2ea04a5229a190df843b7554ad8f183807ee1589b169e00c824.scope: Deactivated successfully.
Nov 24 09:26:26 compute-1 sudo[76774]: pam_unix(sudo:session): session closed for user root
Nov 24 09:26:26 compute-1 sudo[76913]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 09:26:26 compute-1 sudo[76913]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:26:26 compute-1 sudo[76913]: pam_unix(sudo:session): session closed for user root
Nov 24 09:26:26 compute-1 sudo[76938]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 _orch deploy --fsid 84a084c3-61a7-5de7-8207-1f88efa59a64
Nov 24 09:26:26 compute-1 sudo[76938]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:26:27 compute-1 podman[77003]: 2025-11-24 09:26:27.195638612 +0000 UTC m=+0.055391633 container create 303874641ec78c5d48a6478f637f8ddfad9ca6205fc4022fb462d019c147a869 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_stonebraker, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.build-date=20250325)
Nov 24 09:26:27 compute-1 systemd[1]: Started libpod-conmon-303874641ec78c5d48a6478f637f8ddfad9ca6205fc4022fb462d019c147a869.scope.
Nov 24 09:26:27 compute-1 podman[77003]: 2025-11-24 09:26:27.158751346 +0000 UTC m=+0.018504387 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 09:26:27 compute-1 systemd[1]: Started libcrun container.
Nov 24 09:26:27 compute-1 podman[77003]: 2025-11-24 09:26:27.290574177 +0000 UTC m=+0.150327218 container init 303874641ec78c5d48a6478f637f8ddfad9ca6205fc4022fb462d019c147a869 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_stonebraker, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 24 09:26:27 compute-1 podman[77003]: 2025-11-24 09:26:27.299999724 +0000 UTC m=+0.159752755 container start 303874641ec78c5d48a6478f637f8ddfad9ca6205fc4022fb462d019c147a869 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_stonebraker, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, OSD_FLAVOR=default)
Nov 24 09:26:27 compute-1 nifty_stonebraker[77019]: 167 167
Nov 24 09:26:27 compute-1 systemd[1]: libpod-303874641ec78c5d48a6478f637f8ddfad9ca6205fc4022fb462d019c147a869.scope: Deactivated successfully.
Nov 24 09:26:27 compute-1 podman[77003]: 2025-11-24 09:26:27.304528687 +0000 UTC m=+0.164281708 container attach 303874641ec78c5d48a6478f637f8ddfad9ca6205fc4022fb462d019c147a869 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_stonebraker, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=squid)
Nov 24 09:26:27 compute-1 podman[77003]: 2025-11-24 09:26:27.304882386 +0000 UTC m=+0.164635407 container died 303874641ec78c5d48a6478f637f8ddfad9ca6205fc4022fb462d019c147a869 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_stonebraker, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 09:26:27 compute-1 systemd[1]: var-lib-containers-storage-overlay-fdd98df389cd13536f680e84948561840a99ddb1a03038644df2707ad371a1b0-merged.mount: Deactivated successfully.
Nov 24 09:26:27 compute-1 podman[77003]: 2025-11-24 09:26:27.334171691 +0000 UTC m=+0.193924722 container remove 303874641ec78c5d48a6478f637f8ddfad9ca6205fc4022fb462d019c147a869 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_stonebraker, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1)
Nov 24 09:26:27 compute-1 systemd[1]: libpod-conmon-303874641ec78c5d48a6478f637f8ddfad9ca6205fc4022fb462d019c147a869.scope: Deactivated successfully.
Nov 24 09:26:27 compute-1 podman[77049]: 2025-11-24 09:26:27.593176227 +0000 UTC m=+0.044935420 container create eff30742f34dfaf822d4b726018bbf4f9eacfc34f05767b771dde736c2f63c07 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-osd-1-activate-test, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 09:26:27 compute-1 systemd[1]: Started libpod-conmon-eff30742f34dfaf822d4b726018bbf4f9eacfc34f05767b771dde736c2f63c07.scope.
Nov 24 09:26:27 compute-1 podman[77049]: 2025-11-24 09:26:27.574391134 +0000 UTC m=+0.026150307 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 09:26:27 compute-1 systemd[1]: Started libcrun container.
Nov 24 09:26:27 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/902efd4e410547f0ca0ccb2ccf00893b983276228e51421bc6bf120c16741a8a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 09:26:27 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/902efd4e410547f0ca0ccb2ccf00893b983276228e51421bc6bf120c16741a8a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 09:26:27 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/902efd4e410547f0ca0ccb2ccf00893b983276228e51421bc6bf120c16741a8a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 09:26:27 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/902efd4e410547f0ca0ccb2ccf00893b983276228e51421bc6bf120c16741a8a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 09:26:27 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/902efd4e410547f0ca0ccb2ccf00893b983276228e51421bc6bf120c16741a8a/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff)
Nov 24 09:26:27 compute-1 podman[77049]: 2025-11-24 09:26:27.699883057 +0000 UTC m=+0.151642230 container init eff30742f34dfaf822d4b726018bbf4f9eacfc34f05767b771dde736c2f63c07 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-osd-1-activate-test, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 09:26:27 compute-1 podman[77049]: 2025-11-24 09:26:27.711480628 +0000 UTC m=+0.163239781 container start eff30742f34dfaf822d4b726018bbf4f9eacfc34f05767b771dde736c2f63c07 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-osd-1-activate-test, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1, ceph=True, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 09:26:27 compute-1 podman[77049]: 2025-11-24 09:26:27.717255804 +0000 UTC m=+0.169014977 container attach eff30742f34dfaf822d4b726018bbf4f9eacfc34f05767b771dde736c2f63c07 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-osd-1-activate-test, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Nov 24 09:26:27 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-osd-1-activate-test[77065]: usage: ceph-volume activate [-h] [--osd-id OSD_ID] [--osd-uuid OSD_FSID]
Nov 24 09:26:27 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-osd-1-activate-test[77065]:                             [--no-systemd] [--no-tmpfs]
Nov 24 09:26:27 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-osd-1-activate-test[77065]: ceph-volume activate: error: unrecognized arguments: --bad-option
Nov 24 09:26:27 compute-1 systemd[1]: libpod-eff30742f34dfaf822d4b726018bbf4f9eacfc34f05767b771dde736c2f63c07.scope: Deactivated successfully.
Nov 24 09:26:27 compute-1 podman[77049]: 2025-11-24 09:26:27.892783262 +0000 UTC m=+0.344542455 container died eff30742f34dfaf822d4b726018bbf4f9eacfc34f05767b771dde736c2f63c07 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-osd-1-activate-test, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 24 09:26:27 compute-1 systemd[1]: var-lib-containers-storage-overlay-902efd4e410547f0ca0ccb2ccf00893b983276228e51421bc6bf120c16741a8a-merged.mount: Deactivated successfully.
Nov 24 09:26:27 compute-1 podman[77049]: 2025-11-24 09:26:27.94445899 +0000 UTC m=+0.396218153 container remove eff30742f34dfaf822d4b726018bbf4f9eacfc34f05767b771dde736c2f63c07 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-osd-1-activate-test, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 09:26:27 compute-1 systemd[1]: libpod-conmon-eff30742f34dfaf822d4b726018bbf4f9eacfc34f05767b771dde736c2f63c07.scope: Deactivated successfully.
Nov 24 09:26:28 compute-1 systemd[1]: Reloading.
Nov 24 09:26:28 compute-1 systemd-rc-local-generator[77127]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 09:26:28 compute-1 systemd-sysv-generator[77130]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 09:26:28 compute-1 systemd[1]: Reloading.
Nov 24 09:26:28 compute-1 systemd-rc-local-generator[77167]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 09:26:28 compute-1 systemd-sysv-generator[77171]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 09:26:28 compute-1 systemd[1]: Starting Ceph osd.1 for 84a084c3-61a7-5de7-8207-1f88efa59a64...
Nov 24 09:26:28 compute-1 podman[77224]: 2025-11-24 09:26:28.979519987 +0000 UTC m=+0.038492147 container create 37bf49d7886b6993125c7ee1fc06d5dcca821dd64db4bc80ab466d27f416722f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-osd-1-activate, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, ceph=True)
Nov 24 09:26:29 compute-1 systemd[1]: Started libcrun container.
Nov 24 09:26:29 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5772f31678217f9ce719efcf80e6d8557ba541c46675bc0ad345094d391fdb2b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 09:26:29 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5772f31678217f9ce719efcf80e6d8557ba541c46675bc0ad345094d391fdb2b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 09:26:29 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5772f31678217f9ce719efcf80e6d8557ba541c46675bc0ad345094d391fdb2b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 09:26:29 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5772f31678217f9ce719efcf80e6d8557ba541c46675bc0ad345094d391fdb2b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 09:26:29 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5772f31678217f9ce719efcf80e6d8557ba541c46675bc0ad345094d391fdb2b/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff)
Nov 24 09:26:29 compute-1 podman[77224]: 2025-11-24 09:26:29.054186232 +0000 UTC m=+0.113158402 container init 37bf49d7886b6993125c7ee1fc06d5dcca821dd64db4bc80ab466d27f416722f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-osd-1-activate, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0)
Nov 24 09:26:29 compute-1 podman[77224]: 2025-11-24 09:26:28.961338831 +0000 UTC m=+0.020311021 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 09:26:29 compute-1 podman[77224]: 2025-11-24 09:26:29.062106992 +0000 UTC m=+0.121079152 container start 37bf49d7886b6993125c7ee1fc06d5dcca821dd64db4bc80ab466d27f416722f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-osd-1-activate, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, ceph=True, OSD_FLAVOR=default)
Nov 24 09:26:29 compute-1 podman[77224]: 2025-11-24 09:26:29.065104777 +0000 UTC m=+0.124076957 container attach 37bf49d7886b6993125c7ee1fc06d5dcca821dd64db4bc80ab466d27f416722f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-osd-1-activate, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 24 09:26:29 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-osd-1-activate[77238]: Running command: /usr/bin/ceph-authtool --gen-print-key
Nov 24 09:26:29 compute-1 bash[77224]: Running command: /usr/bin/ceph-authtool --gen-print-key
Nov 24 09:26:29 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-osd-1-activate[77238]: Running command: /usr/bin/ceph-authtool --gen-print-key
Nov 24 09:26:29 compute-1 bash[77224]: Running command: /usr/bin/ceph-authtool --gen-print-key
Nov 24 09:26:29 compute-1 lvm[77319]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 24 09:26:29 compute-1 lvm[77319]: VG ceph_vg0 finished
Nov 24 09:26:29 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-osd-1-activate[77238]: --> Failed to activate via raw: did not find any matching OSD to activate
Nov 24 09:26:29 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-osd-1-activate[77238]: Running command: /usr/bin/ceph-authtool --gen-print-key
Nov 24 09:26:29 compute-1 bash[77224]: --> Failed to activate via raw: did not find any matching OSD to activate
Nov 24 09:26:29 compute-1 bash[77224]: Running command: /usr/bin/ceph-authtool --gen-print-key
Nov 24 09:26:29 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-osd-1-activate[77238]: Running command: /usr/bin/ceph-authtool --gen-print-key
Nov 24 09:26:29 compute-1 bash[77224]: Running command: /usr/bin/ceph-authtool --gen-print-key
Nov 24 09:26:29 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-osd-1-activate[77238]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Nov 24 09:26:29 compute-1 bash[77224]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Nov 24 09:26:29 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-osd-1-activate[77238]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg0/ceph_lv0 --path /var/lib/ceph/osd/ceph-1 --no-mon-config
Nov 24 09:26:29 compute-1 bash[77224]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg0/ceph_lv0 --path /var/lib/ceph/osd/ceph-1 --no-mon-config
Nov 24 09:26:30 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-osd-1-activate[77238]: Running command: /usr/bin/ln -snf /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-1/block
Nov 24 09:26:30 compute-1 bash[77224]: Running command: /usr/bin/ln -snf /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-1/block
Nov 24 09:26:30 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-osd-1-activate[77238]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-1/block
Nov 24 09:26:30 compute-1 bash[77224]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-1/block
Nov 24 09:26:30 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-osd-1-activate[77238]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Nov 24 09:26:30 compute-1 bash[77224]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Nov 24 09:26:30 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-osd-1-activate[77238]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Nov 24 09:26:30 compute-1 bash[77224]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Nov 24 09:26:30 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-osd-1-activate[77238]: --> ceph-volume lvm activate successful for osd ID: 1
Nov 24 09:26:30 compute-1 bash[77224]: --> ceph-volume lvm activate successful for osd ID: 1
Nov 24 09:26:30 compute-1 systemd[1]: libpod-37bf49d7886b6993125c7ee1fc06d5dcca821dd64db4bc80ab466d27f416722f.scope: Deactivated successfully.
Nov 24 09:26:30 compute-1 systemd[1]: libpod-37bf49d7886b6993125c7ee1fc06d5dcca821dd64db4bc80ab466d27f416722f.scope: Consumed 1.267s CPU time.
Nov 24 09:26:30 compute-1 podman[77224]: 2025-11-24 09:26:30.2691983 +0000 UTC m=+1.328170480 container died 37bf49d7886b6993125c7ee1fc06d5dcca821dd64db4bc80ab466d27f416722f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-osd-1-activate, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=squid, io.buildah.version=1.40.1)
Nov 24 09:26:30 compute-1 systemd[1]: var-lib-containers-storage-overlay-5772f31678217f9ce719efcf80e6d8557ba541c46675bc0ad345094d391fdb2b-merged.mount: Deactivated successfully.
Nov 24 09:26:30 compute-1 podman[77224]: 2025-11-24 09:26:30.318892888 +0000 UTC m=+1.377865048 container remove 37bf49d7886b6993125c7ee1fc06d5dcca821dd64db4bc80ab466d27f416722f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-osd-1-activate, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 09:26:30 compute-1 podman[77477]: 2025-11-24 09:26:30.500633252 +0000 UTC m=+0.034901156 container create 074006852d3bf59eab891b01d7d44e37ff455737962785dcf590ba91b5bd182c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-osd-1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1)
Nov 24 09:26:30 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a20364e1a80cc0f2dd656806b3ef56b9a4afe6fbb399c54acd1c2245f9bd2d8c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 09:26:30 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a20364e1a80cc0f2dd656806b3ef56b9a4afe6fbb399c54acd1c2245f9bd2d8c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 09:26:30 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a20364e1a80cc0f2dd656806b3ef56b9a4afe6fbb399c54acd1c2245f9bd2d8c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 09:26:30 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a20364e1a80cc0f2dd656806b3ef56b9a4afe6fbb399c54acd1c2245f9bd2d8c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 09:26:30 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a20364e1a80cc0f2dd656806b3ef56b9a4afe6fbb399c54acd1c2245f9bd2d8c/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff)
Nov 24 09:26:30 compute-1 podman[77477]: 2025-11-24 09:26:30.559267905 +0000 UTC m=+0.093535829 container init 074006852d3bf59eab891b01d7d44e37ff455737962785dcf590ba91b5bd182c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-osd-1, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 09:26:30 compute-1 podman[77477]: 2025-11-24 09:26:30.566097837 +0000 UTC m=+0.100365741 container start 074006852d3bf59eab891b01d7d44e37ff455737962785dcf590ba91b5bd182c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-osd-1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, io.buildah.version=1.40.1)
Nov 24 09:26:30 compute-1 bash[77477]: 074006852d3bf59eab891b01d7d44e37ff455737962785dcf590ba91b5bd182c
Nov 24 09:26:30 compute-1 podman[77477]: 2025-11-24 09:26:30.485979645 +0000 UTC m=+0.020247569 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 09:26:30 compute-1 systemd[1]: Started Ceph osd.1 for 84a084c3-61a7-5de7-8207-1f88efa59a64.
Nov 24 09:26:30 compute-1 ceph-osd[77497]: set uid:gid to 167:167 (ceph:ceph)
Nov 24 09:26:30 compute-1 ceph-osd[77497]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-osd, pid 2
Nov 24 09:26:30 compute-1 ceph-osd[77497]: pidfile_write: ignore empty --pid-file
Nov 24 09:26:30 compute-1 ceph-osd[77497]: bdev(0x5634bb945800 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Nov 24 09:26:30 compute-1 ceph-osd[77497]: bdev(0x5634bb945800 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Nov 24 09:26:30 compute-1 ceph-osd[77497]: bdev(0x5634bb945800 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 24 09:26:30 compute-1 ceph-osd[77497]: bdev(0x5634bb945800 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 24 09:26:30 compute-1 ceph-osd[77497]: bdev(0x5634bb945800 /var/lib/ceph/osd/ceph-1/block) close
Nov 24 09:26:30 compute-1 sudo[76938]: pam_unix(sudo:session): session closed for user root
Nov 24 09:26:30 compute-1 sudo[77509]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 09:26:30 compute-1 sudo[77509]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:26:30 compute-1 sudo[77509]: pam_unix(sudo:session): session closed for user root
Nov 24 09:26:30 compute-1 sudo[77534]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 84a084c3-61a7-5de7-8207-1f88efa59a64 -- raw list --format json
Nov 24 09:26:30 compute-1 sudo[77534]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:26:30 compute-1 ceph-osd[77497]: bdev(0x5634bb945800 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Nov 24 09:26:30 compute-1 ceph-osd[77497]: bdev(0x5634bb945800 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Nov 24 09:26:30 compute-1 ceph-osd[77497]: bdev(0x5634bb945800 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 24 09:26:30 compute-1 ceph-osd[77497]: bdev(0x5634bb945800 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 24 09:26:30 compute-1 ceph-osd[77497]: bdev(0x5634bb945800 /var/lib/ceph/osd/ceph-1/block) close
Nov 24 09:26:30 compute-1 ceph-osd[77497]: bdev(0x5634bb945800 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Nov 24 09:26:30 compute-1 ceph-osd[77497]: bdev(0x5634bb945800 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Nov 24 09:26:30 compute-1 ceph-osd[77497]: bdev(0x5634bb945800 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 24 09:26:30 compute-1 ceph-osd[77497]: bdev(0x5634bb945800 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 24 09:26:30 compute-1 ceph-osd[77497]: bdev(0x5634bb945800 /var/lib/ceph/osd/ceph-1/block) close
Nov 24 09:26:30 compute-1 ceph-osd[77497]: bdev(0x5634bb945800 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Nov 24 09:26:30 compute-1 ceph-osd[77497]: bdev(0x5634bb945800 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Nov 24 09:26:30 compute-1 ceph-osd[77497]: bdev(0x5634bb945800 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 24 09:26:30 compute-1 ceph-osd[77497]: bdev(0x5634bb945800 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 24 09:26:30 compute-1 ceph-osd[77497]: bdev(0x5634bb945800 /var/lib/ceph/osd/ceph-1/block) close
Nov 24 09:26:31 compute-1 podman[77604]: 2025-11-24 09:26:31.0786234 +0000 UTC m=+0.041635696 container create 8d87c7fd53e68ed8de5e5727a2a14e8bcf403eee0225f7664f8e8891170e18bb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_engelbart, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.build-date=20250325, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 09:26:31 compute-1 systemd[1]: Started libpod-conmon-8d87c7fd53e68ed8de5e5727a2a14e8bcf403eee0225f7664f8e8891170e18bb.scope.
Nov 24 09:26:31 compute-1 systemd[1]: Started libcrun container.
Nov 24 09:26:31 compute-1 podman[77604]: 2025-11-24 09:26:31.061466679 +0000 UTC m=+0.024478995 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 09:26:31 compute-1 podman[77604]: 2025-11-24 09:26:31.165233455 +0000 UTC m=+0.128245771 container init 8d87c7fd53e68ed8de5e5727a2a14e8bcf403eee0225f7664f8e8891170e18bb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_engelbart, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 09:26:31 compute-1 podman[77604]: 2025-11-24 09:26:31.173099893 +0000 UTC m=+0.136112189 container start 8d87c7fd53e68ed8de5e5727a2a14e8bcf403eee0225f7664f8e8891170e18bb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_engelbart, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 24 09:26:31 compute-1 podman[77604]: 2025-11-24 09:26:31.176955349 +0000 UTC m=+0.139967695 container attach 8d87c7fd53e68ed8de5e5727a2a14e8bcf403eee0225f7664f8e8891170e18bb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_engelbart, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 24 09:26:31 compute-1 quizzical_engelbart[77620]: 167 167
Nov 24 09:26:31 compute-1 systemd[1]: libpod-8d87c7fd53e68ed8de5e5727a2a14e8bcf403eee0225f7664f8e8891170e18bb.scope: Deactivated successfully.
Nov 24 09:26:31 compute-1 podman[77604]: 2025-11-24 09:26:31.180338295 +0000 UTC m=+0.143350591 container died 8d87c7fd53e68ed8de5e5727a2a14e8bcf403eee0225f7664f8e8891170e18bb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_engelbart, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Nov 24 09:26:31 compute-1 systemd[1]: var-lib-containers-storage-overlay-9f0692e4056a89d5cae3738aa49d0bc670841366a357395f6a463bee39170503-merged.mount: Deactivated successfully.
Nov 24 09:26:31 compute-1 podman[77604]: 2025-11-24 09:26:31.212167695 +0000 UTC m=+0.175179991 container remove 8d87c7fd53e68ed8de5e5727a2a14e8bcf403eee0225f7664f8e8891170e18bb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_engelbart, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.40.1)
Nov 24 09:26:31 compute-1 ceph-osd[77497]: bdev(0x5634bb945800 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Nov 24 09:26:31 compute-1 ceph-osd[77497]: bdev(0x5634bb945800 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Nov 24 09:26:31 compute-1 ceph-osd[77497]: bdev(0x5634bb945800 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 24 09:26:31 compute-1 ceph-osd[77497]: bdev(0x5634bb945800 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 24 09:26:31 compute-1 ceph-osd[77497]: bdev(0x5634bb945800 /var/lib/ceph/osd/ceph-1/block) close
Nov 24 09:26:31 compute-1 systemd[1]: libpod-conmon-8d87c7fd53e68ed8de5e5727a2a14e8bcf403eee0225f7664f8e8891170e18bb.scope: Deactivated successfully.
Nov 24 09:26:31 compute-1 podman[77644]: 2025-11-24 09:26:31.380036591 +0000 UTC m=+0.066357198 container create 17dfbbe4b6f75722be508611871aae43401626e6c06637de2a8125a332576de7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_lamarr, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325)
Nov 24 09:26:31 compute-1 systemd[1]: Started libpod-conmon-17dfbbe4b6f75722be508611871aae43401626e6c06637de2a8125a332576de7.scope.
Nov 24 09:26:31 compute-1 podman[77644]: 2025-11-24 09:26:31.341747019 +0000 UTC m=+0.028067696 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 09:26:31 compute-1 systemd[1]: Started libcrun container.
Nov 24 09:26:31 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a698d9f57c0dd27f68e60b82299f051b559413dc61c1e6ca557e67600eabca0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 09:26:31 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a698d9f57c0dd27f68e60b82299f051b559413dc61c1e6ca557e67600eabca0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 09:26:31 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a698d9f57c0dd27f68e60b82299f051b559413dc61c1e6ca557e67600eabca0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 09:26:31 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a698d9f57c0dd27f68e60b82299f051b559413dc61c1e6ca557e67600eabca0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 09:26:31 compute-1 podman[77644]: 2025-11-24 09:26:31.485561371 +0000 UTC m=+0.171881948 container init 17dfbbe4b6f75722be508611871aae43401626e6c06637de2a8125a332576de7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_lamarr, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_REF=squid)
Nov 24 09:26:31 compute-1 podman[77644]: 2025-11-24 09:26:31.495758827 +0000 UTC m=+0.182079394 container start 17dfbbe4b6f75722be508611871aae43401626e6c06637de2a8125a332576de7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_lamarr, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 09:26:31 compute-1 ceph-osd[77497]: bdev(0x5634bb945800 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Nov 24 09:26:31 compute-1 ceph-osd[77497]: bdev(0x5634bb945800 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Nov 24 09:26:31 compute-1 ceph-osd[77497]: bdev(0x5634bb945800 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 24 09:26:31 compute-1 podman[77644]: 2025-11-24 09:26:31.499192064 +0000 UTC m=+0.185512651 container attach 17dfbbe4b6f75722be508611871aae43401626e6c06637de2a8125a332576de7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_lamarr, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0)
Nov 24 09:26:31 compute-1 ceph-osd[77497]: bdev(0x5634bb945800 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 24 09:26:31 compute-1 ceph-osd[77497]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Nov 24 09:26:31 compute-1 ceph-osd[77497]: bdev(0x5634bb945c00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Nov 24 09:26:31 compute-1 ceph-osd[77497]: bdev(0x5634bb945c00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Nov 24 09:26:31 compute-1 ceph-osd[77497]: bdev(0x5634bb945c00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 24 09:26:31 compute-1 ceph-osd[77497]: bdev(0x5634bb945c00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 24 09:26:31 compute-1 ceph-osd[77497]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 20 GiB
Nov 24 09:26:31 compute-1 ceph-osd[77497]: bdev(0x5634bb945c00 /var/lib/ceph/osd/ceph-1/block) close
Nov 24 09:26:31 compute-1 ceph-osd[77497]: bdev(0x5634bb945800 /var/lib/ceph/osd/ceph-1/block) close
Nov 24 09:26:32 compute-1 ceph-osd[77497]: starting osd.1 osd_data /var/lib/ceph/osd/ceph-1 /var/lib/ceph/osd/ceph-1/journal
Nov 24 09:26:32 compute-1 ceph-osd[77497]: load: jerasure load: lrc 
Nov 24 09:26:32 compute-1 ceph-osd[77497]: bdev(0x5634bc7e0c00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Nov 24 09:26:32 compute-1 ceph-osd[77497]: bdev(0x5634bc7e0c00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Nov 24 09:26:32 compute-1 ceph-osd[77497]: bdev(0x5634bc7e0c00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 24 09:26:32 compute-1 ceph-osd[77497]: bdev(0x5634bc7e0c00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 24 09:26:32 compute-1 ceph-osd[77497]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Nov 24 09:26:32 compute-1 ceph-osd[77497]: bdev(0x5634bc7e0c00 /var/lib/ceph/osd/ceph-1/block) close
Nov 24 09:26:32 compute-1 lvm[77743]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 24 09:26:32 compute-1 lvm[77743]: VG ceph_vg0 finished
Nov 24 09:26:32 compute-1 interesting_lamarr[77660]: {}
Nov 24 09:26:32 compute-1 systemd[1]: libpod-17dfbbe4b6f75722be508611871aae43401626e6c06637de2a8125a332576de7.scope: Deactivated successfully.
Nov 24 09:26:32 compute-1 systemd[1]: libpod-17dfbbe4b6f75722be508611871aae43401626e6c06637de2a8125a332576de7.scope: Consumed 1.171s CPU time.
Nov 24 09:26:32 compute-1 podman[77746]: 2025-11-24 09:26:32.278866036 +0000 UTC m=+0.026075495 container died 17dfbbe4b6f75722be508611871aae43401626e6c06637de2a8125a332576de7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_lamarr, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 24 09:26:32 compute-1 systemd[1]: var-lib-containers-storage-overlay-0a698d9f57c0dd27f68e60b82299f051b559413dc61c1e6ca557e67600eabca0-merged.mount: Deactivated successfully.
Nov 24 09:26:32 compute-1 podman[77746]: 2025-11-24 09:26:32.31075819 +0000 UTC m=+0.057967619 container remove 17dfbbe4b6f75722be508611871aae43401626e6c06637de2a8125a332576de7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_lamarr, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 24 09:26:32 compute-1 systemd[1]: libpod-conmon-17dfbbe4b6f75722be508611871aae43401626e6c06637de2a8125a332576de7.scope: Deactivated successfully.
Nov 24 09:26:32 compute-1 ceph-osd[77497]: bdev(0x5634bc7e0c00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Nov 24 09:26:32 compute-1 ceph-osd[77497]: bdev(0x5634bc7e0c00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Nov 24 09:26:32 compute-1 ceph-osd[77497]: bdev(0x5634bc7e0c00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 24 09:26:32 compute-1 ceph-osd[77497]: bdev(0x5634bc7e0c00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 24 09:26:32 compute-1 ceph-osd[77497]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Nov 24 09:26:32 compute-1 ceph-osd[77497]: bdev(0x5634bc7e0c00 /var/lib/ceph/osd/ceph-1/block) close
Nov 24 09:26:32 compute-1 sudo[77534]: pam_unix(sudo:session): session closed for user root
Nov 24 09:26:32 compute-1 ceph-osd[77497]: mClockScheduler: set_osd_capacity_params_from_config: osd_bandwidth_cost_per_io: 499321.90 bytes/io, osd_bandwidth_capacity_per_shard 157286400.00 bytes/second
Nov 24 09:26:32 compute-1 ceph-osd[77497]: osd.1:0.OSDShard using op scheduler mclock_scheduler, cutoff=196
Nov 24 09:26:32 compute-1 ceph-osd[77497]: bdev(0x5634bc7e0c00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Nov 24 09:26:32 compute-1 ceph-osd[77497]: bdev(0x5634bc7e0c00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Nov 24 09:26:32 compute-1 ceph-osd[77497]: bdev(0x5634bc7e0c00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 24 09:26:32 compute-1 ceph-osd[77497]: bdev(0x5634bc7e0c00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 24 09:26:32 compute-1 ceph-osd[77497]: bdev(0x5634bc7e0c00 /var/lib/ceph/osd/ceph-1/block) close
Nov 24 09:26:32 compute-1 ceph-osd[77497]: bdev(0x5634bc7e0c00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Nov 24 09:26:32 compute-1 ceph-osd[77497]: bdev(0x5634bc7e0c00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Nov 24 09:26:32 compute-1 ceph-osd[77497]: bdev(0x5634bc7e0c00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 24 09:26:32 compute-1 ceph-osd[77497]: bdev(0x5634bc7e0c00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 24 09:26:32 compute-1 ceph-osd[77497]: bdev(0x5634bc7e0c00 /var/lib/ceph/osd/ceph-1/block) close
Nov 24 09:26:33 compute-1 sudo[77776]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 24 09:26:33 compute-1 sudo[77776]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:26:33 compute-1 sudo[77776]: pam_unix(sudo:session): session closed for user root
Nov 24 09:26:33 compute-1 ceph-osd[77497]: bdev(0x5634bc7e0c00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Nov 24 09:26:33 compute-1 ceph-osd[77497]: bdev(0x5634bc7e0c00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Nov 24 09:26:33 compute-1 ceph-osd[77497]: bdev(0x5634bc7e0c00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 24 09:26:33 compute-1 ceph-osd[77497]: bdev(0x5634bc7e0c00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 24 09:26:33 compute-1 ceph-osd[77497]: bdev(0x5634bc7e0c00 /var/lib/ceph/osd/ceph-1/block) close
Nov 24 09:26:33 compute-1 ceph-osd[77497]: bdev(0x5634bc7e0c00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Nov 24 09:26:33 compute-1 ceph-osd[77497]: bdev(0x5634bc7e0c00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Nov 24 09:26:33 compute-1 ceph-osd[77497]: bdev(0x5634bc7e0c00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 24 09:26:33 compute-1 ceph-osd[77497]: bdev(0x5634bc7e0c00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 24 09:26:33 compute-1 ceph-osd[77497]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Nov 24 09:26:33 compute-1 ceph-osd[77497]: bdev(0x5634bc7e1000 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Nov 24 09:26:33 compute-1 ceph-osd[77497]: bdev(0x5634bc7e1000 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Nov 24 09:26:33 compute-1 ceph-osd[77497]: bdev(0x5634bc7e1000 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 24 09:26:33 compute-1 ceph-osd[77497]: bdev(0x5634bc7e1000 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 24 09:26:33 compute-1 ceph-osd[77497]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 20 GiB
Nov 24 09:26:33 compute-1 ceph-osd[77497]: bluefs mount
Nov 24 09:26:33 compute-1 ceph-osd[77497]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Nov 24 09:26:33 compute-1 ceph-osd[77497]: bluefs mount final locked allocations 0 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Nov 24 09:26:33 compute-1 ceph-osd[77497]: bluefs mount final locked allocations 1 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Nov 24 09:26:33 compute-1 ceph-osd[77497]: bluefs mount final locked allocations 2 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Nov 24 09:26:33 compute-1 ceph-osd[77497]: bluefs mount final locked allocations 3 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Nov 24 09:26:33 compute-1 ceph-osd[77497]: bluefs mount final locked allocations 4 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Nov 24 09:26:33 compute-1 ceph-osd[77497]: bluefs mount shared_bdev_used = 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: bluestore(/var/lib/ceph/osd/ceph-1) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: RocksDB version: 7.9.2
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Git sha 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Compile date 2025-07-17 03:12:14
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: DB SUMMARY
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: DB Session ID:  02OCALY6G9ZWHVIYWD2O
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: CURRENT file:  CURRENT
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: IDENTITY file:  IDENTITY
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5097 ; 
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                         Options.error_if_exists: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                       Options.create_if_missing: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                         Options.paranoid_checks: 1
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:             Options.flush_verify_memtable_count: 1
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                                     Options.env: 0x5634bc7b1dc0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                                      Options.fs: LegacyFileSystem
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                                Options.info_log: 0x5634bc7b57a0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                Options.max_file_opening_threads: 16
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                              Options.statistics: (nil)
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                               Options.use_fsync: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                       Options.max_log_file_size: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                   Options.log_file_time_to_roll: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                       Options.keep_log_file_num: 1000
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                    Options.recycle_log_file_num: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                         Options.allow_fallocate: 1
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                        Options.allow_mmap_reads: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                       Options.allow_mmap_writes: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                        Options.use_direct_reads: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:          Options.create_missing_column_families: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                              Options.db_log_dir: 
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                                 Options.wal_dir: db.wal
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                Options.table_cache_numshardbits: 6
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                         Options.WAL_ttl_seconds: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                       Options.WAL_size_limit_MB: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:             Options.manifest_preallocation_size: 4194304
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                     Options.is_fd_close_on_exec: 1
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                   Options.advise_random_on_open: 1
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                    Options.db_write_buffer_size: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                    Options.write_buffer_manager: 0x5634bc8aaa00
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:         Options.access_hint_on_compaction_start: 1
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                      Options.use_adaptive_mutex: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                            Options.rate_limiter: (nil)
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                       Options.wal_recovery_mode: 2
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                  Options.enable_thread_tracking: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                  Options.enable_pipelined_write: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                  Options.unordered_write: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:             Options.write_thread_max_yield_usec: 100
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                               Options.row_cache: None
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                              Options.wal_filter: None
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:             Options.avoid_flush_during_recovery: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:             Options.allow_ingest_behind: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:             Options.two_write_queues: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:             Options.manual_wal_flush: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:             Options.wal_compression: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:             Options.atomic_flush: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                 Options.persist_stats_to_disk: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                 Options.write_dbid_to_manifest: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                 Options.log_readahead_size: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                 Options.best_efforts_recovery: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:             Options.allow_data_in_errors: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:             Options.db_host_id: __hostname__
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:             Options.enforce_single_del_contracts: true
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:             Options.max_background_jobs: 4
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:             Options.max_background_compactions: -1
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:             Options.max_subcompactions: 1
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:           Options.writable_file_max_buffer_size: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:             Options.delayed_write_rate : 16777216
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:             Options.max_total_wal_size: 1073741824
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                   Options.stats_dump_period_sec: 600
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                 Options.stats_persist_period_sec: 600
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                          Options.max_open_files: -1
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                          Options.bytes_per_sync: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                      Options.wal_bytes_per_sync: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                   Options.strict_bytes_per_sync: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:       Options.compaction_readahead_size: 2097152
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                  Options.max_background_flushes: -1
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Compression algorithms supported:
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:         kZSTD supported: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:         kXpressCompression supported: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:         kBZip2Compression supported: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:         kZSTDNotFinalCompression supported: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:         kLZ4Compression supported: 1
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:         kZlibCompression supported: 1
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:         kLZ4HCCompression supported: 1
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:         kSnappyCompression supported: 1
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Fast CRC32 supported: Supported on x86
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: DMutex implementation: pthread_mutex_t
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: [db/db_impl/db_impl_readonly.cc:25] Opening the db in read only mode
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:        Options.compaction_filter: None
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:        Options.compaction_filter_factory: None
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:  Options.sst_partitioner_factory: None
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5634bc7b5b60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5634bb9db350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:        Options.write_buffer_size: 16777216
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:  Options.max_write_buffer_number: 64
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:          Options.compression: LZ4
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:       Options.prefix_extractor: nullptr
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:             Options.num_levels: 7
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                  Options.compression_opts.level: 32767
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:               Options.compression_opts.strategy: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                  Options.compression_opts.enabled: false
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                        Options.arena_block_size: 1048576
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                Options.disable_auto_compactions: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                   Options.inplace_update_support: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                           Options.bloom_locality: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                    Options.max_successive_merges: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                Options.paranoid_file_checks: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                Options.force_consistency_checks: 1
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                Options.report_bg_io_stats: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                               Options.ttl: 2592000
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                       Options.enable_blob_files: false
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                           Options.min_blob_size: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                          Options.blob_file_size: 268435456
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                Options.blob_file_starting_level: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:           Options.merge_operator: None
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:        Options.compaction_filter: None
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:        Options.compaction_filter_factory: None
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:  Options.sst_partitioner_factory: None
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5634bc7b5b60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5634bb9db350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:        Options.write_buffer_size: 16777216
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:  Options.max_write_buffer_number: 64
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:          Options.compression: LZ4
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:       Options.prefix_extractor: nullptr
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:             Options.num_levels: 7
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                  Options.compression_opts.level: 32767
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:               Options.compression_opts.strategy: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                  Options.compression_opts.enabled: false
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                        Options.arena_block_size: 1048576
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                Options.disable_auto_compactions: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                   Options.inplace_update_support: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                           Options.bloom_locality: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                    Options.max_successive_merges: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                Options.paranoid_file_checks: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                Options.force_consistency_checks: 1
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                Options.report_bg_io_stats: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                               Options.ttl: 2592000
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                       Options.enable_blob_files: false
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                           Options.min_blob_size: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                          Options.blob_file_size: 268435456
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                Options.blob_file_starting_level: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:           Options.merge_operator: None
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:        Options.compaction_filter: None
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:        Options.compaction_filter_factory: None
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:  Options.sst_partitioner_factory: None
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5634bc7b5b60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5634bb9db350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:        Options.write_buffer_size: 16777216
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:  Options.max_write_buffer_number: 64
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:          Options.compression: LZ4
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:       Options.prefix_extractor: nullptr
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:             Options.num_levels: 7
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                  Options.compression_opts.level: 32767
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:               Options.compression_opts.strategy: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                  Options.compression_opts.enabled: false
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                        Options.arena_block_size: 1048576
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                Options.disable_auto_compactions: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                   Options.inplace_update_support: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                           Options.bloom_locality: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                    Options.max_successive_merges: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                Options.paranoid_file_checks: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                Options.force_consistency_checks: 1
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                Options.report_bg_io_stats: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                               Options.ttl: 2592000
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                       Options.enable_blob_files: false
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                           Options.min_blob_size: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                          Options.blob_file_size: 268435456
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                Options.blob_file_starting_level: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:           Options.merge_operator: None
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:        Options.compaction_filter: None
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:        Options.compaction_filter_factory: None
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:  Options.sst_partitioner_factory: None
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5634bc7b5b60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5634bb9db350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:        Options.write_buffer_size: 16777216
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:  Options.max_write_buffer_number: 64
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:          Options.compression: LZ4
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:       Options.prefix_extractor: nullptr
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:             Options.num_levels: 7
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                  Options.compression_opts.level: 32767
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:               Options.compression_opts.strategy: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                  Options.compression_opts.enabled: false
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                        Options.arena_block_size: 1048576
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                Options.disable_auto_compactions: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                   Options.inplace_update_support: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                           Options.bloom_locality: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                    Options.max_successive_merges: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                Options.paranoid_file_checks: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                Options.force_consistency_checks: 1
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                Options.report_bg_io_stats: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                               Options.ttl: 2592000
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                       Options.enable_blob_files: false
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                           Options.min_blob_size: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                          Options.blob_file_size: 268435456
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                Options.blob_file_starting_level: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:           Options.merge_operator: None
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:        Options.compaction_filter: None
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:        Options.compaction_filter_factory: None
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:  Options.sst_partitioner_factory: None
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5634bc7b5b60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5634bb9db350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:        Options.write_buffer_size: 16777216
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:  Options.max_write_buffer_number: 64
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:          Options.compression: LZ4
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:       Options.prefix_extractor: nullptr
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:             Options.num_levels: 7
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                  Options.compression_opts.level: 32767
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:               Options.compression_opts.strategy: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                  Options.compression_opts.enabled: false
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                        Options.arena_block_size: 1048576
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                Options.disable_auto_compactions: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                   Options.inplace_update_support: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                           Options.bloom_locality: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                    Options.max_successive_merges: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                Options.paranoid_file_checks: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                Options.force_consistency_checks: 1
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                Options.report_bg_io_stats: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                               Options.ttl: 2592000
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                       Options.enable_blob_files: false
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                           Options.min_blob_size: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                          Options.blob_file_size: 268435456
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                Options.blob_file_starting_level: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:           Options.merge_operator: None
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:        Options.compaction_filter: None
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:        Options.compaction_filter_factory: None
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:  Options.sst_partitioner_factory: None
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5634bc7b5b60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5634bb9db350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:        Options.write_buffer_size: 16777216
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:  Options.max_write_buffer_number: 64
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:          Options.compression: LZ4
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:       Options.prefix_extractor: nullptr
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:             Options.num_levels: 7
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                  Options.compression_opts.level: 32767
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:               Options.compression_opts.strategy: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                  Options.compression_opts.enabled: false
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                        Options.arena_block_size: 1048576
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                Options.disable_auto_compactions: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                   Options.inplace_update_support: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                           Options.bloom_locality: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                    Options.max_successive_merges: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                Options.paranoid_file_checks: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                Options.force_consistency_checks: 1
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                Options.report_bg_io_stats: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                               Options.ttl: 2592000
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                       Options.enable_blob_files: false
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                           Options.min_blob_size: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                          Options.blob_file_size: 268435456
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                Options.blob_file_starting_level: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:           Options.merge_operator: None
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:        Options.compaction_filter: None
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:        Options.compaction_filter_factory: None
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:  Options.sst_partitioner_factory: None
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5634bc7b5b60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5634bb9db350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:        Options.write_buffer_size: 16777216
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:  Options.max_write_buffer_number: 64
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:          Options.compression: LZ4
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:       Options.prefix_extractor: nullptr
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:             Options.num_levels: 7
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                  Options.compression_opts.level: 32767
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:               Options.compression_opts.strategy: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                  Options.compression_opts.enabled: false
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                        Options.arena_block_size: 1048576
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                Options.disable_auto_compactions: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                   Options.inplace_update_support: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                           Options.bloom_locality: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                    Options.max_successive_merges: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                Options.paranoid_file_checks: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                Options.force_consistency_checks: 1
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                Options.report_bg_io_stats: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                               Options.ttl: 2592000
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                       Options.enable_blob_files: false
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                           Options.min_blob_size: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                          Options.blob_file_size: 268435456
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                Options.blob_file_starting_level: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
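[annotation] A quick sanity check on the sizes in these dumps, derived purely from the logged values:

    write_buffer_size                16777216 B   = 16 MiB per memtable
    max_write_buffer_number          64           -> up to 64 x 16 MiB = 1 GiB of memtables per CF
    min_write_buffer_number_to_merge 6            -> a flush merges at least 6 memtables (>= 96 MiB)
    p-* block cache capacity         483183820 B  ~ 461 MiB (= 0.45 x 1 GiB, likely a ratio-based split)
    O-* block cache capacity         536870912 B  = 512 MiB exactly

The identical block_cache pointers across the p-* dumps (0x5634bb9db350) and across the O-* dumps (0x5634bb9da9b0) show that the shards share two caches rather than each owning 461 or 512 MiB of its own.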
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:           Options.merge_operator: None
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:        Options.compaction_filter: None
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:        Options.compaction_filter_factory: None
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:  Options.sst_partitioner_factory: None
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5634bc7b5b80)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5634bb9da9b0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:        Options.write_buffer_size: 16777216
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:  Options.max_write_buffer_number: 64
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:          Options.compression: LZ4
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:       Options.prefix_extractor: nullptr
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:             Options.num_levels: 7
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                  Options.compression_opts.level: 32767
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:               Options.compression_opts.strategy: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                  Options.compression_opts.enabled: false
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                        Options.arena_block_size: 1048576
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                Options.disable_auto_compactions: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                   Options.inplace_update_support: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                           Options.bloom_locality: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                    Options.max_successive_merges: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                Options.paranoid_file_checks: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                Options.force_consistency_checks: 1
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                Options.report_bg_io_stats: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                               Options.ttl: 2592000
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                       Options.enable_blob_files: false
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                           Options.min_blob_size: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                          Options.blob_file_size: 268435456
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                Options.blob_file_starting_level: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:           Options.merge_operator: None
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:        Options.compaction_filter: None
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:        Options.compaction_filter_factory: None
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:  Options.sst_partitioner_factory: None
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5634bc7b5b80)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5634bb9da9b0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:        Options.write_buffer_size: 16777216
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:  Options.max_write_buffer_number: 64
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:          Options.compression: LZ4
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:       Options.prefix_extractor: nullptr
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:             Options.num_levels: 7
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                  Options.compression_opts.level: 32767
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:               Options.compression_opts.strategy: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                  Options.compression_opts.enabled: false
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                        Options.arena_block_size: 1048576
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                Options.disable_auto_compactions: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                   Options.inplace_update_support: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                           Options.bloom_locality: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                    Options.max_successive_merges: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                Options.paranoid_file_checks: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                Options.force_consistency_checks: 1
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                Options.report_bg_io_stats: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                               Options.ttl: 2592000
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                       Options.enable_blob_files: false
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                           Options.min_blob_size: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                          Options.blob_file_size: 268435456
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                Options.blob_file_starting_level: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
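[annotation] Option strings of this shape can be parsed with RocksDB's convenience API. The sketch below is a minimal standalone example, assuming the classic 3-argument GetColumnFamilyOptionsFromString() from rocksdb/convenience.h and an illustrative subset of the values dumped above; it is not Ceph's actual code path.

    // Parse a per-column-family option string like the ones BlueStore
    // applies to its sharded CFs, then print two fields back.
    #include <iostream>
    #include <string>
    #include <rocksdb/options.h>
    #include <rocksdb/convenience.h>

    int main() {
      rocksdb::ColumnFamilyOptions base;
      rocksdb::ColumnFamilyOptions cf;
      // Illustrative subset mirroring the dump above.
      const std::string spec =
          "write_buffer_size=16777216;"
          "max_write_buffer_number=64;"
          "min_write_buffer_number_to_merge=6;"
          "compression=kLZ4Compression;"
          "level0_file_num_compaction_trigger=8";
      rocksdb::Status s =
          rocksdb::GetColumnFamilyOptionsFromString(base, spec, &cf);
      if (!s.ok()) {
        std::cerr << "parse failed: " << s.ToString() << "\n";
        return 1;
      }
      std::cout << "write_buffer_size=" << cf.write_buffer_size
                << " max_write_buffer_number=" << cf.max_write_buffer_number
                << "\n";
      return 0;
    }

Compiled against librocksdb (e.g. g++ parse_cf.cc -lrocksdb), this prints the parsed 16777216 / 64 values, matching the Options lines RocksDB logs for each shard at startup.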
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:           Options.merge_operator: None
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:        Options.compaction_filter: None
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:        Options.compaction_filter_factory: None
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:  Options.sst_partitioner_factory: None
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5634bc7b5b80)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5634bb9da9b0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:        Options.write_buffer_size: 16777216
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:  Options.max_write_buffer_number: 64
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:          Options.compression: LZ4
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:       Options.prefix_extractor: nullptr
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:             Options.num_levels: 7
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                  Options.compression_opts.level: 32767
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:               Options.compression_opts.strategy: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                  Options.compression_opts.enabled: false
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                        Options.arena_block_size: 1048576
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                Options.disable_auto_compactions: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                   Options.inplace_update_support: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                           Options.bloom_locality: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                    Options.max_successive_merges: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                Options.paranoid_file_checks: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                Options.force_consistency_checks: 1
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                Options.report_bg_io_stats: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                               Options.ttl: 2592000
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                       Options.enable_blob_files: false
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                           Options.min_blob_size: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                          Options.blob_file_size: 268435456
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                Options.blob_file_starting_level: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
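
A quick sanity check of the memtable sizing implied by the options above: with a 16 MiB write buffer, flushes merge six buffers at a time and the per-column-family ceiling is 64 buffers. A minimal Python sketch, using only figures copied from this log:

    write_buffer_size = 16 * 1024 * 1024   # Options.write_buffer_size: 16777216
    min_merge = 6                          # Options.min_write_buffer_number_to_merge
    max_buffers = 64                       # Options.max_write_buffer_number

    flush_unit = write_buffer_size * min_merge   # data merged into one L0 flush
    ceiling = write_buffer_size * max_buffers    # worst-case memtable memory per CF

    print(f"flush unit: {flush_unit / 2**20:.0f} MiB")     # 96 MiB
    print(f"memtable ceiling: {ceiling / 2**30:.0f} GiB")  # 1 GiB
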
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file: db/MANIFEST-000032 succeeded, manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5, prev_log_number is 0, max_column_family is 11, min_log_number_to_keep is 5
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: bb65118b-1a2a-4ee6-a16f-d5932cc21adb
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763976393161976, "job": 1, "event": "recovery_started", "wal_files": [31]}
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763976393162192, "job": 1, "event": "recovery_finished"}
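
The EVENT_LOG_v1 lines above carry a JSON payload after a fixed prefix, which makes them easy to machine-read. A minimal sketch, assuming the journal has been dumped to a file named journal.txt (the file name is hypothetical):

    import json
    import re

    pattern = re.compile(r"EVENT_LOG_v1 (\{.*\})")
    with open("journal.txt") as fh:
        for line in fh:
            m = pattern.search(line)
            if m:
                event = json.loads(m.group(1))
                # e.g. recovery_started [31], then recovery_finished
                print(event["event"], event.get("wal_files", ""))
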
Nov 24 09:26:33 compute-1 ceph-osd[77497]: bluestore(/var/lib/ceph/osd/ceph-1) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
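
The _open_db line above records the exact option string BlueStore handed to RocksDB (the bluestore_rocksdb_options setting). It is a flat comma-separated key=value list, so it can be split mechanically; note that values such as 2MB stay as strings:

    # Option string copied verbatim from the _open_db line above.
    opts_str = ("compression=kLZ4Compression,max_write_buffer_number=64,"
                "min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,"
                "write_buffer_size=16777216,max_background_jobs=4,"
                "level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,"
                "max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,"
                "max_total_wal_size=1073741824,writable_file_max_buffer_size=0")

    opts = dict(kv.split("=", 1) for kv in opts_str.split(","))
    print(opts["compression"], opts["compaction_readahead_size"])  # kLZ4Compression 2MB
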
Nov 24 09:26:33 compute-1 ceph-osd[77497]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta old nid_max 1025
Nov 24 09:26:33 compute-1 ceph-osd[77497]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta old blobid_max 10240
Nov 24 09:26:33 compute-1 ceph-osd[77497]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta ondisk_format 4 compat_ondisk_format 3
Nov 24 09:26:33 compute-1 ceph-osd[77497]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta min_alloc_size 0x1000
Nov 24 09:26:33 compute-1 ceph-osd[77497]: freelist init
Nov 24 09:26:33 compute-1 ceph-osd[77497]: freelist _read_cfg
Nov 24 09:26:33 compute-1 ceph-osd[77497]: bluestore(/var/lib/ceph/osd/ceph-1) _init_alloc loaded 20 GiB in 2 extents, allocator type hybrid, capacity 0x4ffc00000, block size 0x1000, free 0x4ffbfd000, fragmentation 1.9e-07
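
The _init_alloc line reports capacity and free space as hex byte counts. Converting them shows how close to empty this OSD is at mount time; the values below are copied from the line above:

    capacity = 0x4ffc00000   # 21470642176 bytes
    free = 0x4ffbfd000
    block = 0x1000           # 4 KiB allocation unit

    used = capacity - free
    print(f"capacity: {capacity / 2**30:.1f} GiB")         # 20.0 GiB
    print(f"used: {used} bytes ({used // block} blocks)")  # 12288 bytes (3 blocks)
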
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Nov 24 09:26:33 compute-1 ceph-osd[77497]: bluefs umount
Nov 24 09:26:33 compute-1 ceph-osd[77497]: bdev(0x5634bc7e1000 /var/lib/ceph/osd/ceph-1/block) close
Nov 24 09:26:33 compute-1 sudo[77990]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 09:26:33 compute-1 sudo[77990]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:26:33 compute-1 sudo[77990]: pam_unix(sudo:session): session closed for user root
Nov 24 09:26:33 compute-1 sudo[78015]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ls
Nov 24 09:26:33 compute-1 sudo[78015]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:26:33 compute-1 ceph-osd[77497]: bdev(0x5634bc7e1000 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Nov 24 09:26:33 compute-1 ceph-osd[77497]: bdev(0x5634bc7e1000 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Nov 24 09:26:33 compute-1 ceph-osd[77497]: bdev(0x5634bc7e1000 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 24 09:26:33 compute-1 ceph-osd[77497]: bdev(0x5634bc7e1000 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
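
The F_SET_FILE_RW_HINT failure and the st_blksize warning above are benign on this virtio disk: BlueStore keeps its own 4096-byte bdev_block_size even when the backing device reports a 512-byte st_blksize, exactly as the message says. A sketch to read the same value the OSD saw (requires access to the device node):

    import os

    path = "/var/lib/ceph/osd/ceph-1/block"   # device path from the lines above
    print("st_blksize:", os.stat(path).st_blksize)
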
Nov 24 09:26:33 compute-1 ceph-osd[77497]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 20 GiB
Nov 24 09:26:33 compute-1 ceph-osd[77497]: bluefs mount
Nov 24 09:26:33 compute-1 ceph-osd[77497]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Nov 24 09:26:33 compute-1 ceph-osd[77497]: bluefs mount final locked allocations 0 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Nov 24 09:26:33 compute-1 ceph-osd[77497]: bluefs mount final locked allocations 1 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Nov 24 09:26:33 compute-1 ceph-osd[77497]: bluefs mount final locked allocations 2 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Nov 24 09:26:33 compute-1 ceph-osd[77497]: bluefs mount final locked allocations 3 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Nov 24 09:26:33 compute-1 ceph-osd[77497]: bluefs mount final locked allocations 4 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Nov 24 09:26:33 compute-1 ceph-osd[77497]: bluefs mount shared_bdev_used = 4718592
Nov 24 09:26:33 compute-1 ceph-osd[77497]: bluestore(/var/lib/ceph/osd/ceph-1) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
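
The two db_paths entries above are both sized at 20397110067 bytes, which works out to almost exactly 95% of the 21470642176-byte device opened earlier. The 95% ratio is an inference from these two numbers, not something the log states:

    device_size = 21470642176    # bdev open size, earlier in this log
    db_path_size = 20397110067   # from _prepare_db_environment above

    print(f"ratio: {db_path_size / device_size:.4f}")   # 0.9500
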
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: RocksDB version: 7.9.2
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Git sha 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Compile date 2025-07-17 03:12:14
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: DB SUMMARY
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: DB Session ID:  02OCALY6G9ZWHVIYWD2P
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: CURRENT file:  CURRENT
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: IDENTITY file:  IDENTITY
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5097
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                         Options.error_if_exists: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                       Options.create_if_missing: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                         Options.paranoid_checks: 1
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:             Options.flush_verify_memtable_count: 1
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                                     Options.env: 0x5634bc94e310
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                                      Options.fs: LegacyFileSystem
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                                Options.info_log: 0x5634bc7b5b20
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                Options.max_file_opening_threads: 16
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                              Options.statistics: (nil)
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                               Options.use_fsync: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                       Options.max_log_file_size: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                   Options.log_file_time_to_roll: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                       Options.keep_log_file_num: 1000
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                    Options.recycle_log_file_num: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                         Options.allow_fallocate: 1
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                        Options.allow_mmap_reads: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                       Options.allow_mmap_writes: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                        Options.use_direct_reads: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:          Options.create_missing_column_families: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                              Options.db_log_dir: 
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                                 Options.wal_dir: db.wal
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                Options.table_cache_numshardbits: 6
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                         Options.WAL_ttl_seconds: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                       Options.WAL_size_limit_MB: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:             Options.manifest_preallocation_size: 4194304
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                     Options.is_fd_close_on_exec: 1
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                   Options.advise_random_on_open: 1
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                    Options.db_write_buffer_size: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                    Options.write_buffer_manager: 0x5634bc8aaa00
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:         Options.access_hint_on_compaction_start: 1
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                      Options.use_adaptive_mutex: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                            Options.rate_limiter: (nil)
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                       Options.wal_recovery_mode: 2
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                  Options.enable_thread_tracking: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                  Options.enable_pipelined_write: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                  Options.unordered_write: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:             Options.write_thread_max_yield_usec: 100
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                               Options.row_cache: None
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                              Options.wal_filter: None
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:             Options.avoid_flush_during_recovery: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:             Options.allow_ingest_behind: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:             Options.two_write_queues: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:             Options.manual_wal_flush: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:             Options.wal_compression: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:             Options.atomic_flush: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                 Options.persist_stats_to_disk: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                 Options.write_dbid_to_manifest: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                 Options.log_readahead_size: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                 Options.best_efforts_recovery: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:             Options.allow_data_in_errors: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:             Options.db_host_id: __hostname__
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:             Options.enforce_single_del_contracts: true
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:             Options.max_background_jobs: 4
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:             Options.max_background_compactions: -1
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:             Options.max_subcompactions: 1
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:           Options.writable_file_max_buffer_size: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:             Options.delayed_write_rate : 16777216
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:             Options.max_total_wal_size: 1073741824
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                   Options.stats_dump_period_sec: 600
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                 Options.stats_persist_period_sec: 600
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                          Options.max_open_files: -1
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                          Options.bytes_per_sync: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                      Options.wal_bytes_per_sync: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                   Options.strict_bytes_per_sync: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:       Options.compaction_readahead_size: 2097152
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                  Options.max_background_flushes: -1
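
For reference, Options.wal_recovery_mode: 2 above (and the earlier "Recovering log #31 mode 2") refer to RocksDB's WALRecoveryMode enum, whose values are:

    # WALRecoveryMode values from RocksDB's public options header.
    WAL_RECOVERY_MODES = {
        0: "kTolerateCorruptedTailRecords",
        1: "kAbsoluteConsistency",
        2: "kPointInTimeRecovery",      # the mode in use here
        3: "kSkipAnyCorruptedRecords",
    }
    print(WAL_RECOVERY_MODES[2])
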
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Compression algorithms supported:
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:         kZSTD supported: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:         kXpressCompression supported: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:         kBZip2Compression supported: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:         kZSTDNotFinalCompression supported: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:         kLZ4Compression supported: 1
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:         kZlibCompression supported: 1
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:         kLZ4HCCompression supported: 1
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:         kSnappyCompression supported: 1
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Fast CRC32 supported: Supported on x86
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: DMutex implementation: pthread_mutex_t
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:        Options.compaction_filter: None
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:        Options.compaction_filter_factory: None
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:  Options.sst_partitioner_factory: None
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5634bc7b5680)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5634bb9db350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
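
The BinnedLRUCache settings above imply a sharded cache: a capacity of 483183820 bytes split across 2^4 shards. The arithmetic, using only values from this dump (the 45% remark is an observation, since 483183820 is almost exactly 0.45 x 2^30):

    capacity = 483183820        # block_cache_options.capacity above
    num_shard_bits = 4

    shards = 1 << num_shard_bits   # 16 shards
    print(f"total: {capacity / 2**20:.1f} MiB")               # 460.8 MiB (~45% of 1 GiB)
    print(f"per shard: {capacity / shards / 2**20:.1f} MiB")  # 28.8 MiB
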
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:        Options.write_buffer_size: 16777216
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:  Options.max_write_buffer_number: 64
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:          Options.compression: LZ4
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:       Options.prefix_extractor: nullptr
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:             Options.num_levels: 7
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                  Options.compression_opts.level: 32767
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:               Options.compression_opts.strategy: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                  Options.compression_opts.enabled: false
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                        Options.arena_block_size: 1048576
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                Options.disable_auto_compactions: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                   Options.inplace_update_support: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                           Options.bloom_locality: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                    Options.max_successive_merges: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                Options.paranoid_file_checks: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                Options.force_consistency_checks: 1
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                Options.report_bg_io_stats: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                               Options.ttl: 2592000
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                       Options.enable_blob_files: false
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                           Options.min_blob_size: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                          Options.blob_file_size: 268435456
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                Options.blob_file_starting_level: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
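
With max_bytes_for_level_base at 1 GiB, an 8x multiplier, and level_compaction_dynamic_level_bytes off, the nominal per-level capacities grow geometrically. A quick sketch from those values:

    base = 1 << 30   # max_bytes_for_level_base: 1 GiB
    mult = 8         # max_bytes_for_level_multiplier
    levels = 7       # Options.num_levels

    for level in range(1, levels):
        cap = base * mult ** (level - 1)
        print(f"L{level}: {cap / 2**30:g} GiB")
    # L1: 1 GiB, L2: 8 GiB, L3: 64 GiB, and so on: far beyond this
    # 20 GiB device, so data here never reaches the deeper levels.
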
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:           Options.merge_operator: None
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:        Options.compaction_filter: None
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:        Options.compaction_filter_factory: None
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:  Options.sst_partitioner_factory: None
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5634bc7b5680)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5634bb9db350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:        Options.write_buffer_size: 16777216
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:  Options.max_write_buffer_number: 64
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:          Options.compression: LZ4
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:       Options.prefix_extractor: nullptr
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:             Options.num_levels: 7
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                  Options.compression_opts.level: 32767
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:               Options.compression_opts.strategy: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                  Options.compression_opts.enabled: false
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                        Options.arena_block_size: 1048576
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                Options.disable_auto_compactions: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                   Options.inplace_update_support: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                           Options.bloom_locality: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                    Options.max_successive_merges: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                Options.paranoid_file_checks: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                Options.force_consistency_checks: 1
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                Options.report_bg_io_stats: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                               Options.ttl: 2592000
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                       Options.enable_blob_files: false
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                           Options.min_blob_size: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                          Options.blob_file_size: 268435456
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                Options.blob_file_starting_level: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
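
The table_properties_collectors line above configures CompactOnDeletionCollector with a 32768-entry sliding window and a 16384-deletion trigger (the ratio check is 0, i.e. disabled). A simplified model of that windowed trigger, not RocksDB's actual implementation:

    from collections import deque

    def needs_compaction(entries, window=32768, trigger=16384):
        """entries: iterable of booleans, True for a tombstone."""
        recent = deque(maxlen=window)
        deletes = 0
        for is_delete in entries:
            if len(recent) == window:
                deletes -= recent[0]   # entry about to fall out of the window
            recent.append(is_delete)
            deletes += is_delete
            if deletes >= trigger:
                return True            # file would be marked for compaction
        return False

    print(needs_compaction([True] * 20000))   # True
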
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:           Options.merge_operator: None
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:        Options.compaction_filter: None
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:        Options.compaction_filter_factory: None
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:  Options.sst_partitioner_factory: None
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5634bc7b5680)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5634bb9db350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:        Options.write_buffer_size: 16777216
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:  Options.max_write_buffer_number: 64
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:          Options.compression: LZ4
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:       Options.prefix_extractor: nullptr
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:             Options.num_levels: 7
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                  Options.compression_opts.level: 32767
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:               Options.compression_opts.strategy: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                  Options.compression_opts.enabled: false
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                        Options.arena_block_size: 1048576
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                Options.disable_auto_compactions: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                   Options.inplace_update_support: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                           Options.bloom_locality: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                    Options.max_successive_merges: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                Options.paranoid_file_checks: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                Options.force_consistency_checks: 1
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                Options.report_bg_io_stats: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                               Options.ttl: 2592000
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                       Options.enable_blob_files: false
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                           Options.min_blob_size: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                          Options.blob_file_size: 268435456
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                Options.blob_file_starting_level: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:           Options.merge_operator: None
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:        Options.compaction_filter: None
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:        Options.compaction_filter_factory: None
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:  Options.sst_partitioner_factory: None
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5634bc7b5680)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5634bb9db350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:        Options.write_buffer_size: 16777216
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:  Options.max_write_buffer_number: 64
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:          Options.compression: LZ4
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:       Options.prefix_extractor: nullptr
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:             Options.num_levels: 7
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                  Options.compression_opts.level: 32767
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:               Options.compression_opts.strategy: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                  Options.compression_opts.enabled: false
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                        Options.arena_block_size: 1048576
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                Options.disable_auto_compactions: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                   Options.inplace_update_support: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                           Options.bloom_locality: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                    Options.max_successive_merges: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                Options.paranoid_file_checks: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                Options.force_consistency_checks: 1
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                Options.report_bg_io_stats: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                               Options.ttl: 2592000
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                       Options.enable_blob_files: false
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                           Options.min_blob_size: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                          Options.blob_file_size: 268435456
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                Options.blob_file_starting_level: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:           Options.merge_operator: None
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:        Options.compaction_filter: None
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:        Options.compaction_filter_factory: None
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:  Options.sst_partitioner_factory: None
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5634bc7b5680)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5634bb9db350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:        Options.write_buffer_size: 16777216
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:  Options.max_write_buffer_number: 64
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:          Options.compression: LZ4
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:       Options.prefix_extractor: nullptr
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:             Options.num_levels: 7
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                  Options.compression_opts.level: 32767
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:               Options.compression_opts.strategy: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                  Options.compression_opts.enabled: false
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                        Options.arena_block_size: 1048576
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                Options.disable_auto_compactions: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                   Options.inplace_update_support: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                           Options.bloom_locality: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                    Options.max_successive_merges: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                Options.paranoid_file_checks: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                Options.force_consistency_checks: 1
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                Options.report_bg_io_stats: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                               Options.ttl: 2592000
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                       Options.enable_blob_files: false
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                           Options.min_blob_size: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                          Options.blob_file_size: 268435456
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                Options.blob_file_starting_level: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:           Options.merge_operator: None
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:        Options.compaction_filter: None
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:        Options.compaction_filter_factory: None
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:  Options.sst_partitioner_factory: None
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5634bc7b5680)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5634bb9db350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:        Options.write_buffer_size: 16777216
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:  Options.max_write_buffer_number: 64
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:          Options.compression: LZ4
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:       Options.prefix_extractor: nullptr
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:             Options.num_levels: 7
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                  Options.compression_opts.level: 32767
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:               Options.compression_opts.strategy: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                  Options.compression_opts.enabled: false
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                        Options.arena_block_size: 1048576
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                Options.disable_auto_compactions: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                   Options.inplace_update_support: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                           Options.bloom_locality: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                    Options.max_successive_merges: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                Options.paranoid_file_checks: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                Options.force_consistency_checks: 1
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                Options.report_bg_io_stats: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                               Options.ttl: 2592000
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                       Options.enable_blob_files: false
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                           Options.min_blob_size: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                          Options.blob_file_size: 268435456
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                Options.blob_file_starting_level: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:           Options.merge_operator: None
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:        Options.compaction_filter: None
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:        Options.compaction_filter_factory: None
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:  Options.sst_partitioner_factory: None
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5634bc7b5680)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5634bb9db350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:        Options.write_buffer_size: 16777216
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:  Options.max_write_buffer_number: 64
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:          Options.compression: LZ4
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:       Options.prefix_extractor: nullptr
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:             Options.num_levels: 7
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                  Options.compression_opts.level: 32767
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:               Options.compression_opts.strategy: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                  Options.compression_opts.enabled: false
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                        Options.arena_block_size: 1048576
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                Options.disable_auto_compactions: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                   Options.inplace_update_support: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                           Options.bloom_locality: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                    Options.max_successive_merges: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                Options.paranoid_file_checks: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                Options.force_consistency_checks: 1
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                Options.report_bg_io_stats: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                               Options.ttl: 2592000
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                       Options.enable_blob_files: false
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                           Options.min_blob_size: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                          Options.blob_file_size: 268435456
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                Options.blob_file_starting_level: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:           Options.merge_operator: None
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:        Options.compaction_filter: None
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:        Options.compaction_filter_factory: None
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:  Options.sst_partitioner_factory: None
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5634bc7b5ac0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5634bb9da9b0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:        Options.write_buffer_size: 16777216
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:  Options.max_write_buffer_number: 64
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:          Options.compression: LZ4
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:       Options.prefix_extractor: nullptr
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:             Options.num_levels: 7
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                  Options.compression_opts.level: 32767
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:               Options.compression_opts.strategy: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                  Options.compression_opts.enabled: false
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                        Options.arena_block_size: 1048576
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                Options.disable_auto_compactions: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                   Options.inplace_update_support: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                           Options.bloom_locality: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                    Options.max_successive_merges: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                Options.paranoid_file_checks: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                Options.force_consistency_checks: 1
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                Options.report_bg_io_stats: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                               Options.ttl: 2592000
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                       Options.enable_blob_files: false
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                           Options.min_blob_size: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                          Options.blob_file_size: 268435456
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                Options.blob_file_starting_level: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:           Options.merge_operator: None
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:        Options.compaction_filter: None
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:        Options.compaction_filter_factory: None
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:  Options.sst_partitioner_factory: None
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5634bc7b5ac0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5634bb9da9b0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:        Options.write_buffer_size: 16777216
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:  Options.max_write_buffer_number: 64
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:          Options.compression: LZ4
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:       Options.prefix_extractor: nullptr
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:             Options.num_levels: 7
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                  Options.compression_opts.level: 32767
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:               Options.compression_opts.strategy: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                  Options.compression_opts.enabled: false
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                        Options.arena_block_size: 1048576
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                Options.disable_auto_compactions: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                   Options.inplace_update_support: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                           Options.bloom_locality: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                    Options.max_successive_merges: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                Options.paranoid_file_checks: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                Options.force_consistency_checks: 1
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                Options.report_bg_io_stats: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                               Options.ttl: 2592000
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                       Options.enable_blob_files: false
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                           Options.min_blob_size: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                          Options.blob_file_size: 268435456
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                Options.blob_file_starting_level: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:           Options.merge_operator: None
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:        Options.compaction_filter: None
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:        Options.compaction_filter_factory: None
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:  Options.sst_partitioner_factory: None
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5634bc7b5ac0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5634bb9da9b0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:        Options.write_buffer_size: 16777216
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:  Options.max_write_buffer_number: 64
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:          Options.compression: LZ4
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:       Options.prefix_extractor: nullptr
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:             Options.num_levels: 7
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                  Options.compression_opts.level: 32767
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:               Options.compression_opts.strategy: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                  Options.compression_opts.enabled: false
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                        Options.arena_block_size: 1048576
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                Options.disable_auto_compactions: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                   Options.inplace_update_support: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                           Options.bloom_locality: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                    Options.max_successive_merges: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                Options.paranoid_file_checks: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                Options.force_consistency_checks: 1
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                Options.report_bg_io_stats: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                               Options.ttl: 2592000
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                       Options.enable_blob_files: false
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                           Options.min_blob_size: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                          Options.blob_file_size: 268435456
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb:                Options.blob_file_starting_level: 0
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
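[Editor's note] The dumps above repeat the same BlockBasedTable and compaction settings for each column family, and RocksDB stops echoing them once it reaches the later families ("skipping printing options"). When auditing a capture like this, it can help to parse the "Options.<name>: <value>" lines per family and diff them mechanically. A minimal sketch in Python: the header and option-line formats are taken from the log itself, while the function names and the capture file name are illustrative assumptions.

    import re
    from collections import OrderedDict

    # Matches the per-CF header, e.g. "--------------- Options for column family [O-0]:"
    CF_HEADER = re.compile(r"Options for column family \[([^\]]+)\]")
    # Matches option lines, e.g. "Options.write_buffer_size: 16777216"
    OPT_LINE = re.compile(r"Options\.([\w.\[\]]+):\s*(.+?)\s*$")

    def parse_cf_options(journal_path):
        """Return {cf_name: {option: value}} scraped from a journal capture."""
        cfs, current = OrderedDict(), None
        with open(journal_path) as fh:
            for line in fh:
                m = CF_HEADER.search(line)
                if m:
                    current = cfs.setdefault(m.group(1), OrderedDict())
                    continue
                if current is not None:
                    m = OPT_LINE.search(line)
                    if m:
                        current[m.group(1)] = m.group(2)
        return cfs

    # Example: confirm O-0 and O-1 were opened with identical settings.
    # opts = parse_cf_options("osd.1-journal.txt")   # hypothetical capture file
    # print(opts["O-0"] == opts["O-1"])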
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file: db/MANIFEST-000032 succeeded, manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5, prev_log_number is 0, max_column_family is 11, min_log_number_to_keep is 5
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: bb65118b-1a2a-4ee6-a16f-d5932cc21adb
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763976393442615, "job": 1, "event": "recovery_started", "wal_files": [31]}
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763976393449012, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 35, "file_size": 1272, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13, "largest_seqno": 21, "table_properties": {"data_size": 128, "index_size": 27, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 87, "raw_average_key_size": 17, "raw_value_size": 82, "raw_average_value_size": 16, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 2, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": ".T:int64_array.b:bitwise_xor", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763976393, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bb65118b-1a2a-4ee6-a16f-d5932cc21adb", "db_session_id": "02OCALY6G9ZWHVIYWD2P", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763976393451680, "cf_name": "p-0", "job": 1, "event": "table_file_creation", "file_number": 36, "file_size": 1595, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14, "largest_seqno": 15, "table_properties": {"data_size": 469, "index_size": 39, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 72, "raw_average_key_size": 36, "raw_value_size": 571, "raw_average_value_size": 285, "num_data_blocks": 1, "num_entries": 2, "num_filter_entries": 2, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "p-0", "column_family_id": 4, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763976393, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bb65118b-1a2a-4ee6-a16f-d5932cc21adb", "db_session_id": "02OCALY6G9ZWHVIYWD2P", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763976393453849, "cf_name": "O-2", "job": 1, "event": "table_file_creation", "file_number": 37, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16, "largest_seqno": 16, "table_properties": {"data_size": 121, "index_size": 64, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 55, "raw_average_key_size": 55, "raw_value_size": 50, "raw_average_value_size": 50, "num_data_blocks": 1, "num_entries": 1, "num_filter_entries": 1, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "O-2", "column_family_id": 9, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763976393, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bb65118b-1a2a-4ee6-a16f-d5932cc21adb", "db_session_id": "02OCALY6G9ZWHVIYWD2P", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763976393455263, "job": 1, "event": "recovery_finished"}
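[Editor's note] Each EVENT_LOG_v1 line above carries a JSON payload after the fixed "EVENT_LOG_v1 " marker; time_micros is microseconds since the Unix epoch, so 1763976393442615 corresponds to Nov 24 09:26:33 UTC, matching the journal timestamps. A small sketch, again with a hypothetical capture file name, that extracts those payloads for analysis:

    import json

    def iter_rocksdb_events(journal_path):
        """Yield parsed EVENT_LOG_v1 payloads (dicts) from a journal capture."""
        marker = "EVENT_LOG_v1 "
        with open(journal_path) as fh:
            for line in fh:
                idx = line.find(marker)
                if idx != -1:
                    yield json.loads(line[idx + len(marker):])

    # Example: list the table files written during WAL recovery, as above.
    # for ev in iter_rocksdb_events("osd.1-journal.txt"):   # hypothetical file
    #     if ev.get("event") == "table_file_creation":
    #         print(ev["cf_name"], ev["file_number"], ev["file_size"])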
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: [db/version_set.cc:5047] Creating manifest 40
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x5634bc9b2000
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: DB pointer 0x5634bc95c000
Nov 24 09:26:33 compute-1 ceph-osd[77497]: bluestore(/var/lib/ceph/osd/ceph-1) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
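[Editor's note] The option string that _open_db reports is the comma-separated key=value form in which BlueStore hands RocksDB its tuning (the same format Ceph accepts in the bluestore_rocksdb_options setting). A sketch that splits it into a dict for inspection; it assumes flat k=v pairs, which holds for the string above but not for nested values wrapped in braces:

    def parse_rocksdb_option_string(s):
        """Split a flat 'k=v,k=v,...' RocksDB option string into a dict."""
        out = {}
        for item in s.split(","):
            key, _, value = item.partition("=")
            out[key.strip()] = value.strip()
        return out

    # The string reported by _open_db above:
    opts = parse_rocksdb_option_string(
        "compression=kLZ4Compression,max_write_buffer_number=64,"
        "min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,"
        "write_buffer_size=16777216,max_background_jobs=4,"
        "level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,"
        "max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,"
        "max_total_wal_size=1073741824,writable_file_max_buffer_size=0"
    )
    assert opts["write_buffer_size"] == "16777216"  # matches the per-CF dumps above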
Nov 24 09:26:33 compute-1 ceph-osd[77497]: bluestore(/var/lib/ceph/osd/ceph-1) _upgrade_super from 4, latest 4
Nov 24 09:26:33 compute-1 ceph-osd[77497]: bluestore(/var/lib/ceph/osd/ceph-1) _upgrade_super done
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 24 09:26:33 compute-1 ceph-osd[77497]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
                                           Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                           Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5634bb9db350#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
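[Editor's note] One value in the block cache line above deserves a flag: occupancy: 18446744073709551615 is exactly 2**64 - 1, so it is almost certainly an unsigned 64-bit counter decremented past zero (or left uninitialized) rather than a real entry count; the entry stats on the next line (3 filter blocks, 3 index blocks, 1 misc) give the plausible figure. Note also that these stats describe the cache instance at 0x5634bb9db350, a different object from the 0x5634bb9da9b0 block cache in the per-family dumps. Quick arithmetic check:

    # 18446744073709551615 == 2**64 - 1, the maximum unsigned 64-bit value;
    # a genuine occupancy that large in a 460.80 MB cache holding ~0.67 KB
    # of entries is impossible, so the counter has wrapped.
    assert 18446744073709551615 == 2**64 - 1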
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5634bb9db350#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5634bb9db350#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5634bb9db350#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.6      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.6      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.6      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.6      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.03 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.03 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5634bb9db350#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5634bb9db350#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5634bb9db350#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5634bb9da9b0#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 9e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5634bb9da9b0#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 9e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.6      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.6      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.6      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.6      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.03 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.03 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5634bb9da9b0#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 9e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5634bb9db350#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5634bb9db350#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
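
[Editor's note: the block above is a RocksDB statistics dump that ceph-osd folds into the journal, one stanza per BlueStore column family (default, m-*, p-*, O-*, L, P). To pull the per-column-family "Sum" rows out of a saved excerpt for quick comparison, a minimal sketch follows; the section-header pattern matches the dump above, and the log file name "osd.log" is a hypothetical placeholder.]

    import re
    from collections import OrderedDict

    # Minimal sketch: extract the per-column-family "Sum" rows from a saved
    # journal excerpt containing "** Compaction Stats [cf] **" dumps like the
    # one above. The file name is an assumption for illustration.
    HEADER = re.compile(r"\*\* Compaction Stats \[(?P<cf>[^\]]+)\] \*\*")

    def sum_rows(path):
        """Return {column_family: raw 'Sum' line} from the Level-keyed tables."""
        out = OrderedDict()
        current = None
        with open(path) as fh:
            for line in fh:
                m = HEADER.search(line)
                if m:
                    current = m.group("cf")
                    continue
                if current and line.strip().startswith("Sum"):
                    # Keep the first Sum row seen per CF (the Level table);
                    # the second table for the same CF is keyed by Priority
                    # and has no Sum row in this dump.
                    out.setdefault(current, line.strip())
        return out

    if __name__ == "__main__":
        for cf, row in sum_rows("osd.log").items():
            print(cf, "->", row)
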
Nov 24 09:26:33 compute-1 ceph-osd[77497]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/19.2.3/rpm/el9/BUILD/ceph-19.2.3/src/cls/cephfs/cls_cephfs.cc:201: loading cephfs
Nov 24 09:26:33 compute-1 ceph-osd[77497]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/19.2.3/rpm/el9/BUILD/ceph-19.2.3/src/cls/hello/cls_hello.cc:316: loading cls_hello
Nov 24 09:26:33 compute-1 ceph-osd[77497]: _get_class not permitted to load lua
Nov 24 09:26:33 compute-1 ceph-osd[77497]: _get_class not permitted to load sdk
Nov 24 09:26:33 compute-1 ceph-osd[77497]: osd.1 0 crush map has features 288232575208783872, adjusting msgr requires for clients
Nov 24 09:26:33 compute-1 ceph-osd[77497]: osd.1 0 crush map has features 288232575208783872 was 8705, adjusting msgr requires for mons
Nov 24 09:26:33 compute-1 ceph-osd[77497]: osd.1 0 crush map has features 288232575208783872, adjusting msgr requires for osds
Nov 24 09:26:33 compute-1 ceph-osd[77497]: osd.1 0 check_osdmap_features enabling on-disk ERASURE CODES compat feature
Nov 24 09:26:33 compute-1 ceph-osd[77497]: osd.1 0 load_pgs
Nov 24 09:26:33 compute-1 ceph-osd[77497]: osd.1 0 load_pgs opened 0 pgs
Nov 24 09:26:33 compute-1 ceph-osd[77497]: osd.1 0 log_to_monitors true
Nov 24 09:26:33 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-osd-1[77493]: 2025-11-24T09:26:33.478+0000 7f344d029740 -1 osd.1 0 log_to_monitors true
Nov 24 09:26:33 compute-1 podman[78331]: 2025-11-24 09:26:33.888366658 +0000 UTC m=+0.054157862 container exec fca3d6a645ca50145f34396c21cf8798c75622ec7e27bb7d7b9d2df471762abc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-crash-compute-1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.build-date=20250325)
Nov 24 09:26:33 compute-1 podman[78331]: 2025-11-24 09:26:33.999820374 +0000 UTC m=+0.165611578 container exec_died fca3d6a645ca50145f34396c21cf8798c75622ec7e27bb7d7b9d2df471762abc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-crash-compute-1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 24 09:26:34 compute-1 sudo[78015]: pam_unix(sudo:session): session closed for user root
Nov 24 09:26:34 compute-1 sudo[78380]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 09:26:34 compute-1 sudo[78380]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:26:34 compute-1 sudo[78380]: pam_unix(sudo:session): session closed for user root
Nov 24 09:26:34 compute-1 sudo[78405]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 84a084c3-61a7-5de7-8207-1f88efa59a64 -- inventory --format=json-pretty --filter-for-batch
Nov 24 09:26:34 compute-1 sudo[78405]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:26:34 compute-1 ceph-osd[77497]: log_channel(cluster) log [DBG] : purged_snaps scrub starts
Nov 24 09:26:34 compute-1 ceph-osd[77497]: log_channel(cluster) log [DBG] : purged_snaps scrub ok
Nov 24 09:26:34 compute-1 podman[78470]: 2025-11-24 09:26:34.672164834 +0000 UTC m=+0.040493242 container create c15201c72d9e209f06e336496e0d7bd82929f981090144ce3a0bdefd0098aed9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_kare, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Nov 24 09:26:34 compute-1 systemd[1]: Started libpod-conmon-c15201c72d9e209f06e336496e0d7bd82929f981090144ce3a0bdefd0098aed9.scope.
Nov 24 09:26:34 compute-1 systemd[1]: Started libcrun container.
Nov 24 09:26:34 compute-1 podman[78470]: 2025-11-24 09:26:34.654012192 +0000 UTC m=+0.022340630 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 09:26:34 compute-1 podman[78470]: 2025-11-24 09:26:34.7636407 +0000 UTC m=+0.131969158 container init c15201c72d9e209f06e336496e0d7bd82929f981090144ce3a0bdefd0098aed9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_kare, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 24 09:26:34 compute-1 podman[78470]: 2025-11-24 09:26:34.771694825 +0000 UTC m=+0.140023233 container start c15201c72d9e209f06e336496e0d7bd82929f981090144ce3a0bdefd0098aed9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_kare, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1)
Nov 24 09:26:34 compute-1 podman[78470]: 2025-11-24 09:26:34.775337119 +0000 UTC m=+0.143665547 container attach c15201c72d9e209f06e336496e0d7bd82929f981090144ce3a0bdefd0098aed9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_kare, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 09:26:34 compute-1 gracious_kare[78486]: 167 167
Nov 24 09:26:34 compute-1 systemd[1]: libpod-c15201c72d9e209f06e336496e0d7bd82929f981090144ce3a0bdefd0098aed9.scope: Deactivated successfully.
Nov 24 09:26:34 compute-1 podman[78470]: 2025-11-24 09:26:34.778491575 +0000 UTC m=+0.146819983 container died c15201c72d9e209f06e336496e0d7bd82929f981090144ce3a0bdefd0098aed9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_kare, io.buildah.version=1.40.1, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS)
Nov 24 09:26:34 compute-1 systemd[1]: var-lib-containers-storage-overlay-d72de2e3386eb9030aa324f237758c4afe4b791f970f0aefa030a2fc25dee66d-merged.mount: Deactivated successfully.
Nov 24 09:26:34 compute-1 podman[78470]: 2025-11-24 09:26:34.810818538 +0000 UTC m=+0.179146946 container remove c15201c72d9e209f06e336496e0d7bd82929f981090144ce3a0bdefd0098aed9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_kare, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 24 09:26:34 compute-1 systemd[1]: libpod-conmon-c15201c72d9e209f06e336496e0d7bd82929f981090144ce3a0bdefd0098aed9.scope: Deactivated successfully.
Nov 24 09:26:34 compute-1 podman[78511]: 2025-11-24 09:26:34.986731945 +0000 UTC m=+0.062759477 container create 0202949abe1f024b558f56c18741e5987044a2f7ee566683a619f49212517032 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_galileo, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1)
Nov 24 09:26:35 compute-1 systemd[1]: Started libpod-conmon-0202949abe1f024b558f56c18741e5987044a2f7ee566683a619f49212517032.scope.
Nov 24 09:26:35 compute-1 systemd[1]: Started libcrun container.
Nov 24 09:26:35 compute-1 podman[78511]: 2025-11-24 09:26:34.952865181 +0000 UTC m=+0.028892783 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 09:26:35 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a1bf355fd80ae25b5251dda1c4c8fac9704308a181e82422f4d636960975f58/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 09:26:35 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a1bf355fd80ae25b5251dda1c4c8fac9704308a181e82422f4d636960975f58/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 09:26:35 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a1bf355fd80ae25b5251dda1c4c8fac9704308a181e82422f4d636960975f58/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 09:26:35 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a1bf355fd80ae25b5251dda1c4c8fac9704308a181e82422f4d636960975f58/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 09:26:35 compute-1 podman[78511]: 2025-11-24 09:26:35.059897417 +0000 UTC m=+0.135924949 container init 0202949abe1f024b558f56c18741e5987044a2f7ee566683a619f49212517032 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_galileo, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 24 09:26:35 compute-1 podman[78511]: 2025-11-24 09:26:35.068190987 +0000 UTC m=+0.144218489 container start 0202949abe1f024b558f56c18741e5987044a2f7ee566683a619f49212517032 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_galileo, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 24 09:26:35 compute-1 podman[78511]: 2025-11-24 09:26:35.071238519 +0000 UTC m=+0.147266041 container attach 0202949abe1f024b558f56c18741e5987044a2f7ee566683a619f49212517032 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_galileo, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 24 09:26:35 compute-1 ceph-osd[77497]: osd.1 0 done with init, starting boot process
Nov 24 09:26:35 compute-1 ceph-osd[77497]: osd.1 0 start_boot
Nov 24 09:26:35 compute-1 ceph-osd[77497]: osd.1 0 maybe_override_options_for_qos osd_max_backfills set to 1
Nov 24 09:26:35 compute-1 ceph-osd[77497]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active set to 0
Nov 24 09:26:35 compute-1 ceph-osd[77497]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active_hdd set to 3
Nov 24 09:26:35 compute-1 ceph-osd[77497]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active_ssd set to 10
Nov 24 09:26:35 compute-1 ceph-osd[77497]: osd.1 0  bench count 12288000 bsize 4 KiB
Nov 24 09:26:35 compute-1 naughty_galileo[78527]: [
Nov 24 09:26:35 compute-1 naughty_galileo[78527]:     {
Nov 24 09:26:35 compute-1 naughty_galileo[78527]:         "available": false,
Nov 24 09:26:35 compute-1 naughty_galileo[78527]:         "being_replaced": false,
Nov 24 09:26:35 compute-1 naughty_galileo[78527]:         "ceph_device_lvm": false,
Nov 24 09:26:35 compute-1 naughty_galileo[78527]:         "device_id": "QEMU_DVD-ROM_QM00001",
Nov 24 09:26:35 compute-1 naughty_galileo[78527]:         "lsm_data": {},
Nov 24 09:26:35 compute-1 naughty_galileo[78527]:         "lvs": [],
Nov 24 09:26:35 compute-1 naughty_galileo[78527]:         "path": "/dev/sr0",
Nov 24 09:26:35 compute-1 naughty_galileo[78527]:         "rejected_reasons": [
Nov 24 09:26:35 compute-1 naughty_galileo[78527]:             "Has a FileSystem",
Nov 24 09:26:35 compute-1 naughty_galileo[78527]:             "Insufficient space (<5GB)"
Nov 24 09:26:35 compute-1 naughty_galileo[78527]:         ],
Nov 24 09:26:35 compute-1 naughty_galileo[78527]:         "sys_api": {
Nov 24 09:26:35 compute-1 naughty_galileo[78527]:             "actuators": null,
Nov 24 09:26:35 compute-1 naughty_galileo[78527]:             "device_nodes": [
Nov 24 09:26:35 compute-1 naughty_galileo[78527]:                 "sr0"
Nov 24 09:26:35 compute-1 naughty_galileo[78527]:             ],
Nov 24 09:26:35 compute-1 naughty_galileo[78527]:             "devname": "sr0",
Nov 24 09:26:35 compute-1 naughty_galileo[78527]:             "human_readable_size": "482.00 KB",
Nov 24 09:26:35 compute-1 naughty_galileo[78527]:             "id_bus": "ata",
Nov 24 09:26:35 compute-1 naughty_galileo[78527]:             "model": "QEMU DVD-ROM",
Nov 24 09:26:35 compute-1 naughty_galileo[78527]:             "nr_requests": "2",
Nov 24 09:26:35 compute-1 naughty_galileo[78527]:             "parent": "/dev/sr0",
Nov 24 09:26:35 compute-1 naughty_galileo[78527]:             "partitions": {},
Nov 24 09:26:35 compute-1 naughty_galileo[78527]:             "path": "/dev/sr0",
Nov 24 09:26:35 compute-1 naughty_galileo[78527]:             "removable": "1",
Nov 24 09:26:35 compute-1 naughty_galileo[78527]:             "rev": "2.5+",
Nov 24 09:26:35 compute-1 naughty_galileo[78527]:             "ro": "0",
Nov 24 09:26:35 compute-1 naughty_galileo[78527]:             "rotational": "1",
Nov 24 09:26:35 compute-1 naughty_galileo[78527]:             "sas_address": "",
Nov 24 09:26:35 compute-1 naughty_galileo[78527]:             "sas_device_handle": "",
Nov 24 09:26:35 compute-1 naughty_galileo[78527]:             "scheduler_mode": "mq-deadline",
Nov 24 09:26:35 compute-1 naughty_galileo[78527]:             "sectors": 0,
Nov 24 09:26:35 compute-1 naughty_galileo[78527]:             "sectorsize": "2048",
Nov 24 09:26:35 compute-1 naughty_galileo[78527]:             "size": 493568.0,
Nov 24 09:26:35 compute-1 naughty_galileo[78527]:             "support_discard": "2048",
Nov 24 09:26:35 compute-1 naughty_galileo[78527]:             "type": "disk",
Nov 24 09:26:35 compute-1 naughty_galileo[78527]:             "vendor": "QEMU"
Nov 24 09:26:35 compute-1 naughty_galileo[78527]:         }
Nov 24 09:26:35 compute-1 naughty_galileo[78527]:     }
Nov 24 09:26:35 compute-1 naughty_galileo[78527]: ]
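The JSON above has the shape of `ceph-volume inventory --format json` output (an assumption based on the fields; the podman-assigned container name naughty_galileo hides the exact invocation). Given a copy of it on disk, two jq filters are enough to separate usable devices from rejected ones; a minimal sketch, assuming the blob was saved as inventory.json:

    # paths of devices cephadm could consume
    jq -r '.[] | select(.available) | .path' inventory.json
    # rejected devices with their reasons, e.g. "/dev/sr0: Has a FileSystem, Insufficient space (<5GB)"
    jq -r '.[] | select(.available | not) | "\(.path): \(.rejected_reasons | join(", "))"' inventory.json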
Nov 24 09:26:35 compute-1 systemd[1]: libpod-0202949abe1f024b558f56c18741e5987044a2f7ee566683a619f49212517032.scope: Deactivated successfully.
Nov 24 09:26:35 compute-1 podman[79665]: 2025-11-24 09:26:35.764668591 +0000 UTC m=+0.020755127 container died 0202949abe1f024b558f56c18741e5987044a2f7ee566683a619f49212517032 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_galileo, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Nov 24 09:26:37 compute-1 systemd[1]: var-lib-containers-storage-overlay-7a1bf355fd80ae25b5251dda1c4c8fac9704308a181e82422f4d636960975f58-merged.mount: Deactivated successfully.
Nov 24 09:26:37 compute-1 podman[79665]: 2025-11-24 09:26:37.544896145 +0000 UTC m=+1.800982651 container remove 0202949abe1f024b558f56c18741e5987044a2f7ee566683a619f49212517032 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_galileo, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 24 09:26:37 compute-1 systemd[1]: libpod-conmon-0202949abe1f024b558f56c18741e5987044a2f7ee566683a619f49212517032.scope: Deactivated successfully.
Nov 24 09:26:37 compute-1 sudo[78405]: pam_unix(sudo:session): session closed for user root
Nov 24 09:26:39 compute-1 ceph-osd[77497]: osd.1 0 maybe_override_max_osd_capacity_for_qos osd bench result - bandwidth (MiB/sec): 42.922 iops: 10987.995 elapsed_sec: 0.273
Nov 24 09:26:39 compute-1 ceph-osd[77497]: log_channel(cluster) log [WRN] : OSD bench result of 10987.994700 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.1. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
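The warning above spells out its own remediation: the in-tree bench landed outside the 50-500 IOPS sanity window, so the default of 315 IOPS was kept, and the OSD asks you to measure real capacity out of band and pin it. A minimal sketch, assuming the OSD's data device is /dev/vdb (hypothetical) and that a 4 KiB random-write fio run approximates the bench workload; the _hdd variant is used because this OSD reports a rotational backing device:

    # hypothetical device path; destructive on raw devices, run only against an empty OSD disk
    fio --name=osd-bench --filename=/dev/vdb --rw=randwrite --bs=4k \
        --ioengine=libaio --iodepth=16 --direct=1 --time_based --runtime=60
    # pin the measured value (120 is a stand-in for your fio result);
    # the option name comes straight from the warning text
    ceph config set osd.1 osd_mclock_max_capacity_iops_hdd 120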
Nov 24 09:26:39 compute-1 ceph-osd[77497]: osd.1 0 waiting for initial osdmap
Nov 24 09:26:39 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-osd-1[77493]: 2025-11-24T09:26:39.418+0000 7f34497bf640 -1 osd.1 0 waiting for initial osdmap
Nov 24 09:26:39 compute-1 ceph-osd[77497]: osd.1 7 crush map has features 288514050185494528, adjusting msgr requires for clients
Nov 24 09:26:39 compute-1 ceph-osd[77497]: osd.1 7 crush map has features 288514050185494528 was 288232575208792577, adjusting msgr requires for mons
Nov 24 09:26:39 compute-1 ceph-osd[77497]: osd.1 7 crush map has features 3314932999778484224, adjusting msgr requires for osds
Nov 24 09:26:39 compute-1 ceph-osd[77497]: osd.1 7 check_osdmap_features require_osd_release unknown -> squid
Nov 24 09:26:39 compute-1 ceph-osd[77497]: osd.1 7 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Nov 24 09:26:39 compute-1 ceph-osd[77497]: osd.1 7 set_numa_affinity not setting numa affinity
Nov 24 09:26:39 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-osd-1[77493]: 2025-11-24T09:26:39.437+0000 7f34445d4640 -1 osd.1 7 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
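set_numa_affinity gives up here because the public interface is empty, so the OSD cannot infer a NUMA node; on a single-socket guest like this one that is harmless. Where pinning is wanted anyway, Ceph exposes an explicit override; sketched below with node 0 as an assumed target:

    # pin osd.1 to NUMA node 0 instead of relying on interface detection
    ceph config set osd.1 osd_numa_node 0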
Nov 24 09:26:39 compute-1 ceph-osd[77497]: osd.1 7 _collect_metadata loop3:  no unique device id for loop3: fallback method has no model nor serial no unique device path for loop3: no symlink to loop3 in /dev/disk/by-path
Nov 24 09:26:39 compute-1 ceph-osd[77497]: osd.1 8 state: booting -> active
Nov 24 09:26:41 compute-1 ceph-osd[77497]: osd.1 10 crush map has features 288514051259236352, adjusting msgr requires for clients
Nov 24 09:26:41 compute-1 ceph-osd[77497]: osd.1 10 crush map has features 288514051259236352 was 288514050185503233, adjusting msgr requires for mons
Nov 24 09:26:41 compute-1 ceph-osd[77497]: osd.1 10 crush map has features 3314933000852226048, adjusting msgr requires for osds
Nov 24 09:26:41 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 10 pg[1.0( empty local-lis/les=0/0 n=0 ec=10/10 lis/c=0/0 les/c/f=0/0/0 sis=10) [1] r=0 lpr=10 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:26:42 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 11 pg[1.0( empty local-lis/les=10/11 n=0 ec=10/10 lis/c=0/0 les/c/f=0/0/0 sis=10) [1] r=0 lpr=10 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:27:00 compute-1 sudo[79683]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 09:27:00 compute-1 sudo[79683]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:27:00 compute-1 sudo[79683]: pam_unix(sudo:session): session closed for user root
Nov 24 09:27:01 compute-1 sudo[79708]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 _orch deploy --fsid 84a084c3-61a7-5de7-8207-1f88efa59a64
Nov 24 09:27:01 compute-1 sudo[79708]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
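The sudo line above is cephadm's usual pattern: the orchestrator copies its own binary to /var/lib/ceph/<fsid>/ under a content digest and re-runs it as root with _orch deploy for each daemon. Once the deploy settles, the result can be checked from any host holding the admin keyring; a sketch assuming the standard cephadm CLI:

    # list the daemons the orchestrator believes it has placed
    ceph orch ps --daemon-type mon
    ceph orch ps --daemon-type osd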
Nov 24 09:27:01 compute-1 podman[79776]: 2025-11-24 09:27:01.475929256 +0000 UTC m=+0.039394180 container create 5744dab53c48ab3af9b7949ad0ace9e88723809e6401c7cee90eefa6175b131c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_lalande, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 24 09:27:01 compute-1 systemd[1]: Started libpod-conmon-5744dab53c48ab3af9b7949ad0ace9e88723809e6401c7cee90eefa6175b131c.scope.
Nov 24 09:27:01 compute-1 systemd[1]: Started libcrun container.
Nov 24 09:27:01 compute-1 podman[79776]: 2025-11-24 09:27:01.548264809 +0000 UTC m=+0.111729753 container init 5744dab53c48ab3af9b7949ad0ace9e88723809e6401c7cee90eefa6175b131c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_lalande, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 24 09:27:01 compute-1 podman[79776]: 2025-11-24 09:27:01.459082069 +0000 UTC m=+0.022547013 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 09:27:01 compute-1 podman[79776]: 2025-11-24 09:27:01.55853958 +0000 UTC m=+0.122004514 container start 5744dab53c48ab3af9b7949ad0ace9e88723809e6401c7cee90eefa6175b131c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_lalande, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 24 09:27:01 compute-1 podman[79776]: 2025-11-24 09:27:01.562198985 +0000 UTC m=+0.125663929 container attach 5744dab53c48ab3af9b7949ad0ace9e88723809e6401c7cee90eefa6175b131c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_lalande, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid)
Nov 24 09:27:01 compute-1 silly_lalande[79792]: 167 167
Nov 24 09:27:01 compute-1 systemd[1]: libpod-5744dab53c48ab3af9b7949ad0ace9e88723809e6401c7cee90eefa6175b131c.scope: Deactivated successfully.
Nov 24 09:27:01 compute-1 podman[79776]: 2025-11-24 09:27:01.564945321 +0000 UTC m=+0.128410245 container died 5744dab53c48ab3af9b7949ad0ace9e88723809e6401c7cee90eefa6175b131c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_lalande, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 09:27:01 compute-1 systemd[1]: var-lib-containers-storage-overlay-206c5dbe4a6485aef50f2e9a8ec132438040c32b4ffa6d2b7e571822d20a5a17-merged.mount: Deactivated successfully.
Nov 24 09:27:01 compute-1 podman[79776]: 2025-11-24 09:27:01.602636814 +0000 UTC m=+0.166101748 container remove 5744dab53c48ab3af9b7949ad0ace9e88723809e6401c7cee90eefa6175b131c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_lalande, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 24 09:27:01 compute-1 systemd[1]: libpod-conmon-5744dab53c48ab3af9b7949ad0ace9e88723809e6401c7cee90eefa6175b131c.scope: Deactivated successfully.
Nov 24 09:27:01 compute-1 podman[79809]: 2025-11-24 09:27:01.659580762 +0000 UTC m=+0.035589041 container create 9e3cdf3f1233b36ce52f369bd04b45144388da47e5f0e6bb8df055e1c77f4b29 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_chaum, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True)
Nov 24 09:27:01 compute-1 systemd[1]: Started libpod-conmon-9e3cdf3f1233b36ce52f369bd04b45144388da47e5f0e6bb8df055e1c77f4b29.scope.
Nov 24 09:27:01 compute-1 systemd[1]: Started libcrun container.
Nov 24 09:27:01 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2cc85eae6c9c36102543a7b92197d663c38fa07e6cee76e89f26ee58fb1b318f/merged/tmp/config supports timestamps until 2038 (0x7fffffff)
Nov 24 09:27:01 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2cc85eae6c9c36102543a7b92197d663c38fa07e6cee76e89f26ee58fb1b318f/merged/tmp/keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 09:27:01 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2cc85eae6c9c36102543a7b92197d663c38fa07e6cee76e89f26ee58fb1b318f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 09:27:01 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2cc85eae6c9c36102543a7b92197d663c38fa07e6cee76e89f26ee58fb1b318f/merged/var/lib/ceph/mon/ceph-compute-1 supports timestamps until 2038 (0x7fffffff)
Nov 24 09:27:01 compute-1 podman[79809]: 2025-11-24 09:27:01.722794499 +0000 UTC m=+0.098802808 container init 9e3cdf3f1233b36ce52f369bd04b45144388da47e5f0e6bb8df055e1c77f4b29 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_chaum, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 24 09:27:01 compute-1 podman[79809]: 2025-11-24 09:27:01.729967876 +0000 UTC m=+0.105976165 container start 9e3cdf3f1233b36ce52f369bd04b45144388da47e5f0e6bb8df055e1c77f4b29 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_chaum, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 24 09:27:01 compute-1 podman[79809]: 2025-11-24 09:27:01.733729363 +0000 UTC m=+0.109737652 container attach 9e3cdf3f1233b36ce52f369bd04b45144388da47e5f0e6bb8df055e1c77f4b29 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_chaum, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.build-date=20250325)
Nov 24 09:27:01 compute-1 podman[79809]: 2025-11-24 09:27:01.644146846 +0000 UTC m=+0.020155165 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 09:27:01 compute-1 systemd[1]: libpod-9e3cdf3f1233b36ce52f369bd04b45144388da47e5f0e6bb8df055e1c77f4b29.scope: Deactivated successfully.
Nov 24 09:27:01 compute-1 podman[79809]: 2025-11-24 09:27:01.823332521 +0000 UTC m=+0.199340810 container died 9e3cdf3f1233b36ce52f369bd04b45144388da47e5f0e6bb8df055e1c77f4b29 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_chaum, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 24 09:27:01 compute-1 systemd[1]: var-lib-containers-storage-overlay-2cc85eae6c9c36102543a7b92197d663c38fa07e6cee76e89f26ee58fb1b318f-merged.mount: Deactivated successfully.
Nov 24 09:27:01 compute-1 podman[79809]: 2025-11-24 09:27:01.855617053 +0000 UTC m=+0.231625342 container remove 9e3cdf3f1233b36ce52f369bd04b45144388da47e5f0e6bb8df055e1c77f4b29 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_chaum, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325)
Nov 24 09:27:01 compute-1 systemd[1]: libpod-conmon-9e3cdf3f1233b36ce52f369bd04b45144388da47e5f0e6bb8df055e1c77f4b29.scope: Deactivated successfully.
Nov 24 09:27:01 compute-1 systemd[1]: Reloading.
Nov 24 09:27:01 compute-1 systemd-sysv-generator[79894]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 09:27:01 compute-1 systemd-rc-local-generator[79891]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 09:27:02 compute-1 systemd[1]: Reloading.
Nov 24 09:27:02 compute-1 systemd-rc-local-generator[79930]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 09:27:02 compute-1 systemd-sysv-generator[79934]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 09:27:02 compute-1 systemd[1]: Starting Ceph mon.compute-1 for 84a084c3-61a7-5de7-8207-1f88efa59a64...
Nov 24 09:27:02 compute-1 podman[79989]: 2025-11-24 09:27:02.653197671 +0000 UTC m=+0.040442061 container create 515e62465fc9d8059b784860ec72fb021260af5e219f16728bf6dad72e3c38a3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mon-compute-1, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 09:27:02 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/edee9d0e12a37ecdff4e90bc62e2300ec56e12e4365cee9a2158103da2588584/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 09:27:02 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/edee9d0e12a37ecdff4e90bc62e2300ec56e12e4365cee9a2158103da2588584/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 09:27:02 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/edee9d0e12a37ecdff4e90bc62e2300ec56e12e4365cee9a2158103da2588584/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 09:27:02 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/edee9d0e12a37ecdff4e90bc62e2300ec56e12e4365cee9a2158103da2588584/merged/var/lib/ceph/mon/ceph-compute-1 supports timestamps until 2038 (0x7fffffff)
Nov 24 09:27:02 compute-1 podman[79989]: 2025-11-24 09:27:02.703677816 +0000 UTC m=+0.090922206 container init 515e62465fc9d8059b784860ec72fb021260af5e219f16728bf6dad72e3c38a3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mon-compute-1, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 24 09:27:02 compute-1 podman[79989]: 2025-11-24 09:27:02.713823005 +0000 UTC m=+0.101067385 container start 515e62465fc9d8059b784860ec72fb021260af5e219f16728bf6dad72e3c38a3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mon-compute-1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=squid, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 09:27:02 compute-1 bash[79989]: 515e62465fc9d8059b784860ec72fb021260af5e219f16728bf6dad72e3c38a3
Nov 24 09:27:02 compute-1 podman[79989]: 2025-11-24 09:27:02.633552128 +0000 UTC m=+0.020796548 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 09:27:02 compute-1 systemd[1]: Started Ceph mon.compute-1 for 84a084c3-61a7-5de7-8207-1f88efa59a64.
Nov 24 09:27:02 compute-1 ceph-mon[80009]: set uid:gid to 167:167 (ceph:ceph)
Nov 24 09:27:02 compute-1 ceph-mon[80009]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mon, pid 2
Nov 24 09:27:02 compute-1 ceph-mon[80009]: pidfile_write: ignore empty --pid-file
Nov 24 09:27:02 compute-1 ceph-mon[80009]: load: jerasure load: lrc 
Nov 24 09:27:02 compute-1 sudo[79708]: pam_unix(sudo:session): session closed for user root
Nov 24 09:27:02 compute-1 ceph-mon[80009]: rocksdb: RocksDB version: 7.9.2
Nov 24 09:27:02 compute-1 ceph-mon[80009]: rocksdb: Git sha 0
Nov 24 09:27:02 compute-1 ceph-mon[80009]: rocksdb: Compile date 2025-07-17 03:12:14
Nov 24 09:27:02 compute-1 ceph-mon[80009]: rocksdb: DB SUMMARY
Nov 24 09:27:02 compute-1 ceph-mon[80009]: rocksdb: DB Session ID:  IKBI0BILOO7CZC90TSBP
Nov 24 09:27:02 compute-1 ceph-mon[80009]: rocksdb: CURRENT file:  CURRENT
Nov 24 09:27:02 compute-1 ceph-mon[80009]: rocksdb: IDENTITY file:  IDENTITY
Nov 24 09:27:02 compute-1 ceph-mon[80009]: rocksdb: MANIFEST file:  MANIFEST-000005 size: 59 Bytes
Nov 24 09:27:02 compute-1 ceph-mon[80009]: rocksdb: SST files in /var/lib/ceph/mon/ceph-compute-1/store.db dir, Total Num: 0, files: 
Nov 24 09:27:02 compute-1 ceph-mon[80009]: rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-compute-1/store.db: 000004.log size: 511 ; 
Nov 24 09:27:02 compute-1 ceph-mon[80009]: rocksdb:                         Options.error_if_exists: 0
Nov 24 09:27:02 compute-1 ceph-mon[80009]: rocksdb:                       Options.create_if_missing: 0
Nov 24 09:27:02 compute-1 ceph-mon[80009]: rocksdb:                         Options.paranoid_checks: 1
Nov 24 09:27:02 compute-1 ceph-mon[80009]: rocksdb:             Options.flush_verify_memtable_count: 1
Nov 24 09:27:02 compute-1 ceph-mon[80009]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Nov 24 09:27:02 compute-1 ceph-mon[80009]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Nov 24 09:27:02 compute-1 ceph-mon[80009]: rocksdb:                                     Options.env: 0x55a5fdc67c20
Nov 24 09:27:02 compute-1 ceph-mon[80009]: rocksdb:                                      Options.fs: PosixFileSystem
Nov 24 09:27:02 compute-1 ceph-mon[80009]: rocksdb:                                Options.info_log: 0x55a5fe7d1a20
Nov 24 09:27:02 compute-1 ceph-mon[80009]: rocksdb:                Options.max_file_opening_threads: 16
Nov 24 09:27:02 compute-1 ceph-mon[80009]: rocksdb:                              Options.statistics: (nil)
Nov 24 09:27:02 compute-1 ceph-mon[80009]: rocksdb:                               Options.use_fsync: 0
Nov 24 09:27:02 compute-1 ceph-mon[80009]: rocksdb:                       Options.max_log_file_size: 0
Nov 24 09:27:02 compute-1 ceph-mon[80009]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Nov 24 09:27:02 compute-1 ceph-mon[80009]: rocksdb:                   Options.log_file_time_to_roll: 0
Nov 24 09:27:02 compute-1 ceph-mon[80009]: rocksdb:                       Options.keep_log_file_num: 1000
Nov 24 09:27:02 compute-1 ceph-mon[80009]: rocksdb:                    Options.recycle_log_file_num: 0
Nov 24 09:27:02 compute-1 ceph-mon[80009]: rocksdb:                         Options.allow_fallocate: 1
Nov 24 09:27:02 compute-1 ceph-mon[80009]: rocksdb:                        Options.allow_mmap_reads: 0
Nov 24 09:27:02 compute-1 ceph-mon[80009]: rocksdb:                       Options.allow_mmap_writes: 0
Nov 24 09:27:02 compute-1 ceph-mon[80009]: rocksdb:                        Options.use_direct_reads: 0
Nov 24 09:27:02 compute-1 ceph-mon[80009]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Nov 24 09:27:02 compute-1 ceph-mon[80009]: rocksdb:          Options.create_missing_column_families: 0
Nov 24 09:27:02 compute-1 ceph-mon[80009]: rocksdb:                              Options.db_log_dir: 
Nov 24 09:27:02 compute-1 ceph-mon[80009]: rocksdb:                                 Options.wal_dir: 
Nov 24 09:27:02 compute-1 ceph-mon[80009]: rocksdb:                Options.table_cache_numshardbits: 6
Nov 24 09:27:02 compute-1 ceph-mon[80009]: rocksdb:                         Options.WAL_ttl_seconds: 0
Nov 24 09:27:02 compute-1 ceph-mon[80009]: rocksdb:                       Options.WAL_size_limit_MB: 0
Nov 24 09:27:02 compute-1 ceph-mon[80009]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Nov 24 09:27:02 compute-1 ceph-mon[80009]: rocksdb:             Options.manifest_preallocation_size: 4194304
Nov 24 09:27:02 compute-1 ceph-mon[80009]: rocksdb:                     Options.is_fd_close_on_exec: 1
Nov 24 09:27:02 compute-1 ceph-mon[80009]: rocksdb:                   Options.advise_random_on_open: 1
Nov 24 09:27:02 compute-1 ceph-mon[80009]: rocksdb:                    Options.db_write_buffer_size: 0
Nov 24 09:27:02 compute-1 ceph-mon[80009]: rocksdb:                    Options.write_buffer_manager: 0x55a5fe7d5900
Nov 24 09:27:02 compute-1 ceph-mon[80009]: rocksdb:         Options.access_hint_on_compaction_start: 1
Nov 24 09:27:02 compute-1 ceph-mon[80009]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Nov 24 09:27:02 compute-1 ceph-mon[80009]: rocksdb:                      Options.use_adaptive_mutex: 0
Nov 24 09:27:02 compute-1 ceph-mon[80009]: rocksdb:                            Options.rate_limiter: (nil)
Nov 24 09:27:02 compute-1 ceph-mon[80009]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Nov 24 09:27:02 compute-1 ceph-mon[80009]: rocksdb:                       Options.wal_recovery_mode: 2
Nov 24 09:27:02 compute-1 ceph-mon[80009]: rocksdb:                  Options.enable_thread_tracking: 0
Nov 24 09:27:02 compute-1 ceph-mon[80009]: rocksdb:                  Options.enable_pipelined_write: 0
Nov 24 09:27:02 compute-1 ceph-mon[80009]: rocksdb:                  Options.unordered_write: 0
Nov 24 09:27:02 compute-1 ceph-mon[80009]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Nov 24 09:27:02 compute-1 ceph-mon[80009]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Nov 24 09:27:02 compute-1 ceph-mon[80009]: rocksdb:             Options.write_thread_max_yield_usec: 100
Nov 24 09:27:02 compute-1 ceph-mon[80009]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Nov 24 09:27:02 compute-1 ceph-mon[80009]: rocksdb:                               Options.row_cache: None
Nov 24 09:27:02 compute-1 ceph-mon[80009]: rocksdb:                              Options.wal_filter: None
Nov 24 09:27:02 compute-1 ceph-mon[80009]: rocksdb:             Options.avoid_flush_during_recovery: 0
Nov 24 09:27:02 compute-1 ceph-mon[80009]: rocksdb:             Options.allow_ingest_behind: 0
Nov 24 09:27:02 compute-1 ceph-mon[80009]: rocksdb:             Options.two_write_queues: 0
Nov 24 09:27:02 compute-1 ceph-mon[80009]: rocksdb:             Options.manual_wal_flush: 0
Nov 24 09:27:02 compute-1 ceph-mon[80009]: rocksdb:             Options.wal_compression: 0
Nov 24 09:27:02 compute-1 ceph-mon[80009]: rocksdb:             Options.atomic_flush: 0
Nov 24 09:27:02 compute-1 ceph-mon[80009]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Nov 24 09:27:02 compute-1 ceph-mon[80009]: rocksdb:                 Options.persist_stats_to_disk: 0
Nov 24 09:27:02 compute-1 ceph-mon[80009]: rocksdb:                 Options.write_dbid_to_manifest: 0
Nov 24 09:27:02 compute-1 ceph-mon[80009]: rocksdb:                 Options.log_readahead_size: 0
Nov 24 09:27:02 compute-1 ceph-mon[80009]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Nov 24 09:27:02 compute-1 ceph-mon[80009]: rocksdb:                 Options.best_efforts_recovery: 0
Nov 24 09:27:02 compute-1 ceph-mon[80009]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Nov 24 09:27:02 compute-1 ceph-mon[80009]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Nov 24 09:27:02 compute-1 ceph-mon[80009]: rocksdb:             Options.allow_data_in_errors: 0
Nov 24 09:27:02 compute-1 ceph-mon[80009]: rocksdb:             Options.db_host_id: __hostname__
Nov 24 09:27:02 compute-1 ceph-mon[80009]: rocksdb:             Options.enforce_single_del_contracts: true
Nov 24 09:27:02 compute-1 ceph-mon[80009]: rocksdb:             Options.max_background_jobs: 2
Nov 24 09:27:02 compute-1 ceph-mon[80009]: rocksdb:             Options.max_background_compactions: -1
Nov 24 09:27:02 compute-1 ceph-mon[80009]: rocksdb:             Options.max_subcompactions: 1
Nov 24 09:27:02 compute-1 ceph-mon[80009]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Nov 24 09:27:02 compute-1 ceph-mon[80009]: rocksdb:           Options.writable_file_max_buffer_size: 1048576
Nov 24 09:27:02 compute-1 ceph-mon[80009]: rocksdb:             Options.delayed_write_rate : 16777216
Nov 24 09:27:02 compute-1 ceph-mon[80009]: rocksdb:             Options.max_total_wal_size: 0
Nov 24 09:27:02 compute-1 ceph-mon[80009]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Nov 24 09:27:02 compute-1 ceph-mon[80009]: rocksdb:                   Options.stats_dump_period_sec: 600
Nov 24 09:27:02 compute-1 ceph-mon[80009]: rocksdb:                 Options.stats_persist_period_sec: 600
Nov 24 09:27:02 compute-1 ceph-mon[80009]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Nov 24 09:27:02 compute-1 ceph-mon[80009]: rocksdb:                          Options.max_open_files: -1
Nov 24 09:27:02 compute-1 ceph-mon[80009]: rocksdb:                          Options.bytes_per_sync: 0
Nov 24 09:27:02 compute-1 ceph-mon[80009]: rocksdb:                      Options.wal_bytes_per_sync: 0
Nov 24 09:27:02 compute-1 ceph-mon[80009]: rocksdb:                   Options.strict_bytes_per_sync: 0
Nov 24 09:27:02 compute-1 ceph-mon[80009]: rocksdb:       Options.compaction_readahead_size: 0
Nov 24 09:27:02 compute-1 ceph-mon[80009]: rocksdb:                  Options.max_background_flushes: -1
Nov 24 09:27:02 compute-1 ceph-mon[80009]: rocksdb: Compression algorithms supported:
Nov 24 09:27:02 compute-1 ceph-mon[80009]: rocksdb:         kZSTD supported: 0
Nov 24 09:27:02 compute-1 ceph-mon[80009]: rocksdb:         kXpressCompression supported: 0
Nov 24 09:27:02 compute-1 ceph-mon[80009]: rocksdb:         kBZip2Compression supported: 0
Nov 24 09:27:02 compute-1 ceph-mon[80009]: rocksdb:         kZSTDNotFinalCompression supported: 0
Nov 24 09:27:02 compute-1 ceph-mon[80009]: rocksdb:         kLZ4Compression supported: 1
Nov 24 09:27:02 compute-1 ceph-mon[80009]: rocksdb:         kZlibCompression supported: 1
Nov 24 09:27:02 compute-1 ceph-mon[80009]: rocksdb:         kLZ4HCCompression supported: 1
Nov 24 09:27:02 compute-1 ceph-mon[80009]: rocksdb:         kSnappyCompression supported: 1
Nov 24 09:27:02 compute-1 ceph-mon[80009]: rocksdb: Fast CRC32 supported: Supported on x86
Nov 24 09:27:02 compute-1 ceph-mon[80009]: rocksdb: DMutex implementation: pthread_mutex_t
Nov 24 09:27:02 compute-1 ceph-mon[80009]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-compute-1/store.db/MANIFEST-000005
Nov 24 09:27:02 compute-1 ceph-mon[80009]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Nov 24 09:27:02 compute-1 ceph-mon[80009]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 24 09:27:02 compute-1 ceph-mon[80009]: rocksdb:           Options.merge_operator: 
Nov 24 09:27:02 compute-1 ceph-mon[80009]: rocksdb:        Options.compaction_filter: None
Nov 24 09:27:02 compute-1 ceph-mon[80009]: rocksdb:        Options.compaction_filter_factory: None
Nov 24 09:27:02 compute-1 ceph-mon[80009]: rocksdb:  Options.sst_partitioner_factory: None
Nov 24 09:27:02 compute-1 ceph-mon[80009]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 24 09:27:02 compute-1 ceph-mon[80009]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 24 09:27:02 compute-1 ceph-mon[80009]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55a5fe7d05c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55a5fe7f5350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 24 09:27:02 compute-1 ceph-mon[80009]: rocksdb:        Options.write_buffer_size: 33554432
Nov 24 09:27:02 compute-1 ceph-mon[80009]: rocksdb:  Options.max_write_buffer_number: 2
Nov 24 09:27:02 compute-1 ceph-mon[80009]: rocksdb:          Options.compression: NoCompression
Nov 24 09:27:02 compute-1 ceph-mon[80009]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 24 09:27:02 compute-1 ceph-mon[80009]: rocksdb:       Options.prefix_extractor: nullptr
Nov 24 09:27:02 compute-1 ceph-mon[80009]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 24 09:27:02 compute-1 ceph-mon[80009]: rocksdb:             Options.num_levels: 7
Nov 24 09:27:02 compute-1 ceph-mon[80009]: rocksdb:        Options.min_write_buffer_number_to_merge: 1
Nov 24 09:27:02 compute-1 ceph-mon[80009]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 24 09:27:02 compute-1 ceph-mon[80009]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 24 09:27:02 compute-1 ceph-mon[80009]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 24 09:27:02 compute-1 ceph-mon[80009]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 24 09:27:02 compute-1 ceph-mon[80009]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 24 09:27:02 compute-1 ceph-mon[80009]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 24 09:27:02 compute-1 ceph-mon[80009]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 24 09:27:02 compute-1 ceph-mon[80009]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 24 09:27:02 compute-1 ceph-mon[80009]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 24 09:27:02 compute-1 ceph-mon[80009]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 24 09:27:02 compute-1 ceph-mon[80009]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 24 09:27:02 compute-1 ceph-mon[80009]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 24 09:27:02 compute-1 ceph-mon[80009]: rocksdb:                  Options.compression_opts.level: 32767
Nov 24 09:27:02 compute-1 ceph-mon[80009]: rocksdb:               Options.compression_opts.strategy: 0
Nov 24 09:27:02 compute-1 ceph-mon[80009]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 24 09:27:02 compute-1 ceph-mon[80009]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 24 09:27:02 compute-1 ceph-mon[80009]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 24 09:27:02 compute-1 ceph-mon[80009]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 24 09:27:02 compute-1 ceph-mon[80009]: rocksdb:                  Options.compression_opts.enabled: false
Nov 24 09:27:02 compute-1 ceph-mon[80009]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 24 09:27:02 compute-1 ceph-mon[80009]: rocksdb:      Options.level0_file_num_compaction_trigger: 4
Nov 24 09:27:02 compute-1 ceph-mon[80009]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 24 09:27:02 compute-1 ceph-mon[80009]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 24 09:27:02 compute-1 ceph-mon[80009]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 24 09:27:02 compute-1 ceph-mon[80009]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 24 09:27:02 compute-1 ceph-mon[80009]: rocksdb:                Options.max_bytes_for_level_base: 268435456
Nov 24 09:27:02 compute-1 ceph-mon[80009]: rocksdb: Options.level_compaction_dynamic_level_bytes: 1
Nov 24 09:27:02 compute-1 ceph-mon[80009]: rocksdb:          Options.max_bytes_for_level_multiplier: 10.000000
Nov 24 09:27:02 compute-1 ceph-mon[80009]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 24 09:27:02 compute-1 ceph-mon[80009]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 24 09:27:02 compute-1 ceph-mon[80009]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 24 09:27:02 compute-1 ceph-mon[80009]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 24 09:27:02 compute-1 ceph-mon[80009]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 24 09:27:02 compute-1 ceph-mon[80009]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 24 09:27:02 compute-1 ceph-mon[80009]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 24 09:27:02 compute-1 ceph-mon[80009]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 24 09:27:02 compute-1 ceph-mon[80009]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 24 09:27:02 compute-1 ceph-mon[80009]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 24 09:27:02 compute-1 ceph-mon[80009]: rocksdb:                        Options.arena_block_size: 1048576
Nov 24 09:27:02 compute-1 ceph-mon[80009]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 24 09:27:02 compute-1 ceph-mon[80009]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 24 09:27:02 compute-1 ceph-mon[80009]: rocksdb:                Options.disable_auto_compactions: 0
Nov 24 09:27:02 compute-1 ceph-mon[80009]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 24 09:27:02 compute-1 ceph-mon[80009]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 24 09:27:02 compute-1 ceph-mon[80009]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 24 09:27:02 compute-1 ceph-mon[80009]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 24 09:27:02 compute-1 ceph-mon[80009]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 24 09:27:02 compute-1 ceph-mon[80009]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 24 09:27:02 compute-1 ceph-mon[80009]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 24 09:27:02 compute-1 ceph-mon[80009]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 24 09:27:02 compute-1 ceph-mon[80009]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 24 09:27:02 compute-1 ceph-mon[80009]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 24 09:27:02 compute-1 ceph-mon[80009]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 24 09:27:02 compute-1 ceph-mon[80009]: rocksdb:                   Options.inplace_update_support: 0
Nov 24 09:27:02 compute-1 ceph-mon[80009]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 24 09:27:02 compute-1 ceph-mon[80009]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 24 09:27:02 compute-1 ceph-mon[80009]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 24 09:27:02 compute-1 ceph-mon[80009]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 24 09:27:02 compute-1 ceph-mon[80009]: rocksdb:                           Options.bloom_locality: 0
Nov 24 09:27:02 compute-1 ceph-mon[80009]: rocksdb:                    Options.max_successive_merges: 0
Nov 24 09:27:02 compute-1 ceph-mon[80009]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 24 09:27:02 compute-1 ceph-mon[80009]: rocksdb:                Options.paranoid_file_checks: 0
Nov 24 09:27:02 compute-1 ceph-mon[80009]: rocksdb:                Options.force_consistency_checks: 1
Nov 24 09:27:02 compute-1 ceph-mon[80009]: rocksdb:                Options.report_bg_io_stats: 0
Nov 24 09:27:02 compute-1 ceph-mon[80009]: rocksdb:                               Options.ttl: 2592000
Nov 24 09:27:02 compute-1 ceph-mon[80009]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 24 09:27:02 compute-1 ceph-mon[80009]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 24 09:27:02 compute-1 ceph-mon[80009]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 24 09:27:02 compute-1 ceph-mon[80009]: rocksdb:                       Options.enable_blob_files: false
Nov 24 09:27:02 compute-1 ceph-mon[80009]: rocksdb:                           Options.min_blob_size: 0
Nov 24 09:27:02 compute-1 ceph-mon[80009]: rocksdb:                          Options.blob_file_size: 268435456
Nov 24 09:27:02 compute-1 ceph-mon[80009]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 24 09:27:02 compute-1 ceph-mon[80009]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 24 09:27:02 compute-1 ceph-mon[80009]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 24 09:27:02 compute-1 ceph-mon[80009]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 24 09:27:02 compute-1 ceph-mon[80009]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 24 09:27:02 compute-1 ceph-mon[80009]: rocksdb:                Options.blob_file_starting_level: 0
Nov 24 09:27:02 compute-1 ceph-mon[80009]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 24 09:27:02 compute-1 ceph-mon[80009]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-compute-1/store.db/MANIFEST-000005 succeeded,manifest_file_number is 5, next_file_number is 7, last_sequence is 0, log_number is 0,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 0
Nov 24 09:27:02 compute-1 ceph-mon[80009]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 0
Nov 24 09:27:02 compute-1 ceph-mon[80009]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 299c38d0-06ca-4074-b462-97cee3c14bc3
Nov 24 09:27:02 compute-1 ceph-mon[80009]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763976422763515, "job": 1, "event": "recovery_started", "wal_files": [4]}
Nov 24 09:27:02 compute-1 ceph-mon[80009]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #4 mode 2
Nov 24 09:27:02 compute-1 ceph-mon[80009]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763976422765338, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 8, "file_size": 1648, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 1, "largest_seqno": 5, "table_properties": {"data_size": 523, "index_size": 31, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 115, "raw_average_key_size": 23, "raw_value_size": 401, "raw_average_value_size": 80, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763976422, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "299c38d0-06ca-4074-b462-97cee3c14bc3", "db_session_id": "IKBI0BILOO7CZC90TSBP", "orig_file_number": 8, "seqno_to_time_mapping": "N/A"}}
Nov 24 09:27:02 compute-1 ceph-mon[80009]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763976422765506, "job": 1, "event": "recovery_finished"}
Nov 24 09:27:02 compute-1 ceph-mon[80009]: rocksdb: [db/version_set.cc:5047] Creating manifest 10
Nov 24 09:27:02 compute-1 ceph-mon[80009]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000004.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 09:27:02 compute-1 ceph-mon[80009]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x55a5fe7f6e00
Nov 24 09:27:02 compute-1 ceph-mon[80009]: rocksdb: DB pointer 0x55a5fe900000
Nov 24 09:27:02 compute-1 ceph-mon[80009]: mon.compute-1 does not exist in monmap, will attempt to join an existing cluster
Nov 24 09:27:02 compute-1 ceph-mon[80009]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 24 09:27:02 compute-1 ceph-mon[80009]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
                                           Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                           Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.61 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.9      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      1/0    1.61 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.9      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.9      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.9      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.13 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.13 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a5fe7f5350#2 capacity: 512.00 MB usage: 0.22 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 4.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.11 KB,2.08616e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Nov 24 09:27:02 compute-1 ceph-mon[80009]: using public_addr v2:192.168.122.101:0/0 -> [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0]
Nov 24 09:27:02 compute-1 ceph-mon[80009]: starting mon.compute-1 rank -1 at public addrs [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] at bind addrs [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] mon_data /var/lib/ceph/mon/ceph-compute-1 fsid 84a084c3-61a7-5de7-8207-1f88efa59a64
Nov 24 09:27:02 compute-1 ceph-mon[80009]: mon.compute-1@-1(???) e0 preinit fsid 84a084c3-61a7-5de7-8207-1f88efa59a64
Nov 24 09:27:02 compute-1 ceph-mon[80009]: mon.compute-1@-1(synchronizing).mds e1 new map
Nov 24 09:27:02 compute-1 ceph-mon[80009]: mon.compute-1@-1(synchronizing).mds e1 print_map
                                           e1
                                           btime 2025-11-24T09:25:05.540478+0000
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           legacy client fscid: -1
                                            
                                           No filesystems configured
Nov 24 09:27:02 compute-1 ceph-mon[80009]: mon.compute-1@-1(synchronizing).osd e0 _set_cache_ratios kv ratio 0.25 inc ratio 0.375 full ratio 0.375
Nov 24 09:27:02 compute-1 ceph-mon[80009]: mon.compute-1@-1(synchronizing).osd e0 register_cache_with_pcm pcm target: 2147483648 pcm max: 1020054732 pcm min: 134217728 inc_osd_cache size: 1
Nov 24 09:27:02 compute-1 ceph-mon[80009]: mon.compute-1@-1(synchronizing).osd e1 e1: 0 total, 0 up, 0 in
Nov 24 09:27:02 compute-1 ceph-mon[80009]: mon.compute-1@-1(synchronizing).osd e2 e2: 0 total, 0 up, 0 in
Nov 24 09:27:02 compute-1 ceph-mon[80009]: mon.compute-1@-1(synchronizing).osd e3 e3: 0 total, 0 up, 0 in
Nov 24 09:27:02 compute-1 ceph-mon[80009]: mon.compute-1@-1(synchronizing).osd e4 e4: 1 total, 0 up, 1 in
Nov 24 09:27:02 compute-1 ceph-mon[80009]: mon.compute-1@-1(synchronizing).osd e5 e5: 2 total, 0 up, 2 in
Nov 24 09:27:02 compute-1 ceph-mon[80009]: mon.compute-1@-1(synchronizing).osd e6 e6: 2 total, 0 up, 2 in
Nov 24 09:27:02 compute-1 ceph-mon[80009]: mon.compute-1@-1(synchronizing).osd e7 e7: 2 total, 0 up, 2 in
Nov 24 09:27:02 compute-1 ceph-mon[80009]: mon.compute-1@-1(synchronizing).osd e8 e8: 2 total, 2 up, 2 in
Nov 24 09:27:02 compute-1 ceph-mon[80009]: mon.compute-1@-1(synchronizing).osd e9 e9: 2 total, 2 up, 2 in
Nov 24 09:27:02 compute-1 ceph-mon[80009]: mon.compute-1@-1(synchronizing).osd e10 e10: 2 total, 2 up, 2 in
Nov 24 09:27:02 compute-1 ceph-mon[80009]: mon.compute-1@-1(synchronizing).osd e11 e11: 2 total, 2 up, 2 in
Nov 24 09:27:02 compute-1 ceph-mon[80009]: mon.compute-1@-1(synchronizing).osd e12 e12: 2 total, 2 up, 2 in
Nov 24 09:27:02 compute-1 ceph-mon[80009]: mon.compute-1@-1(synchronizing).osd e12 crush map has features 3314933000852226048, adjusting msgr requires
Nov 24 09:27:02 compute-1 ceph-mon[80009]: mon.compute-1@-1(synchronizing).osd e12 crush map has features 288514051259236352, adjusting msgr requires
Nov 24 09:27:02 compute-1 ceph-mon[80009]: mon.compute-1@-1(synchronizing).osd e12 crush map has features 288514051259236352, adjusting msgr requires
Nov 24 09:27:02 compute-1 ceph-mon[80009]: mon.compute-1@-1(synchronizing).osd e12 crush map has features 288514051259236352, adjusting msgr requires
Nov 24 09:27:02 compute-1 ceph-mon[80009]: Updating compute-1:/var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/config/ceph.client.admin.keyring
Nov 24 09:27:02 compute-1 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:27:02 compute-1 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:27:02 compute-1 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:27:02 compute-1 ceph-mon[80009]: Failed to apply mon spec MONSpec.from_json(yaml.safe_load('''service_type: mon
                                           service_name: mon
                                           placement:
                                             hosts:
                                             - compute-0
                                             - compute-1
                                             - compute-2
                                           ''')): Cannot place <MONSpec for service_name=mon> on compute-2: Unknown hosts
Nov 24 09:27:02 compute-1 ceph-mon[80009]: pgmap v20: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 24 09:27:02 compute-1 ceph-mon[80009]: Failed to apply mgr spec ServiceSpec.from_json(yaml.safe_load('''service_type: mgr
                                           service_name: mgr
                                           placement:
                                             hosts:
                                             - compute-0
                                             - compute-1
                                             - compute-2
                                           ''')): Cannot place <ServiceSpec for service_name=mgr> on compute-2: Unknown hosts
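Both spec failures above have the same cause: the specs themselves are valid, but compute-2 has not yet been registered with the orchestrator, so placement on it is rejected as "Unknown hosts". A minimal operator sketch of the fix, assuming the address 192.168.122.102 that this same log later reports for mon.compute-2 in the monmap:

    # register the missing host with cephadm (hostname and address from this log)
    ceph orch host add compute-2 192.168.122.102

Once the host is known, cephadm re-applies the stored mon/mgr specs on its own, which is consistent with the "Deploying daemon mon.compute-2 on compute-2" and CEPHADM_APPLY_SPEC_FAIL clearance entries further down.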
Nov 24 09:27:02 compute-1 ceph-mon[80009]: pgmap v21: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 24 09:27:02 compute-1 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Nov 24 09:27:02 compute-1 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Nov 24 09:27:02 compute-1 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 09:27:02 compute-1 ceph-mon[80009]: Deploying daemon crash.compute-1 on compute-1
Nov 24 09:27:02 compute-1 ceph-mon[80009]: Health check failed: Failed to apply 2 service(s): mon,mgr (CEPHADM_APPLY_SPEC_FAIL)
Nov 24 09:27:02 compute-1 ceph-mon[80009]: pgmap v22: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 24 09:27:02 compute-1 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:27:02 compute-1 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:27:02 compute-1 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:27:02 compute-1 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:27:02 compute-1 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 09:27:02 compute-1 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 09:27:02 compute-1 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 09:27:02 compute-1 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 09:27:02 compute-1 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 09:27:02 compute-1 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:27:02 compute-1 ceph-mon[80009]: pgmap v23: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 24 09:27:02 compute-1 ceph-mon[80009]: from='client.? 192.168.122.100:0/3344904896' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "4f7ff0c1-3b52-4bb3-bad4-c6fdc271c50c"}]: dispatch
Nov 24 09:27:02 compute-1 ceph-mon[80009]: from='client.? 192.168.122.100:0/3344904896' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "4f7ff0c1-3b52-4bb3-bad4-c6fdc271c50c"}]': finished
Nov 24 09:27:02 compute-1 ceph-mon[80009]: osdmap e4: 1 total, 0 up, 1 in
Nov 24 09:27:02 compute-1 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 24 09:27:02 compute-1 ceph-mon[80009]: from='client.? 192.168.122.101:0/3818245863' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "d66edcc6-663b-43db-9331-33ccbb320884"}]: dispatch
Nov 24 09:27:02 compute-1 ceph-mon[80009]: from='client.? 192.168.122.101:0/3818245863' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "d66edcc6-663b-43db-9331-33ccbb320884"}]': finished
Nov 24 09:27:02 compute-1 ceph-mon[80009]: osdmap e5: 2 total, 0 up, 2 in
Nov 24 09:27:02 compute-1 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 24 09:27:02 compute-1 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 24 09:27:02 compute-1 ceph-mon[80009]: from='client.? 192.168.122.100:0/1822629335' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Nov 24 09:27:02 compute-1 ceph-mon[80009]: from='client.? 192.168.122.101:0/3279587715' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Nov 24 09:27:02 compute-1 ceph-mon[80009]: pgmap v26: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 24 09:27:02 compute-1 ceph-mon[80009]: Health check cleared: TOO_FEW_OSDS (was: OSD count 0 < osd_pool_default_size 1)
Nov 24 09:27:02 compute-1 ceph-mon[80009]: pgmap v27: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 24 09:27:02 compute-1 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
Nov 24 09:27:02 compute-1 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 09:27:02 compute-1 ceph-mon[80009]: Deploying daemon osd.1 on compute-1
Nov 24 09:27:02 compute-1 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
Nov 24 09:27:02 compute-1 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 09:27:02 compute-1 ceph-mon[80009]: Deploying daemon osd.0 on compute-0
Nov 24 09:27:02 compute-1 ceph-mon[80009]: pgmap v28: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 24 09:27:02 compute-1 ceph-mon[80009]: pgmap v29: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 24 09:27:02 compute-1 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:27:02 compute-1 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:27:02 compute-1 ceph-mon[80009]: pgmap v30: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 24 09:27:02 compute-1 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:27:02 compute-1 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:27:02 compute-1 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:27:02 compute-1 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:27:02 compute-1 ceph-mon[80009]: from='client.? 192.168.122.100:0/2864336643' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Nov 24 09:27:02 compute-1 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:27:02 compute-1 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:27:02 compute-1 ceph-mon[80009]: pgmap v31: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 24 09:27:02 compute-1 ceph-mon[80009]: from='osd.1 [v2:192.168.122.101:6800/2493412744,v1:192.168.122.101:6801/2493412744]' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch
Nov 24 09:27:02 compute-1 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:27:02 compute-1 ceph-mon[80009]: from='osd.0 [v2:192.168.122.100:6802/1187333864,v1:192.168.122.100:6803/1187333864]' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch
Nov 24 09:27:02 compute-1 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:27:02 compute-1 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:27:02 compute-1 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:27:02 compute-1 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:27:02 compute-1 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:27:02 compute-1 ceph-mon[80009]: from='osd.1 [v2:192.168.122.101:6800/2493412744,v1:192.168.122.101:6801/2493412744]' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished
Nov 24 09:27:02 compute-1 ceph-mon[80009]: from='osd.0 [v2:192.168.122.100:6802/1187333864,v1:192.168.122.100:6803/1187333864]' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished
Nov 24 09:27:02 compute-1 ceph-mon[80009]: osdmap e6: 2 total, 0 up, 2 in
Nov 24 09:27:02 compute-1 ceph-mon[80009]: from='osd.0 [v2:192.168.122.100:6802/1187333864,v1:192.168.122.100:6803/1187333864]' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]: dispatch
Nov 24 09:27:02 compute-1 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 24 09:27:02 compute-1 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 24 09:27:02 compute-1 ceph-mon[80009]: from='osd.1 [v2:192.168.122.101:6800/2493412744,v1:192.168.122.101:6801/2493412744]' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-1", "root=default"]}]: dispatch
Nov 24 09:27:02 compute-1 ceph-mon[80009]: pgmap v33: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 24 09:27:02 compute-1 ceph-mon[80009]: from='osd.0 [v2:192.168.122.100:6802/1187333864,v1:192.168.122.100:6803/1187333864]' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Nov 24 09:27:02 compute-1 ceph-mon[80009]: from='osd.1 [v2:192.168.122.101:6800/2493412744,v1:192.168.122.101:6801/2493412744]' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-1", "root=default"]}]': finished
Nov 24 09:27:02 compute-1 ceph-mon[80009]: osdmap e7: 2 total, 0 up, 2 in
Nov 24 09:27:02 compute-1 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 24 09:27:02 compute-1 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 24 09:27:02 compute-1 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 24 09:27:02 compute-1 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 24 09:27:02 compute-1 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:27:02 compute-1 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:27:02 compute-1 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:27:02 compute-1 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:27:02 compute-1 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:27:02 compute-1 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"}]: dispatch
Nov 24 09:27:02 compute-1 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 24 09:27:02 compute-1 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 24 09:27:02 compute-1 ceph-mon[80009]: purged_snaps scrub starts
Nov 24 09:27:02 compute-1 ceph-mon[80009]: purged_snaps scrub ok
Nov 24 09:27:02 compute-1 ceph-mon[80009]: purged_snaps scrub starts
Nov 24 09:27:02 compute-1 ceph-mon[80009]: purged_snaps scrub ok
Nov 24 09:27:02 compute-1 ceph-mon[80009]: Adjusting osd_memory_target on compute-0 to 128.0M
Nov 24 09:27:02 compute-1 ceph-mon[80009]: Unable to set osd_memory_target on compute-0 to 134217728: error parsing value: Value '134217728' is below minimum 939524096
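The autotuner computed a target of 134217728 bytes (128 MiB) for compute-0, below the osd_memory_target floor of 939524096 bytes (896 MiB), so the change was rejected and the OSD kept its previous target. A hedged workaround, reusing the osd/host: config mask that this log itself shows in a later "config rm" call, is to pin an explicit value at or above the minimum:

    # pin osd_memory_target for all OSDs on compute-0 to the documented minimum
    ceph config set osd/host:compute-0 osd_memory_target 939524096

The message is otherwise informational; the daemon continues running with its prior memory target.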
Nov 24 09:27:02 compute-1 ceph-mon[80009]: pgmap v35: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 24 09:27:02 compute-1 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 24 09:27:02 compute-1 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 24 09:27:02 compute-1 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:27:02 compute-1 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:27:02 compute-1 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:27:02 compute-1 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:27:02 compute-1 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"}]: dispatch
Nov 24 09:27:02 compute-1 ceph-mon[80009]: Adjusting osd_memory_target on compute-1 to  5248M
Nov 24 09:27:02 compute-1 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:27:02 compute-1 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 24 09:27:02 compute-1 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 24 09:27:02 compute-1 ceph-mon[80009]: OSD bench result of 6266.692144 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.0. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Nov 24 09:27:02 compute-1 ceph-mon[80009]: pgmap v36: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 24 09:27:02 compute-1 ceph-mon[80009]: OSD bench result of 10987.994700 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.1. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
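Both bench results (6266 and 10988 IOPS) fall outside the 50-500 IOPS plausibility window for hdd-class devices, so mclock keeps the 315 IOPS default for each OSD. Following the log's own recommendation, one would measure real capacity with an external tool such as fio and then override per OSD; the values below simply reuse the bench numbers above as placeholders, not measured figures:

    # override the mclock IOPS capacity per OSD (device class here is hdd)
    ceph config set osd.0 osd_mclock_max_capacity_iops_hdd 6266
    ceph config set osd.1 osd_mclock_max_capacity_iops_hdd 10988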
Nov 24 09:27:02 compute-1 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 24 09:27:02 compute-1 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 24 09:27:02 compute-1 ceph-mon[80009]: osd.0 [v2:192.168.122.100:6802/1187333864,v1:192.168.122.100:6803/1187333864] boot
Nov 24 09:27:02 compute-1 ceph-mon[80009]: osd.1 [v2:192.168.122.101:6800/2493412744,v1:192.168.122.101:6801/2493412744] boot
Nov 24 09:27:02 compute-1 ceph-mon[80009]: osdmap e8: 2 total, 2 up, 2 in
Nov 24 09:27:02 compute-1 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 24 09:27:02 compute-1 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 24 09:27:02 compute-1 ceph-mon[80009]: osdmap e9: 2 total, 2 up, 2 in
Nov 24 09:27:02 compute-1 ceph-mon[80009]: pgmap v39: 0 pgs: ; 0 B data, 853 MiB used, 39 GiB / 40 GiB avail
Nov 24 09:27:02 compute-1 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]: dispatch
Nov 24 09:27:02 compute-1 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished
Nov 24 09:27:02 compute-1 ceph-mon[80009]: osdmap e10: 2 total, 2 up, 2 in
Nov 24 09:27:02 compute-1 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]: dispatch
Nov 24 09:27:02 compute-1 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished
Nov 24 09:27:02 compute-1 ceph-mon[80009]: osdmap e11: 2 total, 2 up, 2 in
Nov 24 09:27:02 compute-1 ceph-mon[80009]: from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
Nov 24 09:27:02 compute-1 ceph-mon[80009]: from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
Nov 24 09:27:02 compute-1 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Nov 24 09:27:02 compute-1 ceph-mon[80009]: pgmap v42: 1 pgs: 1 unknown; 0 B data, 853 MiB used, 39 GiB / 40 GiB avail
Nov 24 09:27:02 compute-1 ceph-mon[80009]: osdmap e12: 2 total, 2 up, 2 in
Nov 24 09:27:02 compute-1 ceph-mon[80009]: mgrmap e9: compute-0.mauvni(active, since 78s)
Nov 24 09:27:02 compute-1 ceph-mon[80009]: pgmap v44: 1 pgs: 1 unknown; 0 B data, 853 MiB used, 39 GiB / 40 GiB avail
Nov 24 09:27:02 compute-1 ceph-mon[80009]: pgmap v45: 1 pgs: 1 active+clean; 449 KiB data, 453 MiB used, 40 GiB / 40 GiB avail
Nov 24 09:27:02 compute-1 ceph-mon[80009]: pgmap v46: 1 pgs: 1 active+clean; 449 KiB data, 453 MiB used, 40 GiB / 40 GiB avail
Nov 24 09:27:02 compute-1 ceph-mon[80009]: pgmap v47: 1 pgs: 1 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Nov 24 09:27:02 compute-1 ceph-mon[80009]: pgmap v48: 1 pgs: 1 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Nov 24 09:27:02 compute-1 ceph-mon[80009]: pgmap v49: 1 pgs: 1 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Nov 24 09:27:02 compute-1 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:27:02 compute-1 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:27:02 compute-1 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:27:02 compute-1 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:27:02 compute-1 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Nov 24 09:27:02 compute-1 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 09:27:02 compute-1 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 09:27:02 compute-1 ceph-mon[80009]: Updating compute-2:/etc/ceph/ceph.conf
Nov 24 09:27:02 compute-1 ceph-mon[80009]: Updating compute-2:/var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/config/ceph.conf
Nov 24 09:27:02 compute-1 ceph-mon[80009]: Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Nov 24 09:27:02 compute-1 ceph-mon[80009]: pgmap v50: 1 pgs: 1 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Nov 24 09:27:02 compute-1 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:27:02 compute-1 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:27:02 compute-1 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:27:02 compute-1 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Nov 24 09:27:02 compute-1 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Nov 24 09:27:02 compute-1 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 09:27:02 compute-1 ceph-mon[80009]: Updating compute-2:/var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/config/ceph.client.admin.keyring
Nov 24 09:27:02 compute-1 ceph-mon[80009]: pgmap v51: 1 pgs: 1 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Nov 24 09:27:02 compute-1 ceph-mon[80009]: Deploying daemon mon.compute-2 on compute-2
Nov 24 09:27:02 compute-1 ceph-mon[80009]: Health check cleared: CEPHADM_APPLY_SPEC_FAIL (was: Failed to apply 2 service(s): mon,mgr)
Nov 24 09:27:02 compute-1 ceph-mon[80009]: Cluster is now healthy
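At this point the spec failure has cleared but mon.compute-1 is still synchronizing (it only joins quorum at 09:27:12 below). A standard read-only check of the recovered state from any admin host would be:

    ceph -s              # summary: quorum, mgr, osdmap, pgmap
    ceph health detail   # per-check detail if anything regresses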
Nov 24 09:27:02 compute-1 ceph-mon[80009]: pgmap v52: 1 pgs: 1 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Nov 24 09:27:02 compute-1 ceph-mon[80009]: mon.compute-1@-1(synchronizing).paxosservice(auth 1..7) refresh upgraded, format 0 -> 3
Nov 24 09:27:08 compute-1 ceph-mon[80009]: mon.compute-1@-1(probing) e3  my rank is now 2 (was -1)
Nov 24 09:27:08 compute-1 ceph-mon[80009]: log_channel(cluster) log [INF] : mon.compute-1 calling monitor election
Nov 24 09:27:08 compute-1 ceph-mon[80009]: paxos.2).electionLogic(0) init, first boot, initializing epoch at 1 
Nov 24 09:27:08 compute-1 ceph-mon[80009]: mon.compute-1@2(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 24 09:27:11 compute-1 ceph-mon[80009]: mon.compute-1@2(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 24 09:27:11 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={4=support erasure code pools,5=new-style osdmap encoding,6=support isa/lrc erasure code,7=support shec erasure code}
Nov 24 09:27:11 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={8=support monmap features,9=luminous ondisk layout,10=mimic ondisk layout,11=nautilus ondisk layout,12=octopus ondisk layout,13=pacific ondisk layout,14=quincy ondisk layout,15=reef ondisk layout,16=squid ondisk layout}
Nov 24 09:27:11 compute-1 ceph-mon[80009]: Deploying daemon mon.compute-1 on compute-1
Nov 24 09:27:11 compute-1 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Nov 24 09:27:11 compute-1 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Nov 24 09:27:11 compute-1 ceph-mon[80009]: mon.compute-0 calling monitor election
Nov 24 09:27:11 compute-1 ceph-mon[80009]: pgmap v53: 1 pgs: 1 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Nov 24 09:27:11 compute-1 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Nov 24 09:27:11 compute-1 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Nov 24 09:27:11 compute-1 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Nov 24 09:27:11 compute-1 ceph-mon[80009]: mon.compute-2 calling monitor election
Nov 24 09:27:11 compute-1 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Nov 24 09:27:11 compute-1 ceph-mon[80009]: pgmap v54: 1 pgs: 1 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Nov 24 09:27:11 compute-1 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Nov 24 09:27:11 compute-1 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Nov 24 09:27:11 compute-1 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Nov 24 09:27:11 compute-1 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Nov 24 09:27:11 compute-1 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Nov 24 09:27:11 compute-1 ceph-mon[80009]: mon.compute-0 is new leader, mons compute-0,compute-2 in quorum (ranks 0,1)
Nov 24 09:27:11 compute-1 ceph-mon[80009]: monmap epoch 2
Nov 24 09:27:11 compute-1 ceph-mon[80009]: fsid 84a084c3-61a7-5de7-8207-1f88efa59a64
Nov 24 09:27:11 compute-1 ceph-mon[80009]: last_changed 2025-11-24T09:27:00.955946+0000
Nov 24 09:27:11 compute-1 ceph-mon[80009]: created 2025-11-24T09:25:03.414609+0000
Nov 24 09:27:11 compute-1 ceph-mon[80009]: min_mon_release 19 (squid)
Nov 24 09:27:11 compute-1 ceph-mon[80009]: election_strategy: 1
Nov 24 09:27:11 compute-1 ceph-mon[80009]: 0: [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon.compute-0
Nov 24 09:27:11 compute-1 ceph-mon[80009]: 1: [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0] mon.compute-2
Nov 24 09:27:11 compute-1 ceph-mon[80009]: fsmap 
Nov 24 09:27:11 compute-1 ceph-mon[80009]: osdmap e12: 2 total, 2 up, 2 in
Nov 24 09:27:11 compute-1 ceph-mon[80009]: mgrmap e9: compute-0.mauvni(active, since 100s)
Nov 24 09:27:11 compute-1 ceph-mon[80009]: overall HEALTH_OK
Nov 24 09:27:11 compute-1 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:27:11 compute-1 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:27:11 compute-1 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:27:11 compute-1 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:27:11 compute-1 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-2.rzcnzg", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Nov 24 09:27:11 compute-1 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-2.rzcnzg", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Nov 24 09:27:11 compute-1 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 24 09:27:11 compute-1 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 09:27:11 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 24 09:27:11 compute-1 ceph-mon[80009]: mgrc update_daemon_metadata mon.compute-1 metadata {addrs=[v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0],arch=x86_64,ceph_release=squid,ceph_version=ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable),ceph_version_short=19.2.3,compression_algorithms=none, snappy, zlib, zstd, lz4,container_hostname=compute-1,container_image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec,cpu=AMD EPYC-Rome Processor,device_ids=,device_paths=vda=/dev/disk/by-path/pci-0000:00:04.0,devices=vda,distro=centos,distro_description=CentOS Stream 9,distro_version=9,hostname=compute-1,kernel_description=#1 SMP PREEMPT_DYNAMIC Sat Nov 15 10:30:41 UTC 2025,kernel_version=5.14.0-639.el9.x86_64,mem_swap_kb=1048572,mem_total_kb=7864320,os=Linux}
Nov 24 09:27:12 compute-1 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Nov 24 09:27:12 compute-1 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Nov 24 09:27:12 compute-1 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Nov 24 09:27:12 compute-1 ceph-mon[80009]: mon.compute-0 calling monitor election
Nov 24 09:27:12 compute-1 ceph-mon[80009]: mon.compute-2 calling monitor election
Nov 24 09:27:12 compute-1 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Nov 24 09:27:12 compute-1 ceph-mon[80009]: pgmap v56: 1 pgs: 1 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Nov 24 09:27:12 compute-1 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Nov 24 09:27:12 compute-1 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Nov 24 09:27:12 compute-1 ceph-mon[80009]: pgmap v57: 1 pgs: 1 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Nov 24 09:27:12 compute-1 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Nov 24 09:27:12 compute-1 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Nov 24 09:27:12 compute-1 ceph-mon[80009]: mon.compute-0 is new leader, mons compute-0,compute-2,compute-1 in quorum (ranks 0,1,2)
Nov 24 09:27:12 compute-1 ceph-mon[80009]: monmap epoch 3
Nov 24 09:27:12 compute-1 ceph-mon[80009]: fsid 84a084c3-61a7-5de7-8207-1f88efa59a64
Nov 24 09:27:12 compute-1 ceph-mon[80009]: last_changed 2025-11-24T09:27:06.832853+0000
Nov 24 09:27:12 compute-1 ceph-mon[80009]: created 2025-11-24T09:25:03.414609+0000
Nov 24 09:27:12 compute-1 ceph-mon[80009]: min_mon_release 19 (squid)
Nov 24 09:27:12 compute-1 ceph-mon[80009]: election_strategy: 1
Nov 24 09:27:12 compute-1 ceph-mon[80009]: 0: [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon.compute-0
Nov 24 09:27:12 compute-1 ceph-mon[80009]: 1: [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0] mon.compute-2
Nov 24 09:27:12 compute-1 ceph-mon[80009]: 2: [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] mon.compute-1
Nov 24 09:27:12 compute-1 ceph-mon[80009]: fsmap 
Nov 24 09:27:12 compute-1 ceph-mon[80009]: osdmap e12: 2 total, 2 up, 2 in
Nov 24 09:27:12 compute-1 ceph-mon[80009]: mgrmap e9: compute-0.mauvni(active, since 106s)
Nov 24 09:27:12 compute-1 ceph-mon[80009]: overall HEALTH_OK
Nov 24 09:27:12 compute-1 sudo[80048]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 09:27:12 compute-1 sudo[80048]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:27:12 compute-1 sudo[80048]: pam_unix(sudo:session): session closed for user root
Nov 24 09:27:12 compute-1 sudo[80073]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 _orch deploy --fsid 84a084c3-61a7-5de7-8207-1f88efa59a64
Nov 24 09:27:12 compute-1 sudo[80073]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:27:12 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_auth_request failed to assign global_id
Nov 24 09:27:12 compute-1 podman[80139]: 2025-11-24 09:27:12.528376104 +0000 UTC m=+0.038313897 container create 010ed44705477b83144efd2e84cc3f71cd000fb5f5367095e49eee536da639aa (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_kowalevski, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 09:27:12 compute-1 systemd[1]: Started libpod-conmon-010ed44705477b83144efd2e84cc3f71cd000fb5f5367095e49eee536da639aa.scope.
Nov 24 09:27:12 compute-1 systemd[1]: Started libcrun container.
Nov 24 09:27:12 compute-1 podman[80139]: 2025-11-24 09:27:12.512006609 +0000 UTC m=+0.021944432 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 09:27:12 compute-1 podman[80139]: 2025-11-24 09:27:12.612658613 +0000 UTC m=+0.122596426 container init 010ed44705477b83144efd2e84cc3f71cd000fb5f5367095e49eee536da639aa (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_kowalevski, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 09:27:12 compute-1 podman[80139]: 2025-11-24 09:27:12.620453612 +0000 UTC m=+0.130391405 container start 010ed44705477b83144efd2e84cc3f71cd000fb5f5367095e49eee536da639aa (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_kowalevski, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=squid, org.label-schema.schema-version=1.0)
Nov 24 09:27:12 compute-1 podman[80139]: 2025-11-24 09:27:12.62423625 +0000 UTC m=+0.134174063 container attach 010ed44705477b83144efd2e84cc3f71cd000fb5f5367095e49eee536da639aa (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_kowalevski, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid)
Nov 24 09:27:12 compute-1 jolly_kowalevski[80155]: 167 167
Nov 24 09:27:12 compute-1 systemd[1]: libpod-010ed44705477b83144efd2e84cc3f71cd000fb5f5367095e49eee536da639aa.scope: Deactivated successfully.
Nov 24 09:27:12 compute-1 conmon[80155]: conmon 010ed44705477b83144e <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-010ed44705477b83144efd2e84cc3f71cd000fb5f5367095e49eee536da639aa.scope/container/memory.events
Nov 24 09:27:12 compute-1 podman[80139]: 2025-11-24 09:27:12.62812406 +0000 UTC m=+0.138061853 container died 010ed44705477b83144efd2e84cc3f71cd000fb5f5367095e49eee536da639aa (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_kowalevski, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1)
Nov 24 09:27:12 compute-1 systemd[1]: var-lib-containers-storage-overlay-8cc21078caa96df806b77e00d44a024a512f2dd237cbc6376b65da37bc05b102-merged.mount: Deactivated successfully.
Nov 24 09:27:12 compute-1 podman[80139]: 2025-11-24 09:27:12.660068425 +0000 UTC m=+0.170006218 container remove 010ed44705477b83144efd2e84cc3f71cd000fb5f5367095e49eee536da639aa (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_kowalevski, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 09:27:12 compute-1 systemd[1]: libpod-conmon-010ed44705477b83144efd2e84cc3f71cd000fb5f5367095e49eee536da639aa.scope: Deactivated successfully.
Nov 24 09:27:12 compute-1 systemd[1]: Reloading.
Nov 24 09:27:12 compute-1 systemd-rc-local-generator[80199]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 09:27:12 compute-1 systemd-sysv-generator[80202]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 09:27:12 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e12 _set_new_cache_sizes cache_size:1019937309 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 09:27:12 compute-1 systemd[1]: Reloading.
Nov 24 09:27:13 compute-1 ceph-mon[80009]: mon.compute-1 calling monitor election
Nov 24 09:27:13 compute-1 ceph-mon[80009]: pgmap v58: 1 pgs: 1 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Nov 24 09:27:13 compute-1 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:27:13 compute-1 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:27:13 compute-1 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:27:13 compute-1 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:27:13 compute-1 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-1.qelqsg", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Nov 24 09:27:13 compute-1 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-1.qelqsg", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Nov 24 09:27:13 compute-1 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 24 09:27:13 compute-1 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 09:27:13 compute-1 ceph-mon[80009]: Deploying daemon mgr.compute-1.qelqsg on compute-1
Nov 24 09:27:13 compute-1 ceph-mon[80009]: from='client.? 192.168.122.100:0/1623978198' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 24 09:27:13 compute-1 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Nov 24 09:27:13 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e13 e13: 2 total, 2 up, 2 in
Nov 24 09:27:13 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 13 pg[2.0( empty local-lis/les=0/0 n=0 ec=13/13 lis/c=0/0 les/c/f=0/0/0 sis=13) [1] r=0 lpr=13 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:27:13 compute-1 systemd-sysv-generator[80246]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 09:27:13 compute-1 systemd-rc-local-generator[80242]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 09:27:13 compute-1 systemd[1]: Starting Ceph mgr.compute-1.qelqsg for 84a084c3-61a7-5de7-8207-1f88efa59a64...
Nov 24 09:27:13 compute-1 podman[80296]: 2025-11-24 09:27:13.451796484 +0000 UTC m=+0.036666603 container create 060bf9a9568d5a1517ba559812a28ddc141c219d3df64d1d6697255909caeb65 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-1-qelqsg, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 09:27:13 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1fb76d903953ac3e50f42bf090ea3925d2c405d2f3059059f14bd330c2eb4597/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 09:27:13 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1fb76d903953ac3e50f42bf090ea3925d2c405d2f3059059f14bd330c2eb4597/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 09:27:13 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1fb76d903953ac3e50f42bf090ea3925d2c405d2f3059059f14bd330c2eb4597/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 09:27:13 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1fb76d903953ac3e50f42bf090ea3925d2c405d2f3059059f14bd330c2eb4597/merged/var/lib/ceph/mgr/ceph-compute-1.qelqsg supports timestamps until 2038 (0x7fffffff)
Nov 24 09:27:13 compute-1 podman[80296]: 2025-11-24 09:27:13.508463335 +0000 UTC m=+0.093333484 container init 060bf9a9568d5a1517ba559812a28ddc141c219d3df64d1d6697255909caeb65 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-1-qelqsg, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 24 09:27:13 compute-1 podman[80296]: 2025-11-24 09:27:13.516694004 +0000 UTC m=+0.101564123 container start 060bf9a9568d5a1517ba559812a28ddc141c219d3df64d1d6697255909caeb65 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-1-qelqsg, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 09:27:13 compute-1 bash[80296]: 060bf9a9568d5a1517ba559812a28ddc141c219d3df64d1d6697255909caeb65
Nov 24 09:27:13 compute-1 podman[80296]: 2025-11-24 09:27:13.436061521 +0000 UTC m=+0.020931640 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 09:27:13 compute-1 systemd[1]: Started Ceph mgr.compute-1.qelqsg for 84a084c3-61a7-5de7-8207-1f88efa59a64.
Nov 24 09:27:13 compute-1 ceph-mgr[80316]: set uid:gid to 167:167 (ceph:ceph)
Nov 24 09:27:13 compute-1 ceph-mgr[80316]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mgr, pid 2
Nov 24 09:27:13 compute-1 ceph-mgr[80316]: pidfile_write: ignore empty --pid-file
Nov 24 09:27:13 compute-1 sudo[80073]: pam_unix(sudo:session): session closed for user root
Nov 24 09:27:13 compute-1 ceph-mgr[80316]: mgr[py] Loading python module 'alerts'
Nov 24 09:27:13 compute-1 ceph-mgr[80316]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Nov 24 09:27:13 compute-1 ceph-mgr[80316]: mgr[py] Loading python module 'balancer'
Nov 24 09:27:13 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-1-qelqsg[80312]: 2025-11-24T09:27:13.676+0000 7f74b3087140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Nov 24 09:27:13 compute-1 ceph-mgr[80316]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Nov 24 09:27:13 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-1-qelqsg[80312]: 2025-11-24T09:27:13.756+0000 7f74b3087140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Nov 24 09:27:13 compute-1 ceph-mgr[80316]: mgr[py] Loading python module 'cephadm'
Nov 24 09:27:14 compute-1 ceph-mon[80009]: from='client.? 192.168.122.100:0/1623978198' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 24 09:27:14 compute-1 ceph-mon[80009]: osdmap e13: 2 total, 2 up, 2 in
Nov 24 09:27:14 compute-1 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:27:14 compute-1 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:27:14 compute-1 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:27:14 compute-1 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:27:14 compute-1 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-2", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Nov 24 09:27:14 compute-1 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-2", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Nov 24 09:27:14 compute-1 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 09:27:14 compute-1 ceph-mon[80009]: from='client.? 192.168.122.100:0/2842040450' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 24 09:27:14 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e14 e14: 2 total, 2 up, 2 in
Nov 24 09:27:14 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 14 pg[2.0( empty local-lis/les=13/14 n=0 ec=13/13 lis/c=0/0 les/c/f=0/0/0 sis=13) [1] r=0 lpr=13 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:27:14 compute-1 ceph-mgr[80316]: mgr[py] Loading python module 'crash'
Nov 24 09:27:14 compute-1 ceph-mgr[80316]: mgr[py] Module crash has missing NOTIFY_TYPES member
Nov 24 09:27:14 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-1-qelqsg[80312]: 2025-11-24T09:27:14.567+0000 7f74b3087140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Nov 24 09:27:14 compute-1 ceph-mgr[80316]: mgr[py] Loading python module 'dashboard'
Nov 24 09:27:15 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e15 e15: 2 total, 2 up, 2 in
Nov 24 09:27:15 compute-1 ceph-mon[80009]: Deploying daemon crash.compute-2 on compute-2
Nov 24 09:27:15 compute-1 ceph-mon[80009]: pgmap v60: 2 pgs: 1 unknown, 1 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Nov 24 09:27:15 compute-1 ceph-mon[80009]: from='client.? 192.168.122.100:0/2842040450' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 24 09:27:15 compute-1 ceph-mon[80009]: osdmap e14: 2 total, 2 up, 2 in
Nov 24 09:27:15 compute-1 ceph-mon[80009]: Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Nov 24 09:27:15 compute-1 ceph-mon[80009]: from='client.? 192.168.122.100:0/1077027605' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 24 09:27:15 compute-1 ceph-mgr[80316]: mgr[py] Loading python module 'devicehealth'
Nov 24 09:27:15 compute-1 ceph-mgr[80316]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Nov 24 09:27:15 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-1-qelqsg[80312]: 2025-11-24T09:27:15.223+0000 7f74b3087140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Nov 24 09:27:15 compute-1 ceph-mgr[80316]: mgr[py] Loading python module 'diskprediction_local'
Nov 24 09:27:15 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-1-qelqsg[80312]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Nov 24 09:27:15 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-1-qelqsg[80312]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Nov 24 09:27:15 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-1-qelqsg[80312]:   from numpy import show_config as show_numpy_config
Nov 24 09:27:15 compute-1 ceph-mgr[80316]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Nov 24 09:27:15 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-1-qelqsg[80312]: 2025-11-24T09:27:15.405+0000 7f74b3087140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Nov 24 09:27:15 compute-1 ceph-mgr[80316]: mgr[py] Loading python module 'influx'
Nov 24 09:27:15 compute-1 ceph-mgr[80316]: mgr[py] Module influx has missing NOTIFY_TYPES member
Nov 24 09:27:15 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-1-qelqsg[80312]: 2025-11-24T09:27:15.480+0000 7f74b3087140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Nov 24 09:27:15 compute-1 ceph-mgr[80316]: mgr[py] Loading python module 'insights'
Nov 24 09:27:15 compute-1 ceph-mgr[80316]: mgr[py] Loading python module 'iostat'
Nov 24 09:27:15 compute-1 ceph-mgr[80316]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Nov 24 09:27:15 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-1-qelqsg[80312]: 2025-11-24T09:27:15.640+0000 7f74b3087140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Nov 24 09:27:15 compute-1 ceph-mgr[80316]: mgr[py] Loading python module 'k8sevents'
Nov 24 09:27:16 compute-1 ceph-mgr[80316]: mgr[py] Loading python module 'localpool'
Nov 24 09:27:16 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e16 e16: 2 total, 2 up, 2 in
Nov 24 09:27:16 compute-1 ceph-mon[80009]: from='client.? 192.168.122.100:0/1077027605' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 24 09:27:16 compute-1 ceph-mon[80009]: osdmap e15: 2 total, 2 up, 2 in
Nov 24 09:27:16 compute-1 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:27:16 compute-1 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:27:16 compute-1 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:27:16 compute-1 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:27:16 compute-1 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 09:27:16 compute-1 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 09:27:16 compute-1 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 09:27:16 compute-1 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 09:27:16 compute-1 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 09:27:16 compute-1 ceph-mon[80009]: from='client.? 192.168.122.100:0/2174323893' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 24 09:27:16 compute-1 ceph-mon[80009]: from='client.? 192.168.122.100:0/2174323893' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 24 09:27:16 compute-1 ceph-mon[80009]: osdmap e16: 2 total, 2 up, 2 in
Nov 24 09:27:16 compute-1 ceph-mgr[80316]: mgr[py] Loading python module 'mds_autoscaler'
Nov 24 09:27:16 compute-1 ceph-mgr[80316]: mgr[py] Loading python module 'mirroring'
Nov 24 09:27:16 compute-1 ceph-mgr[80316]: mgr[py] Loading python module 'nfs'
Nov 24 09:27:16 compute-1 ceph-mgr[80316]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Nov 24 09:27:16 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-1-qelqsg[80312]: 2025-11-24T09:27:16.713+0000 7f74b3087140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Nov 24 09:27:16 compute-1 ceph-mgr[80316]: mgr[py] Loading python module 'orchestrator'
Nov 24 09:27:16 compute-1 ceph-mgr[80316]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Nov 24 09:27:16 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-1-qelqsg[80312]: 2025-11-24T09:27:16.955+0000 7f74b3087140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Nov 24 09:27:16 compute-1 ceph-mgr[80316]: mgr[py] Loading python module 'osd_perf_query'
Nov 24 09:27:17 compute-1 ceph-mgr[80316]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Nov 24 09:27:17 compute-1 ceph-mgr[80316]: mgr[py] Loading python module 'osd_support'
Nov 24 09:27:17 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-1-qelqsg[80312]: 2025-11-24T09:27:17.034+0000 7f74b3087140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Nov 24 09:27:17 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e17 e17: 2 total, 2 up, 2 in
Nov 24 09:27:17 compute-1 ceph-mgr[80316]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Nov 24 09:27:17 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-1-qelqsg[80312]: 2025-11-24T09:27:17.108+0000 7f74b3087140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Nov 24 09:27:17 compute-1 ceph-mgr[80316]: mgr[py] Loading python module 'pg_autoscaler'
Nov 24 09:27:17 compute-1 ceph-mon[80009]: pgmap v63: 4 pgs: 1 creating+peering, 3 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Nov 24 09:27:17 compute-1 ceph-mon[80009]: from='client.? 192.168.122.100:0/499996439' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 24 09:27:17 compute-1 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:27:17 compute-1 ceph-mon[80009]: from='client.? 192.168.122.100:0/499996439' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 24 09:27:17 compute-1 ceph-mon[80009]: osdmap e17: 2 total, 2 up, 2 in
Nov 24 09:27:17 compute-1 ceph-mgr[80316]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Nov 24 09:27:17 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-1-qelqsg[80312]: 2025-11-24T09:27:17.194+0000 7f74b3087140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Nov 24 09:27:17 compute-1 ceph-mgr[80316]: mgr[py] Loading python module 'progress'
Nov 24 09:27:17 compute-1 ceph-mgr[80316]: mgr[py] Module progress has missing NOTIFY_TYPES member
Nov 24 09:27:17 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-1-qelqsg[80312]: 2025-11-24T09:27:17.271+0000 7f74b3087140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Nov 24 09:27:17 compute-1 ceph-mgr[80316]: mgr[py] Loading python module 'prometheus'
Nov 24 09:27:17 compute-1 ceph-mgr[80316]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Nov 24 09:27:17 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-1-qelqsg[80312]: 2025-11-24T09:27:17.643+0000 7f74b3087140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Nov 24 09:27:17 compute-1 ceph-mgr[80316]: mgr[py] Loading python module 'rbd_support'
Nov 24 09:27:17 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e18 e18: 3 total, 2 up, 3 in
Nov 24 09:27:17 compute-1 ceph-mgr[80316]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Nov 24 09:27:17 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-1-qelqsg[80312]: 2025-11-24T09:27:17.748+0000 7f74b3087140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Nov 24 09:27:17 compute-1 ceph-mgr[80316]: mgr[py] Loading python module 'restful'
Nov 24 09:27:17 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e18 _set_new_cache_sizes cache_size:1020053330 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 09:27:17 compute-1 ceph-mgr[80316]: mgr[py] Loading python module 'rgw'
Nov 24 09:27:18 compute-1 ceph-mon[80009]: from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "8adc21f3-187b-4333-b4ae-3cc82866c3f9"}]: dispatch
Nov 24 09:27:18 compute-1 ceph-mon[80009]: from='client.? 192.168.122.102:0/1514770584' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "8adc21f3-187b-4333-b4ae-3cc82866c3f9"}]: dispatch
Nov 24 09:27:18 compute-1 ceph-mon[80009]: from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "8adc21f3-187b-4333-b4ae-3cc82866c3f9"}]': finished
Nov 24 09:27:18 compute-1 ceph-mon[80009]: osdmap e18: 3 total, 2 up, 3 in
Nov 24 09:27:18 compute-1 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 24 09:27:18 compute-1 ceph-mon[80009]: pgmap v67: 6 pgs: 2 unknown, 1 creating+peering, 3 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Nov 24 09:27:18 compute-1 ceph-mon[80009]: from='client.? 192.168.122.100:0/2555317958' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 24 09:27:18 compute-1 ceph-mgr[80316]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Nov 24 09:27:18 compute-1 ceph-mgr[80316]: mgr[py] Loading python module 'rook'
Nov 24 09:27:18 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-1-qelqsg[80312]: 2025-11-24T09:27:18.206+0000 7f74b3087140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Nov 24 09:27:18 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e19 e19: 3 total, 2 up, 3 in
Nov 24 09:27:18 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 19 pg[7.0( empty local-lis/les=0/0 n=0 ec=19/19 lis/c=0/0 les/c/f=0/0/0 sis=19) [1] r=0 lpr=19 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:27:18 compute-1 ceph-mgr[80316]: mgr[py] Module rook has missing NOTIFY_TYPES member
Nov 24 09:27:18 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-1-qelqsg[80312]: 2025-11-24T09:27:18.794+0000 7f74b3087140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Nov 24 09:27:18 compute-1 ceph-mgr[80316]: mgr[py] Loading python module 'selftest'
Nov 24 09:27:18 compute-1 ceph-mgr[80316]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Nov 24 09:27:18 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-1-qelqsg[80312]: 2025-11-24T09:27:18.865+0000 7f74b3087140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Nov 24 09:27:18 compute-1 ceph-mgr[80316]: mgr[py] Loading python module 'snap_schedule'
Nov 24 09:27:18 compute-1 ceph-mgr[80316]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Nov 24 09:27:18 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-1-qelqsg[80312]: 2025-11-24T09:27:18.944+0000 7f74b3087140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Nov 24 09:27:18 compute-1 ceph-mgr[80316]: mgr[py] Loading python module 'stats'
Nov 24 09:27:19 compute-1 ceph-mgr[80316]: mgr[py] Loading python module 'status'
Nov 24 09:27:19 compute-1 ceph-mgr[80316]: mgr[py] Module status has missing NOTIFY_TYPES member
Nov 24 09:27:19 compute-1 ceph-mgr[80316]: mgr[py] Loading python module 'telegraf'
Nov 24 09:27:19 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-1-qelqsg[80312]: 2025-11-24T09:27:19.096+0000 7f74b3087140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Nov 24 09:27:19 compute-1 ceph-mon[80009]: from='client.? 192.168.122.102:0/3983475032' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Nov 24 09:27:19 compute-1 ceph-mon[80009]: from='client.? 192.168.122.100:0/2555317958' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 24 09:27:19 compute-1 ceph-mon[80009]: osdmap e19: 3 total, 2 up, 3 in
Nov 24 09:27:19 compute-1 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 24 09:27:19 compute-1 ceph-mon[80009]: Standby manager daemon compute-2.rzcnzg started
Nov 24 09:27:19 compute-1 ceph-mgr[80316]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Nov 24 09:27:19 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-1-qelqsg[80312]: 2025-11-24T09:27:19.172+0000 7f74b3087140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Nov 24 09:27:19 compute-1 ceph-mgr[80316]: mgr[py] Loading python module 'telemetry'
Nov 24 09:27:19 compute-1 ceph-mgr[80316]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Nov 24 09:27:19 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-1-qelqsg[80312]: 2025-11-24T09:27:19.340+0000 7f74b3087140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Nov 24 09:27:19 compute-1 ceph-mgr[80316]: mgr[py] Loading python module 'test_orchestrator'
Nov 24 09:27:19 compute-1 ceph-mgr[80316]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Nov 24 09:27:19 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-1-qelqsg[80312]: 2025-11-24T09:27:19.578+0000 7f74b3087140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Nov 24 09:27:19 compute-1 ceph-mgr[80316]: mgr[py] Loading python module 'volumes'
Nov 24 09:27:19 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e20 e20: 3 total, 2 up, 3 in
Nov 24 09:27:19 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 20 pg[7.0( empty local-lis/les=19/20 n=0 ec=19/19 lis/c=0/0 les/c/f=0/0/0 sis=19) [1] r=0 lpr=19 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:27:19 compute-1 ceph-mgr[80316]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Nov 24 09:27:19 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-1-qelqsg[80312]: 2025-11-24T09:27:19.872+0000 7f74b3087140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Nov 24 09:27:19 compute-1 ceph-mgr[80316]: mgr[py] Loading python module 'zabbix'
Nov 24 09:27:19 compute-1 ceph-mgr[80316]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Nov 24 09:27:19 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-1-qelqsg[80312]: 2025-11-24T09:27:19.944+0000 7f74b3087140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Nov 24 09:27:19 compute-1 ceph-mgr[80316]: ms_deliver_dispatch: unhandled message 0x5637c5566d00 mon_map magic: 0 from mon.2 v2:192.168.122.101:3300/0
Nov 24 09:27:20 compute-1 ceph-mon[80009]: from='client.? 192.168.122.100:0/2927635265' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]: dispatch
Nov 24 09:27:20 compute-1 ceph-mon[80009]: from='client.? 192.168.122.100:0/2927635265' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]': finished
Nov 24 09:27:20 compute-1 ceph-mon[80009]: osdmap e20: 3 total, 2 up, 3 in
Nov 24 09:27:20 compute-1 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 24 09:27:20 compute-1 ceph-mon[80009]: mgrmap e10: compute-0.mauvni(active, since 114s), standbys: compute-2.rzcnzg
Nov 24 09:27:20 compute-1 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "mgr metadata", "who": "compute-2.rzcnzg", "id": "compute-2.rzcnzg"}]: dispatch
Nov 24 09:27:20 compute-1 ceph-mon[80009]: pgmap v70: 7 pgs: 3 unknown, 1 creating+peering, 3 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Nov 24 09:27:20 compute-1 ceph-mon[80009]: Standby manager daemon compute-1.qelqsg started
Nov 24 09:27:20 compute-1 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:27:20 compute-1 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:27:20 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e21 e21: 3 total, 2 up, 3 in
Nov 24 09:27:21 compute-1 ceph-mon[80009]: from='client.? 192.168.122.100:0/2444820917' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]: dispatch
Nov 24 09:27:21 compute-1 ceph-mon[80009]: from='client.? 192.168.122.100:0/2444820917' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]': finished
Nov 24 09:27:21 compute-1 ceph-mon[80009]: osdmap e21: 3 total, 2 up, 3 in
Nov 24 09:27:21 compute-1 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 24 09:27:21 compute-1 ceph-mon[80009]: mgrmap e11: compute-0.mauvni(active, since 115s), standbys: compute-2.rzcnzg, compute-1.qelqsg
Nov 24 09:27:21 compute-1 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "mgr metadata", "who": "compute-1.qelqsg", "id": "compute-1.qelqsg"}]: dispatch
Nov 24 09:27:22 compute-1 ceph-mon[80009]: from='client.? 192.168.122.100:0/3988938670' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]: dispatch
Nov 24 09:27:22 compute-1 ceph-mon[80009]: pgmap v72: 7 pgs: 7 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Nov 24 09:27:22 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e22 e22: 3 total, 2 up, 3 in
Nov 24 09:27:22 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e22 _set_new_cache_sizes cache_size:1020054713 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 09:27:23 compute-1 ceph-mon[80009]: Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Nov 24 09:27:23 compute-1 ceph-mon[80009]: from='client.? 192.168.122.100:0/3988938670' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]': finished
Nov 24 09:27:23 compute-1 ceph-mon[80009]: osdmap e22: 3 total, 2 up, 3 in
Nov 24 09:27:23 compute-1 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 24 09:27:23 compute-1 ceph-mon[80009]: from='client.? 192.168.122.100:0/3230617921' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]: dispatch
Nov 24 09:27:23 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e23 e23: 3 total, 2 up, 3 in
Nov 24 09:27:24 compute-1 ceph-mon[80009]: from='client.? 192.168.122.100:0/3230617921' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]': finished
Nov 24 09:27:24 compute-1 ceph-mon[80009]: osdmap e23: 3 total, 2 up, 3 in
Nov 24 09:27:24 compute-1 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 24 09:27:24 compute-1 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch
Nov 24 09:27:24 compute-1 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 09:27:24 compute-1 ceph-mon[80009]: Deploying daemon osd.2 on compute-2
Nov 24 09:27:24 compute-1 ceph-mon[80009]: pgmap v75: 7 pgs: 7 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Nov 24 09:27:24 compute-1 ceph-mon[80009]: from='client.? 192.168.122.100:0/2984871477' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]: dispatch
Nov 24 09:27:24 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e24 e24: 3 total, 2 up, 3 in
Nov 24 09:27:25 compute-1 ceph-mon[80009]: from='client.? 192.168.122.100:0/2984871477' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]': finished
Nov 24 09:27:25 compute-1 ceph-mon[80009]: osdmap e24: 3 total, 2 up, 3 in
Nov 24 09:27:25 compute-1 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 24 09:27:25 compute-1 ceph-mon[80009]: from='client.? 192.168.122.100:0/2208436282' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]: dispatch
Nov 24 09:27:25 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e25 e25: 3 total, 2 up, 3 in
Nov 24 09:27:26 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e26 e26: 3 total, 2 up, 3 in
Nov 24 09:27:26 compute-1 ceph-mon[80009]: from='client.? 192.168.122.100:0/2208436282' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]': finished
Nov 24 09:27:26 compute-1 ceph-mon[80009]: osdmap e25: 3 total, 2 up, 3 in
Nov 24 09:27:26 compute-1 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 24 09:27:26 compute-1 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]: dispatch
Nov 24 09:27:26 compute-1 ceph-mon[80009]: pgmap v78: 7 pgs: 7 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Nov 24 09:27:27 compute-1 ceph-mon[80009]: Health check cleared: POOL_APP_NOT_ENABLED (was: 2 pool(s) do not have an application enabled)
Nov 24 09:27:27 compute-1 ceph-mon[80009]: Cluster is now healthy
Nov 24 09:27:27 compute-1 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]': finished
Nov 24 09:27:27 compute-1 ceph-mon[80009]: osdmap e26: 3 total, 2 up, 3 in
Nov 24 09:27:27 compute-1 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 24 09:27:27 compute-1 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]: dispatch
Nov 24 09:27:27 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e27 e27: 3 total, 2 up, 3 in
Nov 24 09:27:27 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e27 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 09:27:28 compute-1 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]': finished
Nov 24 09:27:28 compute-1 ceph-mon[80009]: osdmap e27: 3 total, 2 up, 3 in
Nov 24 09:27:28 compute-1 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 24 09:27:28 compute-1 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]: dispatch
Nov 24 09:27:28 compute-1 ceph-mon[80009]: pgmap v81: 7 pgs: 7 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Nov 24 09:27:28 compute-1 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 24 09:27:28 compute-1 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 24 09:27:28 compute-1 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:27:28 compute-1 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:27:28 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e28 e28: 3 total, 2 up, 3 in
Nov 24 09:27:28 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 28 pg[2.0( empty local-lis/les=13/14 n=0 ec=13/13 lis/c=13/13 les/c/f=14/14/0 sis=28 pruub=9.575661659s) [1] r=0 lpr=28 pi=[13,28)/1 crt=0'0 mlcod 0'0 active pruub 64.608634949s@ mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:27:28 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 28 pg[2.0( empty local-lis/les=13/14 n=0 ec=13/13 lis/c=13/13 les/c/f=14/14/0 sis=28 pruub=9.575661659s) [1] r=0 lpr=28 pi=[13,28)/1 crt=0'0 mlcod 0'0 unknown pruub 64.608634949s@ mbc={}] state<Start>: transitioning to Primary
Nov 24 09:27:29 compute-1 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]': finished
Nov 24 09:27:29 compute-1 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]': finished
Nov 24 09:27:29 compute-1 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]': finished
Nov 24 09:27:29 compute-1 ceph-mon[80009]: osdmap e28: 3 total, 2 up, 3 in
Nov 24 09:27:29 compute-1 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 24 09:27:29 compute-1 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]: dispatch
Nov 24 09:27:29 compute-1 ceph-mon[80009]: from='client.? 192.168.122.100:0/2584835535' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Nov 24 09:27:29 compute-1 ceph-mon[80009]: from='client.? 192.168.122.100:0/2584835535' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Nov 24 09:27:29 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e29 e29: 3 total, 2 up, 3 in
Nov 24 09:27:29 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 29 pg[2.1f( empty local-lis/les=13/14 n=0 ec=28/13 lis/c=13/13 les/c/f=14/14/0 sis=28) [1] r=0 lpr=28 pi=[13,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:27:29 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 29 pg[2.1d( empty local-lis/les=13/14 n=0 ec=28/13 lis/c=13/13 les/c/f=14/14/0 sis=28) [1] r=0 lpr=28 pi=[13,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:27:29 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 29 pg[2.1c( empty local-lis/les=13/14 n=0 ec=28/13 lis/c=13/13 les/c/f=14/14/0 sis=28) [1] r=0 lpr=28 pi=[13,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:27:29 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 29 pg[2.1e( empty local-lis/les=13/14 n=0 ec=28/13 lis/c=13/13 les/c/f=14/14/0 sis=28) [1] r=0 lpr=28 pi=[13,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:27:29 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 29 pg[2.a( empty local-lis/les=13/14 n=0 ec=28/13 lis/c=13/13 les/c/f=14/14/0 sis=28) [1] r=0 lpr=28 pi=[13,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:27:29 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 29 pg[2.9( empty local-lis/les=13/14 n=0 ec=28/13 lis/c=13/13 les/c/f=14/14/0 sis=28) [1] r=0 lpr=28 pi=[13,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:27:29 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 29 pg[2.8( empty local-lis/les=13/14 n=0 ec=28/13 lis/c=13/13 les/c/f=14/14/0 sis=28) [1] r=0 lpr=28 pi=[13,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:27:29 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 29 pg[2.7( empty local-lis/les=13/14 n=0 ec=28/13 lis/c=13/13 les/c/f=14/14/0 sis=28) [1] r=0 lpr=28 pi=[13,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:27:29 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 29 pg[2.6( empty local-lis/les=13/14 n=0 ec=28/13 lis/c=13/13 les/c/f=14/14/0 sis=28) [1] r=0 lpr=28 pi=[13,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:27:29 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 29 pg[2.4( empty local-lis/les=13/14 n=0 ec=28/13 lis/c=13/13 les/c/f=14/14/0 sis=28) [1] r=0 lpr=28 pi=[13,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:27:29 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 29 pg[2.1( empty local-lis/les=13/14 n=0 ec=28/13 lis/c=13/13 les/c/f=14/14/0 sis=28) [1] r=0 lpr=28 pi=[13,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:27:29 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 29 pg[2.2( empty local-lis/les=13/14 n=0 ec=28/13 lis/c=13/13 les/c/f=14/14/0 sis=28) [1] r=0 lpr=28 pi=[13,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:27:29 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 29 pg[2.5( empty local-lis/les=13/14 n=0 ec=28/13 lis/c=13/13 les/c/f=14/14/0 sis=28) [1] r=0 lpr=28 pi=[13,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:27:29 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 29 pg[2.3( empty local-lis/les=13/14 n=0 ec=28/13 lis/c=13/13 les/c/f=14/14/0 sis=28) [1] r=0 lpr=28 pi=[13,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:27:29 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 29 pg[2.b( empty local-lis/les=13/14 n=0 ec=28/13 lis/c=13/13 les/c/f=14/14/0 sis=28) [1] r=0 lpr=28 pi=[13,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:27:29 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 29 pg[2.c( empty local-lis/les=13/14 n=0 ec=28/13 lis/c=13/13 les/c/f=14/14/0 sis=28) [1] r=0 lpr=28 pi=[13,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:27:29 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 29 pg[2.1b( empty local-lis/les=13/14 n=0 ec=28/13 lis/c=13/13 les/c/f=14/14/0 sis=28) [1] r=0 lpr=28 pi=[13,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:27:29 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 29 pg[2.e( empty local-lis/les=13/14 n=0 ec=28/13 lis/c=13/13 les/c/f=14/14/0 sis=28) [1] r=0 lpr=28 pi=[13,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:27:29 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 29 pg[2.f( empty local-lis/les=13/14 n=0 ec=28/13 lis/c=13/13 les/c/f=14/14/0 sis=28) [1] r=0 lpr=28 pi=[13,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:27:29 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 29 pg[2.d( empty local-lis/les=13/14 n=0 ec=28/13 lis/c=13/13 les/c/f=14/14/0 sis=28) [1] r=0 lpr=28 pi=[13,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:27:29 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 29 pg[2.10( empty local-lis/les=13/14 n=0 ec=28/13 lis/c=13/13 les/c/f=14/14/0 sis=28) [1] r=0 lpr=28 pi=[13,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:27:29 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 29 pg[2.11( empty local-lis/les=13/14 n=0 ec=28/13 lis/c=13/13 les/c/f=14/14/0 sis=28) [1] r=0 lpr=28 pi=[13,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:27:29 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 29 pg[2.13( empty local-lis/les=13/14 n=0 ec=28/13 lis/c=13/13 les/c/f=14/14/0 sis=28) [1] r=0 lpr=28 pi=[13,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:27:29 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 29 pg[2.14( empty local-lis/les=13/14 n=0 ec=28/13 lis/c=13/13 les/c/f=14/14/0 sis=28) [1] r=0 lpr=28 pi=[13,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:27:29 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 29 pg[2.15( empty local-lis/les=13/14 n=0 ec=28/13 lis/c=13/13 les/c/f=14/14/0 sis=28) [1] r=0 lpr=28 pi=[13,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:27:29 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 29 pg[2.16( empty local-lis/les=13/14 n=0 ec=28/13 lis/c=13/13 les/c/f=14/14/0 sis=28) [1] r=0 lpr=28 pi=[13,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:27:29 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 29 pg[2.17( empty local-lis/les=13/14 n=0 ec=28/13 lis/c=13/13 les/c/f=14/14/0 sis=28) [1] r=0 lpr=28 pi=[13,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:27:29 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 29 pg[2.18( empty local-lis/les=13/14 n=0 ec=28/13 lis/c=13/13 les/c/f=14/14/0 sis=28) [1] r=0 lpr=28 pi=[13,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:27:29 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 29 pg[2.19( empty local-lis/les=13/14 n=0 ec=28/13 lis/c=13/13 les/c/f=14/14/0 sis=28) [1] r=0 lpr=28 pi=[13,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:27:29 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 29 pg[2.1a( empty local-lis/les=13/14 n=0 ec=28/13 lis/c=13/13 les/c/f=14/14/0 sis=28) [1] r=0 lpr=28 pi=[13,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:27:29 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 29 pg[2.12( empty local-lis/les=13/14 n=0 ec=28/13 lis/c=13/13 les/c/f=14/14/0 sis=28) [1] r=0 lpr=28 pi=[13,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:27:29 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 29 pg[2.1c( empty local-lis/les=28/29 n=0 ec=28/13 lis/c=13/13 les/c/f=14/14/0 sis=28) [1] r=0 lpr=28 pi=[13,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:27:29 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 29 pg[2.1f( empty local-lis/les=28/29 n=0 ec=28/13 lis/c=13/13 les/c/f=14/14/0 sis=28) [1] r=0 lpr=28 pi=[13,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:27:29 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 29 pg[2.a( empty local-lis/les=28/29 n=0 ec=28/13 lis/c=13/13 les/c/f=14/14/0 sis=28) [1] r=0 lpr=28 pi=[13,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:27:29 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 29 pg[2.8( empty local-lis/les=28/29 n=0 ec=28/13 lis/c=13/13 les/c/f=14/14/0 sis=28) [1] r=0 lpr=28 pi=[13,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:27:29 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 29 pg[2.1d( empty local-lis/les=28/29 n=0 ec=28/13 lis/c=13/13 les/c/f=14/14/0 sis=28) [1] r=0 lpr=28 pi=[13,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:27:29 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 29 pg[2.1e( empty local-lis/les=28/29 n=0 ec=28/13 lis/c=13/13 les/c/f=14/14/0 sis=28) [1] r=0 lpr=28 pi=[13,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:27:29 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 29 pg[2.7( empty local-lis/les=28/29 n=0 ec=28/13 lis/c=13/13 les/c/f=14/14/0 sis=28) [1] r=0 lpr=28 pi=[13,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:27:29 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 29 pg[2.9( empty local-lis/les=28/29 n=0 ec=28/13 lis/c=13/13 les/c/f=14/14/0 sis=28) [1] r=0 lpr=28 pi=[13,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:27:29 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 29 pg[2.6( empty local-lis/les=28/29 n=0 ec=28/13 lis/c=13/13 les/c/f=14/14/0 sis=28) [1] r=0 lpr=28 pi=[13,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:27:29 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 29 pg[2.4( empty local-lis/les=28/29 n=0 ec=28/13 lis/c=13/13 les/c/f=14/14/0 sis=28) [1] r=0 lpr=28 pi=[13,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:27:29 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 29 pg[2.5( empty local-lis/les=28/29 n=0 ec=28/13 lis/c=13/13 les/c/f=14/14/0 sis=28) [1] r=0 lpr=28 pi=[13,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:27:29 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 29 pg[2.1( empty local-lis/les=28/29 n=0 ec=28/13 lis/c=13/13 les/c/f=14/14/0 sis=28) [1] r=0 lpr=28 pi=[13,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:27:29 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 29 pg[2.2( empty local-lis/les=28/29 n=0 ec=28/13 lis/c=13/13 les/c/f=14/14/0 sis=28) [1] r=0 lpr=28 pi=[13,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:27:29 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 29 pg[2.0( empty local-lis/les=28/29 n=0 ec=13/13 lis/c=13/13 les/c/f=14/14/0 sis=28) [1] r=0 lpr=28 pi=[13,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:27:29 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 29 pg[2.b( empty local-lis/les=28/29 n=0 ec=28/13 lis/c=13/13 les/c/f=14/14/0 sis=28) [1] r=0 lpr=28 pi=[13,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:27:29 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 29 pg[2.3( empty local-lis/les=28/29 n=0 ec=28/13 lis/c=13/13 les/c/f=14/14/0 sis=28) [1] r=0 lpr=28 pi=[13,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:27:29 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 29 pg[2.c( empty local-lis/les=28/29 n=0 ec=28/13 lis/c=13/13 les/c/f=14/14/0 sis=28) [1] r=0 lpr=28 pi=[13,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:27:29 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 29 pg[2.e( empty local-lis/les=28/29 n=0 ec=28/13 lis/c=13/13 les/c/f=14/14/0 sis=28) [1] r=0 lpr=28 pi=[13,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:27:29 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 29 pg[2.d( empty local-lis/les=28/29 n=0 ec=28/13 lis/c=13/13 les/c/f=14/14/0 sis=28) [1] r=0 lpr=28 pi=[13,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:27:29 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 29 pg[2.f( empty local-lis/les=28/29 n=0 ec=28/13 lis/c=13/13 les/c/f=14/14/0 sis=28) [1] r=0 lpr=28 pi=[13,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:27:29 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 29 pg[2.10( empty local-lis/les=28/29 n=0 ec=28/13 lis/c=13/13 les/c/f=14/14/0 sis=28) [1] r=0 lpr=28 pi=[13,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:27:29 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 29 pg[2.1b( empty local-lis/les=28/29 n=0 ec=28/13 lis/c=13/13 les/c/f=14/14/0 sis=28) [1] r=0 lpr=28 pi=[13,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:27:29 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 29 pg[2.11( empty local-lis/les=28/29 n=0 ec=28/13 lis/c=13/13 les/c/f=14/14/0 sis=28) [1] r=0 lpr=28 pi=[13,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:27:29 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 29 pg[2.13( empty local-lis/les=28/29 n=0 ec=28/13 lis/c=13/13 les/c/f=14/14/0 sis=28) [1] r=0 lpr=28 pi=[13,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:27:29 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 29 pg[2.14( empty local-lis/les=28/29 n=0 ec=28/13 lis/c=13/13 les/c/f=14/14/0 sis=28) [1] r=0 lpr=28 pi=[13,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:27:29 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 29 pg[2.15( empty local-lis/les=28/29 n=0 ec=28/13 lis/c=13/13 les/c/f=14/14/0 sis=28) [1] r=0 lpr=28 pi=[13,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:27:29 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 29 pg[2.16( empty local-lis/les=28/29 n=0 ec=28/13 lis/c=13/13 les/c/f=14/14/0 sis=28) [1] r=0 lpr=28 pi=[13,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:27:29 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 29 pg[2.17( empty local-lis/les=28/29 n=0 ec=28/13 lis/c=13/13 les/c/f=14/14/0 sis=28) [1] r=0 lpr=28 pi=[13,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:27:29 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 29 pg[2.18( empty local-lis/les=28/29 n=0 ec=28/13 lis/c=13/13 les/c/f=14/14/0 sis=28) [1] r=0 lpr=28 pi=[13,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:27:29 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 29 pg[2.19( empty local-lis/les=28/29 n=0 ec=28/13 lis/c=13/13 les/c/f=14/14/0 sis=28) [1] r=0 lpr=28 pi=[13,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:27:29 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 29 pg[2.1a( empty local-lis/les=28/29 n=0 ec=28/13 lis/c=13/13 les/c/f=14/14/0 sis=28) [1] r=0 lpr=28 pi=[13,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:27:29 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 29 pg[2.12( empty local-lis/les=28/29 n=0 ec=28/13 lis/c=13/13 les/c/f=14/14/0 sis=28) [1] r=0 lpr=28 pi=[13,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:27:30 compute-1 sudo[80348]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 24 09:27:30 compute-1 sudo[80348]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:27:30 compute-1 sudo[80348]: pam_unix(sudo:session): session closed for user root
Nov 24 09:27:30 compute-1 sudo[80373]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 09:27:30 compute-1 sudo[80373]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:27:30 compute-1 sudo[80373]: pam_unix(sudo:session): session closed for user root
Nov 24 09:27:30 compute-1 sudo[80398]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ls
Nov 24 09:27:30 compute-1 sudo[80398]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:27:30 compute-1 ceph-osd[77497]: log_channel(cluster) log [DBG] : 2.1f scrub starts
Nov 24 09:27:30 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e30 e30: 3 total, 2 up, 3 in
Nov 24 09:27:30 compute-1 ceph-osd[77497]: log_channel(cluster) log [DBG] : 2.1f scrub ok
Nov 24 09:27:30 compute-1 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]': finished
Nov 24 09:27:30 compute-1 ceph-mon[80009]: osdmap e29: 3 total, 2 up, 3 in
Nov 24 09:27:30 compute-1 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 24 09:27:30 compute-1 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "32"}]: dispatch
Nov 24 09:27:30 compute-1 ceph-mon[80009]: from='client.? 192.168.122.100:0/3323422740' entity='client.admin' 
Nov 24 09:27:30 compute-1 ceph-mon[80009]: 3.1b scrub starts
Nov 24 09:27:30 compute-1 ceph-mon[80009]: 3.1b scrub ok
Nov 24 09:27:30 compute-1 ceph-mon[80009]: pgmap v84: 69 pgs: 62 unknown, 7 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Nov 24 09:27:30 compute-1 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 24 09:27:30 compute-1 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 24 09:27:30 compute-1 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:27:30 compute-1 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:27:30 compute-1 ceph-mon[80009]: from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
Nov 24 09:27:30 compute-1 ceph-mon[80009]: from='osd.2 [v2:192.168.122.102:6800/4204763159,v1:192.168.122.102:6801/4204763159]' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
Nov 24 09:27:30 compute-1 podman[80494]: 2025-11-24 09:27:30.939552303 +0000 UTC m=+0.052583730 container exec fca3d6a645ca50145f34396c21cf8798c75622ec7e27bb7d7b9d2df471762abc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-crash-compute-1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325)
Nov 24 09:27:31 compute-1 podman[80494]: 2025-11-24 09:27:31.034805037 +0000 UTC m=+0.147836484 container exec_died fca3d6a645ca50145f34396c21cf8798c75622ec7e27bb7d7b9d2df471762abc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-crash-compute-1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_REF=squid)
Nov 24 09:27:31 compute-1 sudo[80398]: pam_unix(sudo:session): session closed for user root
Nov 24 09:27:31 compute-1 sudo[80582]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 09:27:31 compute-1 sudo[80582]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:27:31 compute-1 sudo[80582]: pam_unix(sudo:session): session closed for user root
Nov 24 09:27:31 compute-1 sudo[80607]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Nov 24 09:27:31 compute-1 sudo[80607]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:27:31 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e31 e31: 3 total, 2 up, 3 in
Nov 24 09:27:31 compute-1 ceph-osd[77497]: log_channel(cluster) log [DBG] : 2.1c scrub starts
Nov 24 09:27:31 compute-1 ceph-osd[77497]: log_channel(cluster) log [DBG] : 2.1c scrub ok
Nov 24 09:27:31 compute-1 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "32"}]': finished
Nov 24 09:27:31 compute-1 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]': finished
Nov 24 09:27:31 compute-1 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]': finished
Nov 24 09:27:31 compute-1 ceph-mon[80009]: from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished
Nov 24 09:27:31 compute-1 ceph-mon[80009]: 2.1f scrub starts
Nov 24 09:27:31 compute-1 ceph-mon[80009]: osdmap e30: 3 total, 2 up, 3 in
Nov 24 09:27:31 compute-1 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 24 09:27:31 compute-1 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]: dispatch
Nov 24 09:27:31 compute-1 ceph-mon[80009]: 2.1f scrub ok
Nov 24 09:27:31 compute-1 ceph-mon[80009]: from='osd.2 [v2:192.168.122.102:6800/4204763159,v1:192.168.122.102:6801/4204763159]' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-2", "root=default"]}]: dispatch
Nov 24 09:27:31 compute-1 ceph-mon[80009]: from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-2", "root=default"]}]: dispatch
Nov 24 09:27:31 compute-1 ceph-mon[80009]: from='client.14298 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 09:27:31 compute-1 ceph-mon[80009]: Saving service rgw.rgw spec with placement compute-0;compute-1;compute-2
Nov 24 09:27:31 compute-1 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:27:31 compute-1 ceph-mon[80009]: Saving service ingress.rgw.default spec with placement count:2
Nov 24 09:27:31 compute-1 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:27:31 compute-1 ceph-mon[80009]: 3.1c scrub starts
Nov 24 09:27:31 compute-1 ceph-mon[80009]: 3.1c scrub ok
Nov 24 09:27:31 compute-1 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:27:31 compute-1 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:27:31 compute-1 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]': finished
Nov 24 09:27:31 compute-1 ceph-mon[80009]: from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-2", "root=default"]}]': finished
Nov 24 09:27:31 compute-1 ceph-mon[80009]: osdmap e31: 3 total, 2 up, 3 in
Nov 24 09:27:31 compute-1 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 24 09:27:31 compute-1 sudo[80607]: pam_unix(sudo:session): session closed for user root
Nov 24 09:27:32 compute-1 ceph-osd[77497]: log_channel(cluster) log [DBG] : 2.a deep-scrub starts
Nov 24 09:27:32 compute-1 ceph-osd[77497]: log_channel(cluster) log [DBG] : 2.a deep-scrub ok
Nov 24 09:27:32 compute-1 ceph-mon[80009]: 2.1c scrub starts
Nov 24 09:27:32 compute-1 ceph-mon[80009]: 2.1c scrub ok
Nov 24 09:27:32 compute-1 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 24 09:27:32 compute-1 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:27:32 compute-1 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:27:32 compute-1 ceph-mon[80009]: 4.1e deep-scrub starts
Nov 24 09:27:32 compute-1 ceph-mon[80009]: 4.1e deep-scrub ok
Nov 24 09:27:32 compute-1 ceph-mon[80009]: pgmap v87: 131 pgs: 62 unknown, 69 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Nov 24 09:27:32 compute-1 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 24 09:27:32 compute-1 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 24 09:27:32 compute-1 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:27:32 compute-1 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 24 09:27:32 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e32 e32: 3 total, 2 up, 3 in
Nov 24 09:27:32 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e32 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 09:27:33 compute-1 ceph-osd[77497]: log_channel(cluster) log [DBG] : 2.8 scrub starts
Nov 24 09:27:33 compute-1 ceph-osd[77497]: log_channel(cluster) log [DBG] : 2.8 scrub ok
Nov 24 09:27:33 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e33 e33: 3 total, 2 up, 3 in
Nov 24 09:27:33 compute-1 ceph-mon[80009]: purged_snaps scrub starts
Nov 24 09:27:33 compute-1 ceph-mon[80009]: purged_snaps scrub ok
Nov 24 09:27:33 compute-1 ceph-mon[80009]: 2.a deep-scrub starts
Nov 24 09:27:33 compute-1 ceph-mon[80009]: 2.a deep-scrub ok
Nov 24 09:27:33 compute-1 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]': finished
Nov 24 09:27:33 compute-1 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "32"}]': finished
Nov 24 09:27:33 compute-1 ceph-mon[80009]: osdmap e32: 3 total, 2 up, 3 in
Nov 24 09:27:33 compute-1 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 24 09:27:33 compute-1 ceph-mon[80009]: 4.1f scrub starts
Nov 24 09:27:33 compute-1 ceph-mon[80009]: 4.1f scrub ok
Nov 24 09:27:33 compute-1 ceph-mon[80009]: from='client.24131 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 09:27:33 compute-1 ceph-mon[80009]: Saving service node-exporter spec with placement *
Nov 24 09:27:33 compute-1 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:27:33 compute-1 ceph-mon[80009]: Saving service grafana spec with placement compute-0;count:1
Nov 24 09:27:33 compute-1 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:27:33 compute-1 ceph-mon[80009]: Saving service prometheus spec with placement compute-0;count:1
Nov 24 09:27:33 compute-1 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:27:33 compute-1 ceph-mon[80009]: Saving service alertmanager spec with placement compute-0;count:1
Nov 24 09:27:33 compute-1 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:27:33 compute-1 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 24 09:27:34 compute-1 sudo[80664]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Nov 24 09:27:34 compute-1 sudo[80664]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:27:34 compute-1 sudo[80664]: pam_unix(sudo:session): session closed for user root
Nov 24 09:27:34 compute-1 sudo[80689]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-84a084c3-61a7-5de7-8207-1f88efa59a64/etc/ceph
Nov 24 09:27:34 compute-1 sudo[80689]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:27:34 compute-1 sudo[80689]: pam_unix(sudo:session): session closed for user root
Nov 24 09:27:34 compute-1 sudo[80714]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-84a084c3-61a7-5de7-8207-1f88efa59a64/etc/ceph/ceph.conf.new
Nov 24 09:27:34 compute-1 sudo[80714]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:27:34 compute-1 sudo[80714]: pam_unix(sudo:session): session closed for user root
Nov 24 09:27:34 compute-1 sudo[80739]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-84a084c3-61a7-5de7-8207-1f88efa59a64
Nov 24 09:27:34 compute-1 sudo[80739]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:27:34 compute-1 sudo[80739]: pam_unix(sudo:session): session closed for user root
Nov 24 09:27:34 compute-1 sudo[80764]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-84a084c3-61a7-5de7-8207-1f88efa59a64/etc/ceph/ceph.conf.new
Nov 24 09:27:34 compute-1 sudo[80764]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:27:34 compute-1 sudo[80764]: pam_unix(sudo:session): session closed for user root
Nov 24 09:27:34 compute-1 sudo[80812]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-84a084c3-61a7-5de7-8207-1f88efa59a64/etc/ceph/ceph.conf.new
Nov 24 09:27:34 compute-1 sudo[80812]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:27:34 compute-1 sudo[80812]: pam_unix(sudo:session): session closed for user root
Nov 24 09:27:34 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 32 pg[7.0( empty local-lis/les=19/20 n=0 ec=19/19 lis/c=19/19 les/c/f=20/20/0 sis=32 pruub=9.349663734s) [1] r=0 lpr=32 pi=[19,32)/1 crt=0'0 mlcod 0'0 active pruub 70.276336670s@ mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:27:34 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 33 pg[7.0( empty local-lis/les=19/20 n=0 ec=19/19 lis/c=19/19 les/c/f=20/20/0 sis=32 pruub=9.349663734s) [1] r=0 lpr=32 pi=[19,32)/1 crt=0'0 mlcod 0'0 unknown pruub 70.276336670s@ mbc={}] state<Start>: transitioning to Primary
Nov 24 09:27:34 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 33 pg[7.19( empty local-lis/les=19/20 n=0 ec=32/19 lis/c=19/19 les/c/f=20/20/0 sis=32) [1] r=0 lpr=32 pi=[19,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:27:34 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 33 pg[7.1a( empty local-lis/les=19/20 n=0 ec=32/19 lis/c=19/19 les/c/f=20/20/0 sis=32) [1] r=0 lpr=32 pi=[19,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:27:34 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 33 pg[7.f( empty local-lis/les=19/20 n=0 ec=32/19 lis/c=19/19 les/c/f=20/20/0 sis=32) [1] r=0 lpr=32 pi=[19,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:27:34 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 33 pg[7.10( empty local-lis/les=19/20 n=0 ec=32/19 lis/c=19/19 les/c/f=20/20/0 sis=32) [1] r=0 lpr=32 pi=[19,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:27:34 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 33 pg[7.1b( empty local-lis/les=19/20 n=0 ec=32/19 lis/c=19/19 les/c/f=20/20/0 sis=32) [1] r=0 lpr=32 pi=[19,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:27:34 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 33 pg[7.1c( empty local-lis/les=19/20 n=0 ec=32/19 lis/c=19/19 les/c/f=20/20/0 sis=32) [1] r=0 lpr=32 pi=[19,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:27:34 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 33 pg[7.1d( empty local-lis/les=19/20 n=0 ec=32/19 lis/c=19/19 les/c/f=20/20/0 sis=32) [1] r=0 lpr=32 pi=[19,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:27:34 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 33 pg[7.1e( empty local-lis/les=19/20 n=0 ec=32/19 lis/c=19/19 les/c/f=20/20/0 sis=32) [1] r=0 lpr=32 pi=[19,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:27:34 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 33 pg[7.1( empty local-lis/les=19/20 n=0 ec=32/19 lis/c=19/19 les/c/f=20/20/0 sis=32) [1] r=0 lpr=32 pi=[19,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:27:34 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 33 pg[7.2( empty local-lis/les=19/20 n=0 ec=32/19 lis/c=19/19 les/c/f=20/20/0 sis=32) [1] r=0 lpr=32 pi=[19,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:27:34 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 33 pg[7.1f( empty local-lis/les=19/20 n=0 ec=32/19 lis/c=19/19 les/c/f=20/20/0 sis=32) [1] r=0 lpr=32 pi=[19,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:27:34 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 33 pg[7.5( empty local-lis/les=19/20 n=0 ec=32/19 lis/c=19/19 les/c/f=20/20/0 sis=32) [1] r=0 lpr=32 pi=[19,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:27:34 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 33 pg[7.6( empty local-lis/les=19/20 n=0 ec=32/19 lis/c=19/19 les/c/f=20/20/0 sis=32) [1] r=0 lpr=32 pi=[19,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:27:34 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 33 pg[7.3( empty local-lis/les=19/20 n=0 ec=32/19 lis/c=19/19 les/c/f=20/20/0 sis=32) [1] r=0 lpr=32 pi=[19,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:27:34 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 33 pg[7.4( empty local-lis/les=19/20 n=0 ec=32/19 lis/c=19/19 les/c/f=20/20/0 sis=32) [1] r=0 lpr=32 pi=[19,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:27:34 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 33 pg[7.7( empty local-lis/les=19/20 n=0 ec=32/19 lis/c=19/19 les/c/f=20/20/0 sis=32) [1] r=0 lpr=32 pi=[19,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:27:34 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 33 pg[7.8( empty local-lis/les=19/20 n=0 ec=32/19 lis/c=19/19 les/c/f=20/20/0 sis=32) [1] r=0 lpr=32 pi=[19,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:27:34 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 33 pg[7.9( empty local-lis/les=19/20 n=0 ec=32/19 lis/c=19/19 les/c/f=20/20/0 sis=32) [1] r=0 lpr=32 pi=[19,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:27:34 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 33 pg[7.a( empty local-lis/les=19/20 n=0 ec=32/19 lis/c=19/19 les/c/f=20/20/0 sis=32) [1] r=0 lpr=32 pi=[19,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:27:34 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 33 pg[7.b( empty local-lis/les=19/20 n=0 ec=32/19 lis/c=19/19 les/c/f=20/20/0 sis=32) [1] r=0 lpr=32 pi=[19,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:27:34 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 33 pg[7.c( empty local-lis/les=19/20 n=0 ec=32/19 lis/c=19/19 les/c/f=20/20/0 sis=32) [1] r=0 lpr=32 pi=[19,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:27:34 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 33 pg[7.d( empty local-lis/les=19/20 n=0 ec=32/19 lis/c=19/19 les/c/f=20/20/0 sis=32) [1] r=0 lpr=32 pi=[19,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:27:34 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 33 pg[7.e( empty local-lis/les=19/20 n=0 ec=32/19 lis/c=19/19 les/c/f=20/20/0 sis=32) [1] r=0 lpr=32 pi=[19,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:27:34 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 33 pg[7.13( empty local-lis/les=19/20 n=0 ec=32/19 lis/c=19/19 les/c/f=20/20/0 sis=32) [1] r=0 lpr=32 pi=[19,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:27:34 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 33 pg[7.14( empty local-lis/les=19/20 n=0 ec=32/19 lis/c=19/19 les/c/f=20/20/0 sis=32) [1] r=0 lpr=32 pi=[19,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:27:34 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 33 pg[7.11( empty local-lis/les=19/20 n=0 ec=32/19 lis/c=19/19 les/c/f=20/20/0 sis=32) [1] r=0 lpr=32 pi=[19,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:27:34 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 33 pg[7.17( empty local-lis/les=19/20 n=0 ec=32/19 lis/c=19/19 les/c/f=20/20/0 sis=32) [1] r=0 lpr=32 pi=[19,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:27:34 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 33 pg[7.12( empty local-lis/les=19/20 n=0 ec=32/19 lis/c=19/19 les/c/f=20/20/0 sis=32) [1] r=0 lpr=32 pi=[19,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:27:34 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 33 pg[7.18( empty local-lis/les=19/20 n=0 ec=32/19 lis/c=19/19 les/c/f=20/20/0 sis=32) [1] r=0 lpr=32 pi=[19,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:27:34 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 33 pg[7.16( empty local-lis/les=19/20 n=0 ec=32/19 lis/c=19/19 les/c/f=20/20/0 sis=32) [1] r=0 lpr=32 pi=[19,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:27:34 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 33 pg[7.15( empty local-lis/les=19/20 n=0 ec=32/19 lis/c=19/19 les/c/f=20/20/0 sis=32) [1] r=0 lpr=32 pi=[19,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:27:34 compute-1 sudo[80837]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-84a084c3-61a7-5de7-8207-1f88efa59a64/etc/ceph/ceph.conf.new
Nov 24 09:27:34 compute-1 sudo[80837]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:27:34 compute-1 sudo[80837]: pam_unix(sudo:session): session closed for user root
Nov 24 09:27:34 compute-1 sudo[80862]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-84a084c3-61a7-5de7-8207-1f88efa59a64/etc/ceph/ceph.conf.new /etc/ceph/ceph.conf
Nov 24 09:27:34 compute-1 sudo[80862]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:27:34 compute-1 sudo[80862]: pam_unix(sudo:session): session closed for user root
Nov 24 09:27:34 compute-1 ceph-osd[77497]: log_channel(cluster) log [DBG] : 2.1d scrub starts
Nov 24 09:27:34 compute-1 ceph-osd[77497]: log_channel(cluster) log [DBG] : 2.1d scrub ok
Nov 24 09:27:34 compute-1 sudo[80887]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/config
Nov 24 09:27:34 compute-1 sudo[80887]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:27:34 compute-1 sudo[80887]: pam_unix(sudo:session): session closed for user root
Nov 24 09:27:34 compute-1 sudo[80912]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-84a084c3-61a7-5de7-8207-1f88efa59a64/var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/config
Nov 24 09:27:34 compute-1 sudo[80912]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:27:34 compute-1 sudo[80912]: pam_unix(sudo:session): session closed for user root
Nov 24 09:27:34 compute-1 sudo[80937]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-84a084c3-61a7-5de7-8207-1f88efa59a64/var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/config/ceph.conf.new
Nov 24 09:27:34 compute-1 sudo[80937]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:27:34 compute-1 sudo[80937]: pam_unix(sudo:session): session closed for user root
Nov 24 09:27:34 compute-1 ceph-mon[80009]: 2.8 scrub starts
Nov 24 09:27:34 compute-1 ceph-mon[80009]: 2.8 scrub ok
Nov 24 09:27:34 compute-1 ceph-mon[80009]: osdmap e33: 3 total, 2 up, 3 in
Nov 24 09:27:34 compute-1 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 24 09:27:34 compute-1 ceph-mon[80009]: 4.12 scrub starts
Nov 24 09:27:34 compute-1 ceph-mon[80009]: 4.12 scrub ok
Nov 24 09:27:34 compute-1 ceph-mon[80009]: pgmap v90: 193 pgs: 124 unknown, 69 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Nov 24 09:27:34 compute-1 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:27:34 compute-1 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:27:34 compute-1 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"}]: dispatch
Nov 24 09:27:34 compute-1 ceph-mon[80009]: Adjusting osd_memory_target on compute-2 to 127.9M
Nov 24 09:27:34 compute-1 ceph-mon[80009]: Unable to set osd_memory_target on compute-2 to 134211993: error parsing value: Value '134211993' is below minimum 939524096
Nov 24 09:27:34 compute-1 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 09:27:34 compute-1 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 09:27:34 compute-1 ceph-mon[80009]: Updating compute-0:/etc/ceph/ceph.conf
Nov 24 09:27:34 compute-1 ceph-mon[80009]: Updating compute-2:/etc/ceph/ceph.conf
Nov 24 09:27:34 compute-1 ceph-mon[80009]: Updating compute-1:/etc/ceph/ceph.conf
Nov 24 09:27:34 compute-1 ceph-mon[80009]: from='client.? 192.168.122.100:0/32884121' entity='client.admin' 
Nov 24 09:27:34 compute-1 ceph-mon[80009]: OSD bench result of 8908.221181 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.2. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Nov 24 09:27:34 compute-1 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 24 09:27:34 compute-1 sudo[80962]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-84a084c3-61a7-5de7-8207-1f88efa59a64
Nov 24 09:27:34 compute-1 sudo[80962]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:27:34 compute-1 sudo[80962]: pam_unix(sudo:session): session closed for user root
Nov 24 09:27:34 compute-1 sudo[80987]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-84a084c3-61a7-5de7-8207-1f88efa59a64/var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/config/ceph.conf.new
Nov 24 09:27:34 compute-1 sudo[80987]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:27:34 compute-1 sudo[80987]: pam_unix(sudo:session): session closed for user root
Nov 24 09:27:34 compute-1 sudo[81035]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-84a084c3-61a7-5de7-8207-1f88efa59a64/var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/config/ceph.conf.new
Nov 24 09:27:34 compute-1 sudo[81035]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:27:34 compute-1 sudo[81035]: pam_unix(sudo:session): session closed for user root
Nov 24 09:27:34 compute-1 sudo[81060]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-84a084c3-61a7-5de7-8207-1f88efa59a64/var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/config/ceph.conf.new
Nov 24 09:27:34 compute-1 sudo[81060]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:27:34 compute-1 sudo[81060]: pam_unix(sudo:session): session closed for user root
Nov 24 09:27:34 compute-1 sudo[81085]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-84a084c3-61a7-5de7-8207-1f88efa59a64/var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/config/ceph.conf.new /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/config/ceph.conf
Nov 24 09:27:35 compute-1 sudo[81085]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:27:35 compute-1 sudo[81085]: pam_unix(sudo:session): session closed for user root
Nov 24 09:27:35 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e34 e34: 3 total, 3 up, 3 in
Nov 24 09:27:35 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 34 pg[7.1f( empty local-lis/les=32/34 n=0 ec=32/19 lis/c=19/19 les/c/f=20/20/0 sis=32) [1] r=0 lpr=32 pi=[19,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:27:35 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 34 pg[7.1c( empty local-lis/les=32/34 n=0 ec=32/19 lis/c=19/19 les/c/f=20/20/0 sis=32) [1] r=0 lpr=32 pi=[19,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:27:35 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 34 pg[7.1d( empty local-lis/les=32/34 n=0 ec=32/19 lis/c=19/19 les/c/f=20/20/0 sis=32) [1] r=0 lpr=32 pi=[19,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:27:35 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 34 pg[7.11( empty local-lis/les=32/34 n=0 ec=32/19 lis/c=19/19 les/c/f=20/20/0 sis=32) [1] r=0 lpr=32 pi=[19,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:27:35 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 34 pg[7.13( empty local-lis/les=32/34 n=0 ec=32/19 lis/c=19/19 les/c/f=20/20/0 sis=32) [1] r=0 lpr=32 pi=[19,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:27:35 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 34 pg[7.16( empty local-lis/les=32/34 n=0 ec=32/19 lis/c=19/19 les/c/f=20/20/0 sis=32) [1] r=0 lpr=32 pi=[19,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:27:35 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 34 pg[7.17( empty local-lis/les=32/34 n=0 ec=32/19 lis/c=19/19 les/c/f=20/20/0 sis=32) [1] r=0 lpr=32 pi=[19,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:27:35 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 34 pg[7.14( empty local-lis/les=32/34 n=0 ec=32/19 lis/c=19/19 les/c/f=20/20/0 sis=32) [1] r=0 lpr=32 pi=[19,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:27:35 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 34 pg[7.a( empty local-lis/les=32/34 n=0 ec=32/19 lis/c=19/19 les/c/f=20/20/0 sis=32) [1] r=0 lpr=32 pi=[19,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:27:35 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 34 pg[7.8( empty local-lis/les=32/34 n=0 ec=32/19 lis/c=19/19 les/c/f=20/20/0 sis=32) [1] r=0 lpr=32 pi=[19,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:27:35 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 34 pg[7.12( empty local-lis/les=32/34 n=0 ec=32/19 lis/c=19/19 les/c/f=20/20/0 sis=32) [1] r=0 lpr=32 pi=[19,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:27:35 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 34 pg[7.b( empty local-lis/les=32/34 n=0 ec=32/19 lis/c=19/19 les/c/f=20/20/0 sis=32) [1] r=0 lpr=32 pi=[19,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:27:35 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 34 pg[7.9( empty local-lis/les=32/34 n=0 ec=32/19 lis/c=19/19 les/c/f=20/20/0 sis=32) [1] r=0 lpr=32 pi=[19,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:27:35 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 34 pg[7.e( empty local-lis/les=32/34 n=0 ec=32/19 lis/c=19/19 les/c/f=20/20/0 sis=32) [1] r=0 lpr=32 pi=[19,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:27:35 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 34 pg[7.15( empty local-lis/les=32/34 n=0 ec=32/19 lis/c=19/19 les/c/f=20/20/0 sis=32) [1] r=0 lpr=32 pi=[19,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:27:35 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 34 pg[7.6( empty local-lis/les=32/34 n=0 ec=32/19 lis/c=19/19 les/c/f=20/20/0 sis=32) [1] r=0 lpr=32 pi=[19,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:27:35 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 34 pg[7.5( empty local-lis/les=32/34 n=0 ec=32/19 lis/c=19/19 les/c/f=20/20/0 sis=32) [1] r=0 lpr=32 pi=[19,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:27:35 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 34 pg[7.7( empty local-lis/les=32/34 n=0 ec=32/19 lis/c=19/19 les/c/f=20/20/0 sis=32) [1] r=0 lpr=32 pi=[19,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:27:35 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 34 pg[7.0( empty local-lis/les=32/34 n=0 ec=19/19 lis/c=19/19 les/c/f=20/20/0 sis=32) [1] r=0 lpr=32 pi=[19,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:27:35 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 34 pg[7.4( empty local-lis/les=32/34 n=0 ec=32/19 lis/c=19/19 les/c/f=20/20/0 sis=32) [1] r=0 lpr=32 pi=[19,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:27:35 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 34 pg[7.1( empty local-lis/les=32/34 n=0 ec=32/19 lis/c=19/19 les/c/f=20/20/0 sis=32) [1] r=0 lpr=32 pi=[19,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:27:35 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 34 pg[7.3( empty local-lis/les=32/34 n=0 ec=32/19 lis/c=19/19 les/c/f=20/20/0 sis=32) [1] r=0 lpr=32 pi=[19,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:27:35 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 34 pg[7.2( empty local-lis/les=32/34 n=0 ec=32/19 lis/c=19/19 les/c/f=20/20/0 sis=32) [1] r=0 lpr=32 pi=[19,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:27:35 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 34 pg[7.f( empty local-lis/les=32/34 n=0 ec=32/19 lis/c=19/19 les/c/f=20/20/0 sis=32) [1] r=0 lpr=32 pi=[19,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:27:35 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 34 pg[7.1e( empty local-lis/les=32/34 n=0 ec=32/19 lis/c=19/19 les/c/f=20/20/0 sis=32) [1] r=0 lpr=32 pi=[19,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:27:35 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 34 pg[7.c( empty local-lis/les=32/34 n=0 ec=32/19 lis/c=19/19 les/c/f=20/20/0 sis=32) [1] r=0 lpr=32 pi=[19,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:27:35 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 34 pg[7.d( empty local-lis/les=32/34 n=0 ec=32/19 lis/c=19/19 les/c/f=20/20/0 sis=32) [1] r=0 lpr=32 pi=[19,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:27:35 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 34 pg[7.18( empty local-lis/les=32/34 n=0 ec=32/19 lis/c=19/19 les/c/f=20/20/0 sis=32) [1] r=0 lpr=32 pi=[19,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:27:35 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 34 pg[7.19( empty local-lis/les=32/34 n=0 ec=32/19 lis/c=19/19 les/c/f=20/20/0 sis=32) [1] r=0 lpr=32 pi=[19,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:27:35 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 34 pg[7.1a( empty local-lis/les=32/34 n=0 ec=32/19 lis/c=19/19 les/c/f=20/20/0 sis=32) [1] r=0 lpr=32 pi=[19,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:27:35 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 34 pg[7.1b( empty local-lis/les=32/34 n=0 ec=32/19 lis/c=19/19 les/c/f=20/20/0 sis=32) [1] r=0 lpr=32 pi=[19,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:27:35 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 34 pg[7.10( empty local-lis/les=32/34 n=0 ec=32/19 lis/c=19/19 les/c/f=20/20/0 sis=32) [1] r=0 lpr=32 pi=[19,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:27:35 compute-1 ceph-osd[77497]: log_channel(cluster) log [DBG] : 2.7 scrub starts
Nov 24 09:27:35 compute-1 ceph-osd[77497]: log_channel(cluster) log [DBG] : 2.7 scrub ok
Nov 24 09:27:36 compute-1 ceph-mon[80009]: Updating compute-1:/var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/config/ceph.conf
Nov 24 09:27:36 compute-1 ceph-mon[80009]: Updating compute-0:/var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/config/ceph.conf
Nov 24 09:27:36 compute-1 ceph-mon[80009]: 2.1d scrub starts
Nov 24 09:27:36 compute-1 ceph-mon[80009]: 2.1d scrub ok
Nov 24 09:27:36 compute-1 ceph-mon[80009]: 4.11 deep-scrub starts
Nov 24 09:27:36 compute-1 ceph-mon[80009]: 4.11 deep-scrub ok
Nov 24 09:27:36 compute-1 ceph-mon[80009]: Updating compute-2:/var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/config/ceph.conf
Nov 24 09:27:36 compute-1 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:27:36 compute-1 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:27:36 compute-1 ceph-mon[80009]: from='client.? 192.168.122.100:0/4251225502' entity='client.admin' 
Nov 24 09:27:36 compute-1 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:27:36 compute-1 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:27:36 compute-1 ceph-mon[80009]: osd.2 [v2:192.168.122.102:6800/4204763159,v1:192.168.122.102:6801/4204763159] boot
Nov 24 09:27:36 compute-1 ceph-mon[80009]: osdmap e34: 3 total, 3 up, 3 in
Nov 24 09:27:36 compute-1 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 24 09:27:36 compute-1 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:27:36 compute-1 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:27:36 compute-1 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:27:36 compute-1 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 09:27:36 compute-1 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 09:27:36 compute-1 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 09:27:36 compute-1 ceph-mon[80009]: from='client.? 192.168.122.100:0/390020590' entity='client.admin' 
Nov 24 09:27:36 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e35 e35: 3 total, 3 up, 3 in
Nov 24 09:27:36 compute-1 ceph-osd[77497]: log_channel(cluster) log [DBG] : 2.1e scrub starts
Nov 24 09:27:36 compute-1 ceph-osd[77497]: log_channel(cluster) log [DBG] : 2.1e scrub ok
Nov 24 09:27:37 compute-1 ceph-mon[80009]: 2.7 scrub starts
Nov 24 09:27:37 compute-1 ceph-mon[80009]: 2.7 scrub ok
Nov 24 09:27:37 compute-1 ceph-mon[80009]: 4.14 scrub starts
Nov 24 09:27:37 compute-1 ceph-mon[80009]: 4.14 scrub ok
Nov 24 09:27:37 compute-1 ceph-mon[80009]: pgmap v92: 193 pgs: 64 peering, 129 active+clean; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Nov 24 09:27:37 compute-1 ceph-mon[80009]: osdmap e35: 3 total, 3 up, 3 in
Nov 24 09:27:37 compute-1 ceph-osd[77497]: log_channel(cluster) log [DBG] : 2.9 deep-scrub starts
Nov 24 09:27:37 compute-1 ceph-osd[77497]: log_channel(cluster) log [DBG] : 2.9 deep-scrub ok
Nov 24 09:27:37 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e35 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 09:27:38 compute-1 ceph-mon[80009]: 3.19 deep-scrub starts
Nov 24 09:27:38 compute-1 ceph-mon[80009]: 3.19 deep-scrub ok
Nov 24 09:27:38 compute-1 ceph-mon[80009]: 2.1e scrub starts
Nov 24 09:27:38 compute-1 ceph-mon[80009]: 2.1e scrub ok
Nov 24 09:27:38 compute-1 ceph-mon[80009]: from='client.? 192.168.122.100:0/3786425625' entity='client.admin' 
Nov 24 09:27:38 compute-1 sudo[81133]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hrrrmdpnchzxvgodvzebeuurnytyrirq ; /usr/bin/python3'
Nov 24 09:27:38 compute-1 sudo[81133]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:27:38 compute-1 python3[81135]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a -f 'name=ceph-?(.*)-mgr.*' --format \{\{\.Command\}\} --no-trunc _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 09:27:38 compute-1 sudo[81133]: pam_unix(sudo:session): session closed for user root
Nov 24 09:27:38 compute-1 ceph-osd[77497]: log_channel(cluster) log [DBG] : 2.6 scrub starts
Nov 24 09:27:38 compute-1 ceph-osd[77497]: log_channel(cluster) log [DBG] : 2.6 scrub ok
Nov 24 09:27:39 compute-1 ceph-mon[80009]: 4.15 deep-scrub starts
Nov 24 09:27:39 compute-1 ceph-mon[80009]: 4.15 deep-scrub ok
Nov 24 09:27:39 compute-1 ceph-mon[80009]: 5.1e scrub starts
Nov 24 09:27:39 compute-1 ceph-mon[80009]: 5.1e scrub ok
Nov 24 09:27:39 compute-1 ceph-mon[80009]: 2.9 deep-scrub starts
Nov 24 09:27:39 compute-1 ceph-mon[80009]: 2.9 deep-scrub ok
Nov 24 09:27:39 compute-1 ceph-mon[80009]: 4.10 scrub starts
Nov 24 09:27:39 compute-1 ceph-mon[80009]: 4.10 scrub ok
Nov 24 09:27:39 compute-1 ceph-mon[80009]: pgmap v94: 193 pgs: 64 peering, 129 active+clean; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Nov 24 09:27:39 compute-1 ceph-osd[77497]: log_channel(cluster) log [DBG] : 2.4 deep-scrub starts
Nov 24 09:27:39 compute-1 ceph-osd[77497]: log_channel(cluster) log [DBG] : 2.4 deep-scrub ok
Nov 24 09:27:40 compute-1 ceph-mon[80009]: 3.18 scrub starts
Nov 24 09:27:40 compute-1 ceph-mon[80009]: 3.18 scrub ok
Nov 24 09:27:40 compute-1 ceph-mon[80009]: 2.6 scrub starts
Nov 24 09:27:40 compute-1 ceph-mon[80009]: 2.6 scrub ok
Nov 24 09:27:40 compute-1 ceph-mon[80009]: 4.16 deep-scrub starts
Nov 24 09:27:40 compute-1 ceph-mon[80009]: 4.16 deep-scrub ok
Nov 24 09:27:40 compute-1 ceph-mon[80009]: from='client.? 192.168.122.100:0/171270571' entity='client.admin' 
Nov 24 09:27:40 compute-1 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:27:40 compute-1 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:27:40 compute-1 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-2.qecnjt", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Nov 24 09:27:40 compute-1 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-2.qecnjt", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Nov 24 09:27:40 compute-1 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:27:40 compute-1 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 09:27:40 compute-1 ceph-osd[77497]: log_channel(cluster) log [DBG] : 2.5 scrub starts
Nov 24 09:27:40 compute-1 ceph-osd[77497]: log_channel(cluster) log [DBG] : 2.5 scrub ok
Nov 24 09:27:41 compute-1 ceph-mon[80009]: 3.7 scrub starts
Nov 24 09:27:41 compute-1 ceph-mon[80009]: 3.7 scrub ok
Nov 24 09:27:41 compute-1 ceph-mon[80009]: 2.4 deep-scrub starts
Nov 24 09:27:41 compute-1 ceph-mon[80009]: 2.4 deep-scrub ok
Nov 24 09:27:41 compute-1 ceph-mon[80009]: 4.17 scrub starts
Nov 24 09:27:41 compute-1 ceph-mon[80009]: 4.17 scrub ok
Nov 24 09:27:41 compute-1 ceph-mon[80009]: pgmap v95: 193 pgs: 64 peering, 129 active+clean; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Nov 24 09:27:41 compute-1 ceph-mon[80009]: Deploying daemon rgw.rgw.compute-2.qecnjt on compute-2
Nov 24 09:27:41 compute-1 ceph-mon[80009]: from='client.? 192.168.122.100:0/940045969' entity='client.admin' 
Nov 24 09:27:41 compute-1 ceph-osd[77497]: log_channel(cluster) log [DBG] : 2.1 scrub starts
Nov 24 09:27:41 compute-1 ceph-osd[77497]: log_channel(cluster) log [DBG] : 2.1 scrub ok
Nov 24 09:27:42 compute-1 ceph-mon[80009]: 2.5 scrub starts
Nov 24 09:27:42 compute-1 ceph-mon[80009]: 2.5 scrub ok
Nov 24 09:27:42 compute-1 ceph-mon[80009]: 5.1 scrub starts
Nov 24 09:27:42 compute-1 ceph-mon[80009]: 5.1 scrub ok
Nov 24 09:27:42 compute-1 ceph-mon[80009]: 4.9 scrub starts
Nov 24 09:27:42 compute-1 ceph-mon[80009]: 4.9 scrub ok
Nov 24 09:27:42 compute-1 ceph-mon[80009]: 2.1 scrub starts
Nov 24 09:27:42 compute-1 ceph-mon[80009]: 4.8 scrub starts
Nov 24 09:27:42 compute-1 ceph-mon[80009]: 4.8 scrub ok
Nov 24 09:27:42 compute-1 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 24 09:27:42 compute-1 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 24 09:27:42 compute-1 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 24 09:27:42 compute-1 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 24 09:27:42 compute-1 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 24 09:27:42 compute-1 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 24 09:27:42 compute-1 ceph-mon[80009]: from='client.? 192.168.122.100:0/3633642607' entity='client.admin' cmd=[{"prefix": "mgr module disable", "module": "dashboard"}]: dispatch
Nov 24 09:27:42 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e36 e36: 3 total, 3 up, 3 in
Nov 24 09:27:42 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 36 pg[7.1f( empty local-lis/les=32/34 n=0 ec=32/19 lis/c=32/32 les/c/f=34/34/0 sis=36 pruub=9.025905609s) [2] r=-1 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 active pruub 77.679153442s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:27:42 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 36 pg[7.1f( empty local-lis/les=32/34 n=0 ec=32/19 lis/c=32/32 les/c/f=34/34/0 sis=36 pruub=9.025840759s) [2] r=-1 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.679153442s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 09:27:42 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 36 pg[2.18( empty local-lis/les=28/29 n=0 ec=28/13 lis/c=28/28 les/c/f=29/29/0 sis=36 pruub=11.412052155s) [2] r=-1 lpr=36 pi=[28,36)/1 crt=0'0 mlcod 0'0 active pruub 80.065383911s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:27:42 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 36 pg[7.1d( empty local-lis/les=32/34 n=0 ec=32/19 lis/c=32/32 les/c/f=34/34/0 sis=36 pruub=9.028734207s) [2] r=-1 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 active pruub 77.682113647s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:27:42 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 36 pg[2.18( empty local-lis/les=28/29 n=0 ec=28/13 lis/c=28/28 les/c/f=29/29/0 sis=36 pruub=11.411995888s) [2] r=-1 lpr=36 pi=[28,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 80.065383911s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 09:27:42 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 36 pg[2.19( empty local-lis/les=28/29 n=0 ec=28/13 lis/c=28/28 les/c/f=29/29/0 sis=36 pruub=11.411870956s) [0] r=-1 lpr=36 pi=[28,36)/1 crt=0'0 mlcod 0'0 active pruub 80.065322876s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:27:42 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 36 pg[7.1d( empty local-lis/les=32/34 n=0 ec=32/19 lis/c=32/32 les/c/f=34/34/0 sis=36 pruub=9.028659821s) [2] r=-1 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.682113647s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 09:27:42 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 36 pg[7.13( empty local-lis/les=32/34 n=0 ec=32/19 lis/c=32/32 les/c/f=34/34/0 sis=36 pruub=9.028512001s) [0] r=-1 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 active pruub 77.682136536s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:27:42 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 36 pg[7.10( empty local-lis/les=32/34 n=0 ec=32/19 lis/c=32/32 les/c/f=34/34/0 sis=36 pruub=9.028491974s) [0] r=-1 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 active pruub 77.682136536s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:27:42 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 36 pg[2.19( empty local-lis/les=28/29 n=0 ec=28/13 lis/c=28/28 les/c/f=29/29/0 sis=36 pruub=11.411849976s) [0] r=-1 lpr=36 pi=[28,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 80.065322876s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 09:27:42 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 36 pg[7.10( empty local-lis/les=32/34 n=0 ec=32/19 lis/c=32/32 les/c/f=34/34/0 sis=36 pruub=9.028460503s) [0] r=-1 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.682136536s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 09:27:42 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 36 pg[2.15( empty local-lis/les=28/29 n=0 ec=28/13 lis/c=28/28 les/c/f=29/29/0 sis=36 pruub=11.411406517s) [2] r=-1 lpr=36 pi=[28,36)/1 crt=0'0 mlcod 0'0 active pruub 80.065132141s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:27:42 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 36 pg[2.15( empty local-lis/les=28/29 n=0 ec=28/13 lis/c=28/28 les/c/f=29/29/0 sis=36 pruub=11.411390305s) [2] r=-1 lpr=36 pi=[28,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 80.065132141s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 09:27:42 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 36 pg[7.11( empty local-lis/les=32/34 n=0 ec=32/19 lis/c=32/32 les/c/f=34/34/0 sis=36 pruub=9.028267860s) [2] r=-1 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 active pruub 77.682121277s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:27:42 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 36 pg[7.16( empty local-lis/les=32/34 n=0 ec=32/19 lis/c=32/32 les/c/f=34/34/0 sis=36 pruub=9.028317451s) [2] r=-1 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 active pruub 77.682189941s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:27:42 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 36 pg[7.11( empty local-lis/les=32/34 n=0 ec=32/19 lis/c=32/32 les/c/f=34/34/0 sis=36 pruub=9.028250694s) [2] r=-1 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.682121277s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 09:27:42 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 36 pg[7.16( empty local-lis/les=32/34 n=0 ec=32/19 lis/c=32/32 les/c/f=34/34/0 sis=36 pruub=9.028303146s) [2] r=-1 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.682189941s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 09:27:42 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 36 pg[2.13( empty local-lis/les=28/29 n=0 ec=28/13 lis/c=28/28 les/c/f=29/29/0 sis=36 pruub=11.411211967s) [2] r=-1 lpr=36 pi=[28,36)/1 crt=0'0 mlcod 0'0 active pruub 80.065055847s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:27:42 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 36 pg[7.13( empty local-lis/les=32/34 n=0 ec=32/19 lis/c=32/32 les/c/f=34/34/0 sis=36 pruub=9.028315544s) [0] r=-1 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.682136536s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 09:27:42 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 36 pg[2.12( empty local-lis/les=28/29 n=0 ec=28/13 lis/c=28/28 les/c/f=29/29/0 sis=36 pruub=11.411434174s) [2] r=-1 lpr=36 pi=[28,36)/1 crt=0'0 mlcod 0'0 active pruub 80.065414429s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:27:42 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 36 pg[2.12( empty local-lis/les=28/29 n=0 ec=28/13 lis/c=28/28 les/c/f=29/29/0 sis=36 pruub=11.411421776s) [2] r=-1 lpr=36 pi=[28,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 80.065414429s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 09:27:42 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 36 pg[2.13( empty local-lis/les=28/29 n=0 ec=28/13 lis/c=28/28 les/c/f=29/29/0 sis=36 pruub=11.411125183s) [2] r=-1 lpr=36 pi=[28,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 80.065055847s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 09:27:42 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 36 pg[7.14( empty local-lis/les=32/34 n=0 ec=32/19 lis/c=32/32 les/c/f=34/34/0 sis=36 pruub=9.028162956s) [2] r=-1 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 active pruub 77.682212830s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:27:42 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 36 pg[7.14( empty local-lis/les=32/34 n=0 ec=32/19 lis/c=32/32 les/c/f=34/34/0 sis=36 pruub=9.028146744s) [2] r=-1 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.682212830s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 09:27:42 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 36 pg[2.f( empty local-lis/les=28/29 n=0 ec=28/13 lis/c=28/28 les/c/f=29/29/0 sis=36 pruub=11.410729408s) [2] r=-1 lpr=36 pi=[28,36)/1 crt=0'0 mlcod 0'0 active pruub 80.064872742s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:27:42 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 36 pg[2.10( empty local-lis/les=28/29 n=0 ec=28/13 lis/c=28/28 les/c/f=29/29/0 sis=36 pruub=11.410682678s) [2] r=-1 lpr=36 pi=[28,36)/1 crt=0'0 mlcod 0'0 active pruub 80.064849854s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:27:42 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 36 pg[2.e( empty local-lis/les=28/29 n=0 ec=28/13 lis/c=28/28 les/c/f=29/29/0 sis=36 pruub=11.410472870s) [0] r=-1 lpr=36 pi=[28,36)/1 crt=0'0 mlcod 0'0 active pruub 80.064659119s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:27:42 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 36 pg[7.a( empty local-lis/les=32/34 n=0 ec=32/19 lis/c=32/32 les/c/f=34/34/0 sis=36 pruub=9.027964592s) [2] r=-1 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 active pruub 77.682228088s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:27:42 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 36 pg[2.10( empty local-lis/les=28/29 n=0 ec=28/13 lis/c=28/28 les/c/f=29/29/0 sis=36 pruub=11.410597801s) [2] r=-1 lpr=36 pi=[28,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 80.064849854s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 09:27:42 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 36 pg[2.f( empty local-lis/les=28/29 n=0 ec=28/13 lis/c=28/28 les/c/f=29/29/0 sis=36 pruub=11.410598755s) [2] r=-1 lpr=36 pi=[28,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 80.064872742s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 09:27:42 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 36 pg[7.a( empty local-lis/les=32/34 n=0 ec=32/19 lis/c=32/32 les/c/f=34/34/0 sis=36 pruub=9.027945518s) [2] r=-1 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.682228088s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 09:27:42 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 36 pg[2.e( empty local-lis/les=28/29 n=0 ec=28/13 lis/c=28/28 les/c/f=29/29/0 sis=36 pruub=11.410350800s) [0] r=-1 lpr=36 pi=[28,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 80.064659119s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 09:27:42 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 36 pg[7.b( empty local-lis/les=32/34 n=0 ec=32/19 lis/c=32/32 les/c/f=34/34/0 sis=36 pruub=9.027894020s) [0] r=-1 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 active pruub 77.682312012s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:27:42 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 36 pg[2.d( empty local-lis/les=28/29 n=0 ec=28/13 lis/c=28/28 les/c/f=29/29/0 sis=36 pruub=11.410316467s) [2] r=-1 lpr=36 pi=[28,36)/1 crt=0'0 mlcod 0'0 active pruub 80.064758301s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:27:42 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 36 pg[7.8( empty local-lis/les=32/34 n=0 ec=32/19 lis/c=32/32 les/c/f=34/34/0 sis=36 pruub=9.027781487s) [0] r=-1 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 active pruub 77.682235718s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:27:42 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 36 pg[2.d( empty local-lis/les=28/29 n=0 ec=28/13 lis/c=28/28 les/c/f=29/29/0 sis=36 pruub=11.410297394s) [2] r=-1 lpr=36 pi=[28,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 80.064758301s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 09:27:42 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 36 pg[7.b( empty local-lis/les=32/34 n=0 ec=32/19 lis/c=32/32 les/c/f=34/34/0 sis=36 pruub=9.027878761s) [0] r=-1 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.682312012s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 09:27:42 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 36 pg[7.8( empty local-lis/les=32/34 n=0 ec=32/19 lis/c=32/32 les/c/f=34/34/0 sis=36 pruub=9.027765274s) [0] r=-1 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.682235718s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 09:27:42 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 36 pg[2.c( empty local-lis/les=28/29 n=0 ec=28/13 lis/c=28/28 les/c/f=29/29/0 sis=36 pruub=11.409935951s) [2] r=-1 lpr=36 pi=[28,36)/1 crt=0'0 mlcod 0'0 active pruub 80.064651489s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:27:42 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 36 pg[2.c( empty local-lis/les=28/29 n=0 ec=28/13 lis/c=28/28 les/c/f=29/29/0 sis=36 pruub=11.409915924s) [2] r=-1 lpr=36 pi=[28,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 80.064651489s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 09:27:42 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 36 pg[2.b( empty local-lis/les=28/29 n=0 ec=28/13 lis/c=28/28 les/c/f=29/29/0 sis=36 pruub=11.409746170s) [2] r=-1 lpr=36 pi=[28,36)/1 crt=0'0 mlcod 0'0 active pruub 80.064506531s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:27:42 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 36 pg[7.9( empty local-lis/les=32/34 n=0 ec=32/19 lis/c=32/32 les/c/f=34/34/0 sis=36 pruub=9.027548790s) [0] r=-1 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 active pruub 77.682327271s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:27:42 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 36 pg[2.b( empty local-lis/les=28/29 n=0 ec=28/13 lis/c=28/28 les/c/f=29/29/0 sis=36 pruub=11.409718513s) [2] r=-1 lpr=36 pi=[28,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 80.064506531s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 09:27:42 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 36 pg[7.9( empty local-lis/les=32/34 n=0 ec=32/19 lis/c=32/32 les/c/f=34/34/0 sis=36 pruub=9.027529716s) [0] r=-1 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.682327271s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 09:27:42 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 36 pg[7.6( empty local-lis/les=32/34 n=0 ec=32/19 lis/c=32/32 les/c/f=34/34/0 sis=36 pruub=9.027469635s) [0] r=-1 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 active pruub 77.682388306s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:27:42 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 36 pg[7.6( empty local-lis/les=32/34 n=0 ec=32/19 lis/c=32/32 les/c/f=34/34/0 sis=36 pruub=9.027459145s) [0] r=-1 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.682388306s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 09:27:42 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 36 pg[7.e( empty local-lis/les=32/34 n=0 ec=32/19 lis/c=32/32 les/c/f=34/34/0 sis=36 pruub=9.027327538s) [0] r=-1 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 active pruub 77.682342529s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:27:42 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 36 pg[7.e( empty local-lis/les=32/34 n=0 ec=32/19 lis/c=32/32 les/c/f=34/34/0 sis=36 pruub=9.027307510s) [0] r=-1 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.682342529s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 09:27:42 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 36 pg[7.5( empty local-lis/les=32/34 n=0 ec=32/19 lis/c=32/32 les/c/f=34/34/0 sis=36 pruub=9.027279854s) [2] r=-1 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 active pruub 77.682395935s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:27:42 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 36 pg[7.5( empty local-lis/les=32/34 n=0 ec=32/19 lis/c=32/32 les/c/f=34/34/0 sis=36 pruub=9.027262688s) [2] r=-1 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.682395935s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 09:27:42 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 36 pg[2.5( empty local-lis/les=28/29 n=0 ec=28/13 lis/c=28/28 les/c/f=29/29/0 sis=36 pruub=11.409214020s) [2] r=-1 lpr=36 pi=[28,36)/1 crt=0'0 mlcod 0'0 active pruub 80.064353943s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:27:42 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 36 pg[2.5( empty local-lis/les=28/29 n=0 ec=28/13 lis/c=28/28 les/c/f=29/29/0 sis=36 pruub=11.409195900s) [2] r=-1 lpr=36 pi=[28,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 80.064353943s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 09:27:42 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 36 pg[2.1( empty local-lis/les=28/29 n=0 ec=28/13 lis/c=28/28 les/c/f=29/29/0 sis=36 pruub=11.409289360s) [0] r=-1 lpr=36 pi=[28,36)/1 crt=0'0 mlcod 0'0 active pruub 80.064422607s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:27:42 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 36 pg[7.4( empty local-lis/les=32/34 n=0 ec=32/19 lis/c=32/32 les/c/f=34/34/0 sis=36 pruub=9.027223587s) [0] r=-1 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 active pruub 77.682434082s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:27:42 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 36 pg[2.1( empty local-lis/les=28/29 n=0 ec=28/13 lis/c=28/28 les/c/f=29/29/0 sis=36 pruub=11.409217834s) [0] r=-1 lpr=36 pi=[28,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 80.064422607s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 09:27:42 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 36 pg[7.4( empty local-lis/les=32/34 n=0 ec=32/19 lis/c=32/32 les/c/f=34/34/0 sis=36 pruub=9.027206421s) [0] r=-1 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.682434082s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 09:27:42 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 36 pg[2.4( empty local-lis/les=28/29 n=0 ec=28/13 lis/c=28/28 les/c/f=29/29/0 sis=36 pruub=11.409045219s) [0] r=-1 lpr=36 pi=[28,36)/1 crt=0'0 mlcod 0'0 active pruub 80.064323425s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:27:42 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 36 pg[2.4( empty local-lis/les=28/29 n=0 ec=28/13 lis/c=28/28 les/c/f=29/29/0 sis=36 pruub=11.409029961s) [0] r=-1 lpr=36 pi=[28,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 80.064323425s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 09:27:42 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 36 pg[2.6( empty local-lis/les=28/29 n=0 ec=28/13 lis/c=28/28 les/c/f=29/29/0 sis=36 pruub=11.409012794s) [0] r=-1 lpr=36 pi=[28,36)/1 crt=0'0 mlcod 0'0 active pruub 80.064338684s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:27:42 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 36 pg[2.6( empty local-lis/les=28/29 n=0 ec=28/13 lis/c=28/28 les/c/f=29/29/0 sis=36 pruub=11.408977509s) [0] r=-1 lpr=36 pi=[28,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 80.064338684s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 09:27:42 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 36 pg[7.2( empty local-lis/les=32/34 n=0 ec=32/19 lis/c=32/32 les/c/f=34/34/0 sis=36 pruub=9.027039528s) [0] r=-1 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 active pruub 77.682449341s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:27:42 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 36 pg[7.3( empty local-lis/les=32/34 n=0 ec=32/19 lis/c=32/32 les/c/f=34/34/0 sis=36 pruub=9.027024269s) [0] r=-1 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 active pruub 77.682441711s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:27:42 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 36 pg[7.2( empty local-lis/les=32/34 n=0 ec=32/19 lis/c=32/32 les/c/f=34/34/0 sis=36 pruub=9.027027130s) [0] r=-1 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.682449341s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 09:27:42 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 36 pg[7.3( empty local-lis/les=32/34 n=0 ec=32/19 lis/c=32/32 les/c/f=34/34/0 sis=36 pruub=9.027005196s) [0] r=-1 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.682441711s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 09:27:42 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 36 pg[2.9( empty local-lis/les=28/29 n=0 ec=28/13 lis/c=28/28 les/c/f=29/29/0 sis=36 pruub=11.408799171s) [0] r=-1 lpr=36 pi=[28,36)/1 crt=0'0 mlcod 0'0 active pruub 80.064323425s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:27:42 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 36 pg[2.9( empty local-lis/les=28/29 n=0 ec=28/13 lis/c=28/28 les/c/f=29/29/0 sis=36 pruub=11.408782005s) [0] r=-1 lpr=36 pi=[28,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 80.064323425s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 09:27:42 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 36 pg[7.f( empty local-lis/les=32/34 n=0 ec=32/19 lis/c=32/32 les/c/f=34/34/0 sis=36 pruub=9.026929855s) [0] r=-1 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 active pruub 77.682502747s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:27:42 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 36 pg[7.f( empty local-lis/les=32/34 n=0 ec=32/19 lis/c=32/32 les/c/f=34/34/0 sis=36 pruub=9.026864052s) [0] r=-1 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.682502747s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 09:27:42 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 36 pg[2.1c( empty local-lis/les=28/29 n=0 ec=28/13 lis/c=28/28 les/c/f=29/29/0 sis=36 pruub=11.406463623s) [2] r=-1 lpr=36 pi=[28,36)/1 crt=0'0 mlcod 0'0 active pruub 80.062194824s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:27:42 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 36 pg[2.1c( empty local-lis/les=28/29 n=0 ec=28/13 lis/c=28/28 les/c/f=29/29/0 sis=36 pruub=11.406445503s) [2] r=-1 lpr=36 pi=[28,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 80.062194824s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 09:27:42 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 36 pg[7.1e( empty local-lis/les=32/34 n=0 ec=32/19 lis/c=32/32 les/c/f=34/34/0 sis=36 pruub=9.026896477s) [0] r=-1 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 active pruub 77.682472229s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:27:42 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 36 pg[2.1d( empty local-lis/les=28/29 n=0 ec=28/13 lis/c=28/28 les/c/f=29/29/0 sis=36 pruub=11.408471107s) [2] r=-1 lpr=36 pi=[28,36)/1 crt=0'0 mlcod 0'0 active pruub 80.064254761s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:27:42 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 36 pg[2.1b( empty local-lis/les=28/29 n=0 ec=28/13 lis/c=28/28 les/c/f=29/29/0 sis=36 pruub=11.409080505s) [2] r=-1 lpr=36 pi=[28,36)/1 crt=0'0 mlcod 0'0 active pruub 80.064880371s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:27:42 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 36 pg[7.1e( empty local-lis/les=32/34 n=0 ec=32/19 lis/c=32/32 les/c/f=34/34/0 sis=36 pruub=9.026688576s) [0] r=-1 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.682472229s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 09:27:42 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 36 pg[2.1d( empty local-lis/les=28/29 n=0 ec=28/13 lis/c=28/28 les/c/f=29/29/0 sis=36 pruub=11.408452988s) [2] r=-1 lpr=36 pi=[28,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 80.064254761s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 09:27:42 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 36 pg[7.18( empty local-lis/les=32/34 n=0 ec=32/19 lis/c=32/32 les/c/f=34/34/0 sis=36 pruub=9.026684761s) [0] r=-1 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 active pruub 77.682548523s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:27:42 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 36 pg[2.1b( empty local-lis/les=28/29 n=0 ec=28/13 lis/c=28/28 les/c/f=29/29/0 sis=36 pruub=11.409064293s) [2] r=-1 lpr=36 pi=[28,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 80.064880371s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 09:27:42 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 36 pg[7.18( empty local-lis/les=32/34 n=0 ec=32/19 lis/c=32/32 les/c/f=34/34/0 sis=36 pruub=9.026658058s) [0] r=-1 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.682548523s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 09:27:42 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 36 pg[7.1b( empty local-lis/les=32/34 n=0 ec=32/19 lis/c=32/32 les/c/f=34/34/0 sis=36 pruub=9.026719093s) [0] r=-1 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 active pruub 77.682632446s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:27:42 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 36 pg[7.1b( empty local-lis/les=32/34 n=0 ec=32/19 lis/c=32/32 les/c/f=34/34/0 sis=36 pruub=9.026703835s) [0] r=-1 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.682632446s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 09:27:42 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 36 pg[2.1f( empty local-lis/les=28/29 n=0 ec=28/13 lis/c=28/28 les/c/f=29/29/0 sis=36 pruub=11.406167030s) [0] r=-1 lpr=36 pi=[28,36)/1 crt=0'0 mlcod 0'0 active pruub 80.062194824s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:27:42 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 36 pg[2.a( empty local-lis/les=28/29 n=0 ec=28/13 lis/c=28/28 les/c/f=29/29/0 sis=36 pruub=11.408223152s) [2] r=-1 lpr=36 pi=[28,36)/1 crt=0'0 mlcod 0'0 active pruub 80.064270020s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:27:42 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 36 pg[2.1f( empty local-lis/les=28/29 n=0 ec=28/13 lis/c=28/28 les/c/f=29/29/0 sis=36 pruub=11.406149864s) [0] r=-1 lpr=36 pi=[28,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 80.062194824s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 09:27:42 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 36 pg[2.1e( empty local-lis/les=28/29 n=0 ec=28/13 lis/c=28/28 les/c/f=29/29/0 sis=36 pruub=11.408313751s) [0] r=-1 lpr=36 pi=[28,36)/1 crt=0'0 mlcod 0'0 active pruub 80.064369202s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:27:42 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 36 pg[2.a( empty local-lis/les=28/29 n=0 ec=28/13 lis/c=28/28 les/c/f=29/29/0 sis=36 pruub=11.408190727s) [2] r=-1 lpr=36 pi=[28,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 80.064270020s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 09:27:42 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 36 pg[2.1e( empty local-lis/les=28/29 n=0 ec=28/13 lis/c=28/28 les/c/f=29/29/0 sis=36 pruub=11.408292770s) [0] r=-1 lpr=36 pi=[28,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 80.064369202s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 09:27:42 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 36 pg[5.1f( empty local-lis/les=0/0 n=0 ec=30/16 lis/c=34/34 les/c/f=35/35/0 sis=36) [1] r=0 lpr=36 pi=[34,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:27:42 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 36 pg[4.18( empty local-lis/les=0/0 n=0 ec=30/15 lis/c=30/30 les/c/f=31/31/0 sis=36) [1] r=0 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:27:42 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 36 pg[6.1a( empty local-lis/les=0/0 n=0 ec=32/17 lis/c=32/32 les/c/f=33/33/0 sis=36) [1] r=0 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:27:42 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 36 pg[4.1b( empty local-lis/les=0/0 n=0 ec=30/15 lis/c=30/30 les/c/f=31/31/0 sis=36) [1] r=0 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:27:42 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 36 pg[5.1( empty local-lis/les=0/0 n=0 ec=30/16 lis/c=34/34 les/c/f=35/35/0 sis=36) [1] r=0 lpr=36 pi=[34,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:27:42 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 36 pg[6.19( empty local-lis/les=0/0 n=0 ec=32/17 lis/c=32/32 les/c/f=33/33/0 sis=36) [1] r=0 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:27:42 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 36 pg[4.e( empty local-lis/les=0/0 n=0 ec=30/15 lis/c=30/30 les/c/f=31/31/0 sis=36) [1] r=0 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:27:42 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 36 pg[4.1a( empty local-lis/les=0/0 n=0 ec=30/15 lis/c=30/30 les/c/f=31/31/0 sis=36) [1] r=0 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:27:42 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 36 pg[6.d( empty local-lis/les=0/0 n=0 ec=32/17 lis/c=32/32 les/c/f=33/33/0 sis=36) [1] r=0 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:27:42 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 36 pg[4.5( empty local-lis/les=0/0 n=0 ec=30/15 lis/c=30/30 les/c/f=31/31/0 sis=36) [1] r=0 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:27:42 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 36 pg[6.7( empty local-lis/les=0/0 n=0 ec=32/17 lis/c=32/32 les/c/f=33/33/0 sis=36) [1] r=0 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:27:42 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 36 pg[6.3( empty local-lis/les=0/0 n=0 ec=32/17 lis/c=32/32 les/c/f=33/33/0 sis=36) [1] r=0 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:27:42 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 36 pg[3.5( empty local-lis/les=0/0 n=0 ec=28/14 lis/c=34/34 les/c/f=35/35/0 sis=36) [1] r=0 lpr=36 pi=[34,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:27:42 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 36 pg[6.2( empty local-lis/les=0/0 n=0 ec=32/17 lis/c=32/32 les/c/f=33/33/0 sis=36) [1] r=0 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:27:42 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 36 pg[6.5( empty local-lis/les=0/0 n=0 ec=32/17 lis/c=32/32 les/c/f=33/33/0 sis=36) [1] r=0 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:27:42 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 36 pg[5.2( empty local-lis/les=0/0 n=0 ec=30/16 lis/c=34/34 les/c/f=35/35/0 sis=36) [1] r=0 lpr=36 pi=[34,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:27:42 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 36 pg[4.d( empty local-lis/les=0/0 n=0 ec=30/15 lis/c=30/30 les/c/f=31/31/0 sis=36) [1] r=0 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:27:42 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 36 pg[3.3( empty local-lis/les=0/0 n=0 ec=28/14 lis/c=34/34 les/c/f=35/35/0 sis=36) [1] r=0 lpr=36 pi=[34,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:27:42 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 36 pg[5.18( empty local-lis/les=0/0 n=0 ec=30/16 lis/c=34/34 les/c/f=35/35/0 sis=36) [1] r=0 lpr=36 pi=[34,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:27:42 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 36 pg[6.e( empty local-lis/les=0/0 n=0 ec=32/17 lis/c=32/32 les/c/f=33/33/0 sis=36) [1] r=0 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:27:42 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 36 pg[5.1b( empty local-lis/les=0/0 n=0 ec=30/16 lis/c=34/34 les/c/f=35/35/0 sis=36) [1] r=0 lpr=36 pi=[34,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:27:42 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 36 pg[4.a( empty local-lis/les=0/0 n=0 ec=30/15 lis/c=30/30 les/c/f=31/31/0 sis=36) [1] r=0 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:27:42 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 36 pg[5.7( empty local-lis/les=0/0 n=0 ec=30/16 lis/c=34/34 les/c/f=35/35/0 sis=36) [1] r=0 lpr=36 pi=[34,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:27:42 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 36 pg[6.8( empty local-lis/les=0/0 n=0 ec=32/17 lis/c=32/32 les/c/f=33/33/0 sis=36) [1] r=0 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:27:42 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 36 pg[3.1c( empty local-lis/les=0/0 n=0 ec=28/14 lis/c=34/34 les/c/f=35/35/0 sis=36) [1] r=0 lpr=36 pi=[34,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:27:42 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 36 pg[4.c( empty local-lis/les=0/0 n=0 ec=30/15 lis/c=30/30 les/c/f=31/31/0 sis=36) [1] r=0 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:27:42 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 36 pg[5.1c( empty local-lis/les=0/0 n=0 ec=30/16 lis/c=34/34 les/c/f=35/35/0 sis=36) [1] r=0 lpr=36 pi=[34,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:27:42 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 36 pg[6.15( empty local-lis/les=0/0 n=0 ec=32/17 lis/c=32/32 les/c/f=33/33/0 sis=36) [1] r=0 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:27:42 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 36 pg[5.f( empty local-lis/les=0/0 n=0 ec=30/16 lis/c=34/34 les/c/f=35/35/0 sis=36) [1] r=0 lpr=36 pi=[34,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:27:42 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 36 pg[6.a( empty local-lis/les=0/0 n=0 ec=32/17 lis/c=32/32 les/c/f=33/33/0 sis=36) [1] r=0 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:27:42 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 36 pg[3.a( empty local-lis/les=0/0 n=0 ec=28/14 lis/c=34/34 les/c/f=35/35/0 sis=36) [1] r=0 lpr=36 pi=[34,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:27:42 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 36 pg[4.13( empty local-lis/les=0/0 n=0 ec=30/15 lis/c=30/30 les/c/f=31/31/0 sis=36) [1] r=0 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:27:42 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 36 pg[3.d( empty local-lis/les=0/0 n=0 ec=28/14 lis/c=34/34 les/c/f=35/35/0 sis=36) [1] r=0 lpr=36 pi=[34,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:27:42 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 36 pg[3.f( empty local-lis/les=0/0 n=0 ec=28/14 lis/c=34/34 les/c/f=35/35/0 sis=36) [1] r=0 lpr=36 pi=[34,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:27:42 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 36 pg[5.9( empty local-lis/les=0/0 n=0 ec=30/16 lis/c=34/34 les/c/f=35/35/0 sis=36) [1] r=0 lpr=36 pi=[34,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:27:42 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 36 pg[3.10( empty local-lis/les=0/0 n=0 ec=28/14 lis/c=34/34 les/c/f=35/35/0 sis=36) [1] r=0 lpr=36 pi=[34,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:27:42 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 36 pg[5.16( empty local-lis/les=0/0 n=0 ec=30/16 lis/c=34/34 les/c/f=35/35/0 sis=36) [1] r=0 lpr=36 pi=[34,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:27:42 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 36 pg[3.13( empty local-lis/les=0/0 n=0 ec=28/14 lis/c=34/34 les/c/f=35/35/0 sis=36) [1] r=0 lpr=36 pi=[34,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:27:42 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 36 pg[5.15( empty local-lis/les=0/0 n=0 ec=30/16 lis/c=34/34 les/c/f=35/35/0 sis=36) [1] r=0 lpr=36 pi=[34,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:27:42 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 36 pg[3.14( empty local-lis/les=0/0 n=0 ec=28/14 lis/c=34/34 les/c/f=35/35/0 sis=36) [1] r=0 lpr=36 pi=[34,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:27:42 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 36 pg[5.11( empty local-lis/les=0/0 n=0 ec=30/16 lis/c=34/34 les/c/f=35/35/0 sis=36) [1] r=0 lpr=36 pi=[34,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:27:42 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 36 pg[5.10( empty local-lis/les=0/0 n=0 ec=30/16 lis/c=34/34 les/c/f=35/35/0 sis=36) [1] r=0 lpr=36 pi=[34,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:27:42 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 36 pg[3.16( empty local-lis/les=0/0 n=0 ec=28/14 lis/c=34/34 les/c/f=35/35/0 sis=36) [1] r=0 lpr=36 pi=[34,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:27:42 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 36 pg[3.c( empty local-lis/les=0/0 n=0 ec=28/14 lis/c=34/34 les/c/f=35/35/0 sis=36) [1] r=0 lpr=36 pi=[34,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:27:42 compute-1 sudo[81149]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 09:27:42 compute-1 sudo[81149]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:27:42 compute-1 sudo[81149]: pam_unix(sudo:session): session closed for user root
Nov 24 09:27:42 compute-1 sudo[81174]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 _orch deploy --fsid 84a084c3-61a7-5de7-8207-1f88efa59a64
Nov 24 09:27:42 compute-1 sudo[81174]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:27:42 compute-1 ceph-osd[77497]: log_channel(cluster) log [DBG] : 2.1a scrub starts
Nov 24 09:27:42 compute-1 ceph-osd[77497]: log_channel(cluster) log [DBG] : 2.1a scrub ok
Nov 24 09:27:42 compute-1 podman[81240]: 2025-11-24 09:27:42.749632812 +0000 UTC m=+0.032456876 container create ceecfc4dc914bea9dfd03020a6c8314655013b2a2829e07386b0b9759669f7a1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_wozniak, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.build-date=20250325)
Nov 24 09:27:42 compute-1 systemd[1]: Started libpod-conmon-ceecfc4dc914bea9dfd03020a6c8314655013b2a2829e07386b0b9759669f7a1.scope.
Nov 24 09:27:42 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e36 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 09:27:42 compute-1 systemd[1]: Started libcrun container.
Nov 24 09:27:42 compute-1 podman[81240]: 2025-11-24 09:27:42.806779927 +0000 UTC m=+0.089603991 container init ceecfc4dc914bea9dfd03020a6c8314655013b2a2829e07386b0b9759669f7a1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_wozniak, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 09:27:42 compute-1 podman[81240]: 2025-11-24 09:27:42.812074969 +0000 UTC m=+0.094899033 container start ceecfc4dc914bea9dfd03020a6c8314655013b2a2829e07386b0b9759669f7a1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_wozniak, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 09:27:42 compute-1 gracious_wozniak[81257]: 167 167
Nov 24 09:27:42 compute-1 systemd[1]: libpod-ceecfc4dc914bea9dfd03020a6c8314655013b2a2829e07386b0b9759669f7a1.scope: Deactivated successfully.
Nov 24 09:27:42 compute-1 podman[81240]: 2025-11-24 09:27:42.817161737 +0000 UTC m=+0.099985831 container attach ceecfc4dc914bea9dfd03020a6c8314655013b2a2829e07386b0b9759669f7a1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_wozniak, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 24 09:27:42 compute-1 podman[81240]: 2025-11-24 09:27:42.817605348 +0000 UTC m=+0.100429412 container died ceecfc4dc914bea9dfd03020a6c8314655013b2a2829e07386b0b9759669f7a1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_wozniak, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=squid)
Nov 24 09:27:42 compute-1 podman[81240]: 2025-11-24 09:27:42.736362938 +0000 UTC m=+0.019187032 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 09:27:42 compute-1 systemd[1]: var-lib-containers-storage-overlay-696053096f122822b1c36abec3ce2ecae1b3b12abb9ec2b0d95fda9587d992bb-merged.mount: Deactivated successfully.
Nov 24 09:27:42 compute-1 podman[81240]: 2025-11-24 09:27:42.864191829 +0000 UTC m=+0.147015893 container remove ceecfc4dc914bea9dfd03020a6c8314655013b2a2829e07386b0b9759669f7a1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_wozniak, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 09:27:42 compute-1 systemd[1]: libpod-conmon-ceecfc4dc914bea9dfd03020a6c8314655013b2a2829e07386b0b9759669f7a1.scope: Deactivated successfully.
Nov 24 09:27:42 compute-1 systemd[1]: Reloading.
Nov 24 09:27:42 compute-1 systemd-rc-local-generator[81301]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 09:27:42 compute-1 systemd-sysv-generator[81304]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 09:27:43 compute-1 ceph-mon[80009]: 2.1 scrub ok
Nov 24 09:27:43 compute-1 ceph-mon[80009]: 5.3 scrub starts
Nov 24 09:27:43 compute-1 ceph-mon[80009]: 5.3 scrub ok
Nov 24 09:27:43 compute-1 ceph-mon[80009]: pgmap v96: 193 pgs: 1 active+clean+scrubbing, 192 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 24 09:27:43 compute-1 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 24 09:27:43 compute-1 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 24 09:27:43 compute-1 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 24 09:27:43 compute-1 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 24 09:27:43 compute-1 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 24 09:27:43 compute-1 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 24 09:27:43 compute-1 ceph-mon[80009]: from='client.? 192.168.122.100:0/3633642607' entity='client.admin' cmd='[{"prefix": "mgr module disable", "module": "dashboard"}]': finished
Nov 24 09:27:43 compute-1 ceph-mon[80009]: osdmap e36: 3 total, 3 up, 3 in
Nov 24 09:27:43 compute-1 ceph-mon[80009]: mgrmap e12: compute-0.mauvni(active, since 2m), standbys: compute-2.rzcnzg, compute-1.qelqsg
Nov 24 09:27:43 compute-1 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:27:43 compute-1 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:27:43 compute-1 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:27:43 compute-1 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-1.vproll", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Nov 24 09:27:43 compute-1 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-1.vproll", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Nov 24 09:27:43 compute-1 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:27:43 compute-1 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 09:27:43 compute-1 ceph-mon[80009]: 6.18 scrub starts
Nov 24 09:27:43 compute-1 ceph-mon[80009]: 6.18 scrub ok
Nov 24 09:27:43 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e37 e37: 3 total, 3 up, 3 in
Nov 24 09:27:43 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 37 pg[5.1f( empty local-lis/les=36/37 n=0 ec=30/16 lis/c=34/34 les/c/f=35/35/0 sis=36) [1] r=0 lpr=36 pi=[34,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:27:43 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 37 pg[3.14( empty local-lis/les=36/37 n=0 ec=28/14 lis/c=34/34 les/c/f=35/35/0 sis=36) [1] r=0 lpr=36 pi=[34,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:27:43 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 37 pg[5.10( empty local-lis/les=36/37 n=0 ec=30/16 lis/c=34/34 les/c/f=35/35/0 sis=36) [1] r=0 lpr=36 pi=[34,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:27:43 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 37 pg[5.11( empty local-lis/les=36/37 n=0 ec=30/16 lis/c=34/34 les/c/f=35/35/0 sis=36) [1] r=0 lpr=36 pi=[34,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:27:43 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 37 pg[4.13( empty local-lis/les=36/37 n=0 ec=30/15 lis/c=30/30 les/c/f=31/31/0 sis=36) [1] r=0 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:27:43 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 37 pg[5.15( empty local-lis/les=36/37 n=0 ec=30/16 lis/c=34/34 les/c/f=35/35/0 sis=36) [1] r=0 lpr=36 pi=[34,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:27:43 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"} v 0)
Nov 24 09:27:43 compute-1 ceph-mon[80009]: log_channel(audit) log [INF] : from='client.? 192.168.122.102:0/3262419163' entity='client.rgw.rgw.compute-2.qecnjt' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch
Nov 24 09:27:43 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 37 pg[3.10( empty local-lis/les=36/37 n=0 ec=28/14 lis/c=34/34 les/c/f=35/35/0 sis=36) [1] r=0 lpr=36 pi=[34,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:27:43 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 37 pg[3.16( empty local-lis/les=36/37 n=0 ec=28/14 lis/c=34/34 les/c/f=35/35/0 sis=36) [1] r=0 lpr=36 pi=[34,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:27:43 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 37 pg[6.15( empty local-lis/les=36/37 n=0 ec=32/17 lis/c=32/32 les/c/f=33/33/0 sis=36) [1] r=0 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:27:43 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 37 pg[5.16( empty local-lis/les=36/37 n=0 ec=30/16 lis/c=34/34 les/c/f=35/35/0 sis=36) [1] r=0 lpr=36 pi=[34,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:27:43 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 37 pg[5.9( empty local-lis/les=36/37 n=0 ec=30/16 lis/c=34/34 les/c/f=35/35/0 sis=36) [1] r=0 lpr=36 pi=[34,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:27:43 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 37 pg[3.f( empty local-lis/les=36/37 n=0 ec=28/14 lis/c=34/34 les/c/f=35/35/0 sis=36) [1] r=0 lpr=36 pi=[34,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:27:43 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 37 pg[6.a( empty local-lis/les=36/37 n=0 ec=32/17 lis/c=32/32 les/c/f=33/33/0 sis=36) [1] r=0 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:27:43 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 37 pg[3.c( empty local-lis/les=36/37 n=0 ec=28/14 lis/c=34/34 les/c/f=35/35/0 sis=36) [1] r=0 lpr=36 pi=[34,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:27:43 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 37 pg[3.a( empty local-lis/les=36/37 n=0 ec=28/14 lis/c=34/34 les/c/f=35/35/0 sis=36) [1] r=0 lpr=36 pi=[34,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:27:43 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 37 pg[4.d( empty local-lis/les=36/37 n=0 ec=30/15 lis/c=30/30 les/c/f=31/31/0 sis=36) [1] r=0 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:27:43 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 37 pg[4.5( empty local-lis/les=36/37 n=0 ec=30/15 lis/c=30/30 les/c/f=31/31/0 sis=36) [1] r=0 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:27:43 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 37 pg[3.13( empty local-lis/les=36/37 n=0 ec=28/14 lis/c=34/34 les/c/f=35/35/0 sis=36) [1] r=0 lpr=36 pi=[34,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:27:43 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 37 pg[3.d( empty local-lis/les=36/37 n=0 ec=28/14 lis/c=34/34 les/c/f=35/35/0 sis=36) [1] r=0 lpr=36 pi=[34,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:27:43 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 37 pg[6.8( empty local-lis/les=36/37 n=0 ec=32/17 lis/c=32/32 les/c/f=33/33/0 sis=36) [1] r=0 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:27:43 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 37 pg[4.a( empty local-lis/les=36/37 n=0 ec=30/15 lis/c=30/30 les/c/f=31/31/0 sis=36) [1] r=0 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:27:43 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 37 pg[5.7( empty local-lis/les=36/37 n=0 ec=30/16 lis/c=34/34 les/c/f=35/35/0 sis=36) [1] r=0 lpr=36 pi=[34,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:27:43 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 37 pg[5.2( empty local-lis/les=36/37 n=0 ec=30/16 lis/c=34/34 les/c/f=35/35/0 sis=36) [1] r=0 lpr=36 pi=[34,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:27:43 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 37 pg[6.5( empty local-lis/les=36/37 n=0 ec=32/17 lis/c=32/32 les/c/f=33/33/0 sis=36) [1] r=0 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:27:43 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 37 pg[3.5( empty local-lis/les=36/37 n=0 ec=28/14 lis/c=34/34 les/c/f=35/35/0 sis=36) [1] r=0 lpr=36 pi=[34,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:27:43 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 37 pg[5.1( empty local-lis/les=36/37 n=0 ec=30/16 lis/c=34/34 les/c/f=35/35/0 sis=36) [1] r=0 lpr=36 pi=[34,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:27:43 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 37 pg[6.3( empty local-lis/les=36/37 n=0 ec=32/17 lis/c=32/32 les/c/f=33/33/0 sis=36) [1] r=0 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:27:43 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 37 pg[5.f( empty local-lis/les=36/37 n=0 ec=30/16 lis/c=34/34 les/c/f=35/35/0 sis=36) [1] r=0 lpr=36 pi=[34,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:27:43 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 37 pg[6.2( empty local-lis/les=36/37 n=0 ec=32/17 lis/c=32/32 les/c/f=33/33/0 sis=36) [1] r=0 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:27:43 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 37 pg[4.e( empty local-lis/les=36/37 n=0 ec=30/15 lis/c=30/30 les/c/f=31/31/0 sis=36) [1] r=0 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:27:43 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 37 pg[3.3( empty local-lis/les=36/37 n=0 ec=28/14 lis/c=34/34 les/c/f=35/35/0 sis=36) [1] r=0 lpr=36 pi=[34,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:27:43 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 37 pg[6.7( empty local-lis/les=36/37 n=0 ec=32/17 lis/c=32/32 les/c/f=33/33/0 sis=36) [1] r=0 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:27:43 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 37 pg[6.d( empty local-lis/les=36/37 n=0 ec=32/17 lis/c=32/32 les/c/f=33/33/0 sis=36) [1] r=0 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:27:43 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 37 pg[5.1c( empty local-lis/les=36/37 n=0 ec=30/16 lis/c=34/34 les/c/f=35/35/0 sis=36) [1] r=0 lpr=36 pi=[34,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:27:43 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 37 pg[5.1b( empty local-lis/les=36/37 n=0 ec=30/16 lis/c=34/34 les/c/f=35/35/0 sis=36) [1] r=0 lpr=36 pi=[34,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:27:43 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 37 pg[3.1c( empty local-lis/les=36/37 n=0 ec=28/14 lis/c=34/34 les/c/f=35/35/0 sis=36) [1] r=0 lpr=36 pi=[34,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:27:43 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 37 pg[6.e( empty local-lis/les=36/37 n=0 ec=32/17 lis/c=32/32 les/c/f=33/33/0 sis=36) [1] r=0 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:27:43 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 37 pg[6.19( empty local-lis/les=36/37 n=0 ec=32/17 lis/c=32/32 les/c/f=33/33/0 sis=36) [1] r=0 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:27:43 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 37 pg[4.18( empty local-lis/les=36/37 n=0 ec=30/15 lis/c=30/30 les/c/f=31/31/0 sis=36) [1] r=0 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:27:43 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 37 pg[5.18( empty local-lis/les=36/37 n=0 ec=30/16 lis/c=34/34 les/c/f=35/35/0 sis=36) [1] r=0 lpr=36 pi=[34,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:27:43 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 37 pg[4.1b( empty local-lis/les=36/37 n=0 ec=30/15 lis/c=30/30 les/c/f=31/31/0 sis=36) [1] r=0 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:27:43 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 37 pg[4.c( empty local-lis/les=36/37 n=0 ec=30/15 lis/c=30/30 les/c/f=31/31/0 sis=36) [1] r=0 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:27:43 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 37 pg[4.1a( empty local-lis/les=36/37 n=0 ec=30/15 lis/c=30/30 les/c/f=31/31/0 sis=36) [1] r=0 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:27:43 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 37 pg[6.1a( empty local-lis/les=36/37 n=0 ec=32/17 lis/c=32/32 les/c/f=33/33/0 sis=36) [1] r=0 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:27:43 compute-1 systemd[1]: Reloading.
Nov 24 09:27:43 compute-1 systemd-rc-local-generator[81342]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 09:27:43 compute-1 systemd-sysv-generator[81346]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 09:27:43 compute-1 systemd[1]: Starting Ceph rgw.rgw.compute-1.vproll for 84a084c3-61a7-5de7-8207-1f88efa59a64...
Nov 24 09:27:43 compute-1 ceph-osd[77497]: log_channel(cluster) log [DBG] : 7.1c scrub starts
Nov 24 09:27:43 compute-1 ceph-osd[77497]: log_channel(cluster) log [DBG] : 7.1c scrub ok
Nov 24 09:27:43 compute-1 podman[81397]: 2025-11-24 09:27:43.635782511 +0000 UTC m=+0.034400165 container create 057f5f36976684027f826afccaa0c17f9c2dd2c811eb88cfaf92b11a885c1e56 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-rgw-rgw-compute-1-vproll, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 09:27:43 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d585a4ed64a9fc51f3e12cd6e5d9a195791a1f2e75e3898e650cfab54229f34e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 09:27:43 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d585a4ed64a9fc51f3e12cd6e5d9a195791a1f2e75e3898e650cfab54229f34e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 09:27:43 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d585a4ed64a9fc51f3e12cd6e5d9a195791a1f2e75e3898e650cfab54229f34e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 09:27:43 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d585a4ed64a9fc51f3e12cd6e5d9a195791a1f2e75e3898e650cfab54229f34e/merged/var/lib/ceph/radosgw/ceph-rgw.rgw.compute-1.vproll supports timestamps until 2038 (0x7fffffff)
Nov 24 09:27:43 compute-1 podman[81397]: 2025-11-24 09:27:43.693283435 +0000 UTC m=+0.091901099 container init 057f5f36976684027f826afccaa0c17f9c2dd2c811eb88cfaf92b11a885c1e56 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-rgw-rgw-compute-1-vproll, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 09:27:43 compute-1 podman[81397]: 2025-11-24 09:27:43.698197478 +0000 UTC m=+0.096815132 container start 057f5f36976684027f826afccaa0c17f9c2dd2c811eb88cfaf92b11a885c1e56 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-rgw-rgw-compute-1-vproll, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 09:27:43 compute-1 bash[81397]: 057f5f36976684027f826afccaa0c17f9c2dd2c811eb88cfaf92b11a885c1e56
Nov 24 09:27:43 compute-1 podman[81397]: 2025-11-24 09:27:43.62061774 +0000 UTC m=+0.019235424 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 09:27:43 compute-1 systemd[1]: Started Ceph rgw.rgw.compute-1.vproll for 84a084c3-61a7-5de7-8207-1f88efa59a64.
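This block is cephadm's standard deployment shape: systemd starts a templated unit for the new rgw daemon, the unit's start script drives podman (create, init, start), and bash echoes the container id once the container is up. Assuming cephadm's usual ceph-<fsid>@<daemon>.service unit naming (an assumption here, inferred from the unit description in the log), the unit can be queried directly:

import subprocess

FSID = "84a084c3-61a7-5de7-8207-1f88efa59a64"
DAEMON = "rgw.rgw.compute-1.vproll"

# cephadm wraps each containerized daemon in a templated systemd unit;
# querying it goes through plain systemctl.
unit = f"ceph-{FSID}@{DAEMON}.service"
state = subprocess.run(
    ["systemctl", "is-active", unit],
    capture_output=True, text=True,
).stdout.strip()
print(unit, "->", state)  # "active" once the Started line above has appeared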
Nov 24 09:27:43 compute-1 sudo[81174]: pam_unix(sudo:session): session closed for user root
Nov 24 09:27:43 compute-1 radosgw[81417]: deferred set uid:gid to 167:167 (ceph:ceph)
Nov 24 09:27:43 compute-1 radosgw[81417]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process radosgw, pid 2
Nov 24 09:27:43 compute-1 radosgw[81417]: framework: beast
Nov 24 09:27:43 compute-1 radosgw[81417]: framework conf key: endpoint, val: 192.168.122.101:8082
Nov 24 09:27:43 compute-1 radosgw[81417]: init_numa not setting numa affinity
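The two framework lines echo the parsed rgw_frontends option: the beast frontend bound to 192.168.122.101:8082. A sketch of reading the effective value back with the standard ceph config CLI (daemon name copied from the log):

import subprocess

# rgw_frontends is the option behind the "framework"/"framework conf key"
# startup lines; here it resolves to "beast endpoint=192.168.122.101:8082".
out = subprocess.run(
    ["ceph", "config", "get", "client.rgw.rgw.compute-1.vproll", "rgw_frontends"],
    capture_output=True, text=True, check=True,
).stdout.strip()
print(out)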
Nov 24 09:27:44 compute-1 ceph-mon[80009]: Deploying daemon rgw.rgw.compute-1.vproll on compute-1
Nov 24 09:27:44 compute-1 ceph-mon[80009]: 2.1a scrub starts
Nov 24 09:27:44 compute-1 ceph-mon[80009]: 2.1a scrub ok
Nov 24 09:27:44 compute-1 ceph-mon[80009]: 5.0 scrub starts
Nov 24 09:27:44 compute-1 ceph-mon[80009]: 5.0 scrub ok
Nov 24 09:27:44 compute-1 ceph-mon[80009]: osdmap e37: 3 total, 3 up, 3 in
Nov 24 09:27:44 compute-1 ceph-mon[80009]: from='client.? 192.168.122.102:0/3262419163' entity='client.rgw.rgw.compute-2.qecnjt' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch
Nov 24 09:27:44 compute-1 ceph-mon[80009]: from='client.? ' entity='client.rgw.rgw.compute-2.qecnjt' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch
Nov 24 09:27:44 compute-1 ceph-mon[80009]: from='client.? 192.168.122.100:0/2662573742' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "dashboard"}]: dispatch
Nov 24 09:27:44 compute-1 ceph-mon[80009]: 6.1f scrub starts
Nov 24 09:27:44 compute-1 ceph-mon[80009]: 6.1f scrub ok
Nov 24 09:27:44 compute-1 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:27:44 compute-1 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:27:44 compute-1 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:27:44 compute-1 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.zlrxyg", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Nov 24 09:27:44 compute-1 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.zlrxyg", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Nov 24 09:27:44 compute-1 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:27:44 compute-1 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
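Each monitor command shows up in the audit channel at least twice: one or more dispatch entries as it is received and re-proposed, then a single finished entry once the change commits. A rough pairing sketch over journal text (regex and helper are illustrative; dispatch can legitimately be logged more than once per command, so leftovers are a heuristic, not proof of a stuck command):

import re
from collections import defaultdict

AUDIT = re.compile(
    r"entity='(?P<entity>[^']+)' cmd='?(?P<cmd>\[.*?\])'?: (?P<phase>dispatch|finished)"
)

def pending_commands(lines):
    """Count dispatch minus finished per (entity, cmd) pair."""
    pending = defaultdict(int)
    for line in lines:
        m = AUDIT.search(line)
        if not m:
            continue
        key = (m.group("entity"), m.group("cmd"))
        pending[key] += 1 if m.group("phase") == "dispatch" else -1
    return {k: v for k, v in pending.items() if v > 0}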
Nov 24 09:27:44 compute-1 ceph-mgr[80316]: mgr handle_mgr_map respawning because set of enabled modules changed!
Nov 24 09:27:44 compute-1 ceph-mgr[80316]: mgr respawn  e: '/usr/bin/ceph-mgr'
Nov 24 09:27:44 compute-1 ceph-mgr[80316]: mgr respawn  0: '/usr/bin/ceph-mgr'
Nov 24 09:27:44 compute-1 ceph-mgr[80316]: mgr respawn  1: '-n'
Nov 24 09:27:44 compute-1 ceph-mgr[80316]: mgr respawn  2: 'mgr.compute-1.qelqsg'
Nov 24 09:27:44 compute-1 ceph-mgr[80316]: mgr respawn  3: '-f'
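The respawn lines show how ceph-mgr reacts to a changed module set: it logs the saved executable path (e:) and argv (0:, 1:, ...) and then re-execs itself in place, keeping the same PID. A minimal sketch of that exec-in-place pattern (not the mgr's actual code):

import os, sys

def respawn() -> None:
    # The mgr logs the executable as "e: ..." and each argv entry as "N: ...",
    # then replaces the running process image; the PID does not change.
    exe = sys.executable
    argv = [exe] + sys.argv[1:]
    os.execv(exe, argv)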
Nov 24 09:27:44 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e38 e38: 3 total, 3 up, 3 in
Nov 24 09:27:44 compute-1 radosgw[81417]: rgw main: failed to create zonegroup with (17) File exists
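Three gateways are starting concurrently here, and each races to create the default zonegroup; the losers see errno 17 and carry on, so this line is expected noise on a healthy multi-RGW bootstrap rather than a failure. The errno is easy to confirm:

import errno, os

# (17) in the log is EEXIST: another gateway already created the zonegroup.
assert errno.EEXIST == 17
print(os.strerror(errno.EEXIST))  # "File exists"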
Nov 24 09:27:44 compute-1 sshd-session[72716]: Connection closed by 192.168.122.100 port 55836
Nov 24 09:27:44 compute-1 sshd-session[72743]: Connection closed by 192.168.122.100 port 55844
Nov 24 09:27:44 compute-1 sshd-session[72658]: Connection closed by 192.168.122.100 port 55814
Nov 24 09:27:44 compute-1 sshd-session[72772]: Connection closed by 192.168.122.100 port 55852
Nov 24 09:27:44 compute-1 sshd-session[72629]: Connection closed by 192.168.122.100 port 55808
Nov 24 09:27:44 compute-1 sshd-session[72687]: Connection closed by 192.168.122.100 port 55826
Nov 24 09:27:44 compute-1 sshd-session[72513]: Connection closed by 192.168.122.100 port 55774
Nov 24 09:27:44 compute-1 sshd-session[72600]: Connection closed by 192.168.122.100 port 55794
Nov 24 09:27:44 compute-1 sshd-session[72571]: Connection closed by 192.168.122.100 port 55788
Nov 24 09:27:44 compute-1 sshd-session[72542]: Connection closed by 192.168.122.100 port 55784
Nov 24 09:27:44 compute-1 sshd-session[72484]: Connection closed by 192.168.122.100 port 55758
Nov 24 09:27:44 compute-1 sshd-session[72483]: Connection closed by 192.168.122.100 port 55750
Nov 24 09:27:44 compute-1 sshd-session[72597]: pam_unix(sshd:session): session closed for user ceph-admin
Nov 24 09:27:44 compute-1 sshd-session[72510]: pam_unix(sshd:session): session closed for user ceph-admin
Nov 24 09:27:44 compute-1 systemd[1]: session-23.scope: Deactivated successfully.
Nov 24 09:27:44 compute-1 sshd-session[72478]: pam_unix(sshd:session): session closed for user ceph-admin
Nov 24 09:27:44 compute-1 systemd[1]: session-26.scope: Deactivated successfully.
Nov 24 09:27:44 compute-1 systemd-logind[823]: Session 23 logged out. Waiting for processes to exit.
Nov 24 09:27:44 compute-1 systemd[1]: session-22.scope: Deactivated successfully.
Nov 24 09:27:44 compute-1 systemd-logind[823]: Session 26 logged out. Waiting for processes to exit.
Nov 24 09:27:44 compute-1 systemd-logind[823]: Session 22 logged out. Waiting for processes to exit.
Nov 24 09:27:44 compute-1 sshd-session[72655]: pam_unix(sshd:session): session closed for user ceph-admin
Nov 24 09:27:44 compute-1 sshd-session[72740]: pam_unix(sshd:session): session closed for user ceph-admin
Nov 24 09:27:44 compute-1 sshd-session[72461]: pam_unix(sshd:session): session closed for user ceph-admin
Nov 24 09:27:44 compute-1 systemd[1]: session-20.scope: Deactivated successfully.
Nov 24 09:27:44 compute-1 sshd-session[72539]: pam_unix(sshd:session): session closed for user ceph-admin
Nov 24 09:27:44 compute-1 systemd[1]: session-28.scope: Deactivated successfully.
Nov 24 09:27:44 compute-1 systemd[1]: session-31.scope: Deactivated successfully.
Nov 24 09:27:44 compute-1 systemd-logind[823]: Session 31 logged out. Waiting for processes to exit.
Nov 24 09:27:44 compute-1 systemd[1]: session-24.scope: Deactivated successfully.
Nov 24 09:27:44 compute-1 systemd-logind[823]: Session 20 logged out. Waiting for processes to exit.
Nov 24 09:27:44 compute-1 systemd-logind[823]: Session 28 logged out. Waiting for processes to exit.
Nov 24 09:27:44 compute-1 systemd-logind[823]: Session 24 logged out. Waiting for processes to exit.
Nov 24 09:27:44 compute-1 sshd-session[72626]: pam_unix(sshd:session): session closed for user ceph-admin
Nov 24 09:27:44 compute-1 systemd[1]: session-27.scope: Deactivated successfully.
Nov 24 09:27:44 compute-1 sshd-session[72713]: pam_unix(sshd:session): session closed for user ceph-admin
Nov 24 09:27:44 compute-1 sshd-session[72769]: pam_unix(sshd:session): session closed for user ceph-admin
Nov 24 09:27:44 compute-1 systemd[1]: session-29.scope: Deactivated successfully.
Nov 24 09:27:44 compute-1 sshd-session[72684]: pam_unix(sshd:session): session closed for user ceph-admin
Nov 24 09:27:44 compute-1 sshd-session[72568]: pam_unix(sshd:session): session closed for user ceph-admin
Nov 24 09:27:44 compute-1 systemd[1]: session-32.scope: Deactivated successfully.
Nov 24 09:27:44 compute-1 systemd[1]: session-32.scope: Consumed 59.665s CPU time.
Nov 24 09:27:44 compute-1 systemd[1]: session-25.scope: Deactivated successfully.
Nov 24 09:27:44 compute-1 systemd-logind[823]: Removed session 23.
Nov 24 09:27:44 compute-1 systemd[1]: session-30.scope: Deactivated successfully.
Nov 24 09:27:44 compute-1 systemd-logind[823]: Session 27 logged out. Waiting for processes to exit.
Nov 24 09:27:44 compute-1 systemd-logind[823]: Session 29 logged out. Waiting for processes to exit.
Nov 24 09:27:44 compute-1 systemd-logind[823]: Session 32 logged out. Waiting for processes to exit.
Nov 24 09:27:44 compute-1 systemd-logind[823]: Session 30 logged out. Waiting for processes to exit.
Nov 24 09:27:44 compute-1 systemd-logind[823]: Session 25 logged out. Waiting for processes to exit.
Nov 24 09:27:44 compute-1 systemd-logind[823]: Removed session 26.
Nov 24 09:27:44 compute-1 systemd-logind[823]: Removed session 22.
Nov 24 09:27:44 compute-1 systemd-logind[823]: Removed session 20.
Nov 24 09:27:44 compute-1 systemd-logind[823]: Removed session 28.
Nov 24 09:27:44 compute-1 systemd-logind[823]: Removed session 31.
Nov 24 09:27:44 compute-1 systemd-logind[823]: Removed session 24.
Nov 24 09:27:44 compute-1 systemd-logind[823]: Removed session 27.
Nov 24 09:27:44 compute-1 systemd-logind[823]: Removed session 29.
Nov 24 09:27:44 compute-1 systemd-logind[823]: Removed session 32.
Nov 24 09:27:44 compute-1 systemd-logind[823]: Removed session 25.
Nov 24 09:27:44 compute-1 systemd-logind[823]: Removed session 30.
Nov 24 09:27:44 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-1-qelqsg[80312]: ignoring --setuser ceph since I am not root
Nov 24 09:27:44 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-1-qelqsg[80312]: ignoring --setgroup ceph since I am not root
Nov 24 09:27:44 compute-1 ceph-mgr[80316]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mgr, pid 2
Nov 24 09:27:44 compute-1 ceph-mgr[80316]: pidfile_write: ignore empty --pid-file
Nov 24 09:27:44 compute-1 ceph-mgr[80316]: mgr[py] Loading python module 'alerts'
Nov 24 09:27:44 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-1-qelqsg[80312]: 2025-11-24T09:27:44.421+0000 7fc2d4543140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Nov 24 09:27:44 compute-1 ceph-mgr[80316]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Nov 24 09:27:44 compute-1 ceph-mgr[80316]: mgr[py] Loading python module 'balancer'
Nov 24 09:27:44 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-1-qelqsg[80312]: 2025-11-24T09:27:44.501+0000 7fc2d4543140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Nov 24 09:27:44 compute-1 ceph-mgr[80316]: mgr[py] Module balancer has missing NOTIFY_TYPES member
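The repeated "-1 mgr[py] Module ... has missing NOTIFY_TYPES member" lines through the rest of this section are warnings only: a mgr module may declare which cluster notifications it consumes via a NOTIFY_TYPES class attribute so the mgr can skip needless callbacks, and modules that omit it still load. A sketch of the declaration, assuming the in-tree mgr_module API (importable only inside ceph-mgr's embedded interpreter; the enum members shown are an assumption):

# Sketch of a mgr module silencing the startup warning; assumes Ceph's
# in-tree mgr_module API, so this does not run as a standalone script.
from mgr_module import MgrModule, NotifyType

class Example(MgrModule):
    # Advertise the notifications this module wants to receive.
    NOTIFY_TYPES = [NotifyType.mon_map, NotifyType.pg_summary]

    def notify(self, notify_type, notify_id):
        self.log.debug("got %s", notify_type)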
Nov 24 09:27:44 compute-1 ceph-mgr[80316]: mgr[py] Loading python module 'cephadm'
Nov 24 09:27:44 compute-1 ceph-osd[77497]: log_channel(cluster) log [DBG] : 2.17 scrub starts
Nov 24 09:27:44 compute-1 ceph-osd[77497]: log_channel(cluster) log [DBG] : 2.17 scrub ok
Nov 24 09:27:45 compute-1 ceph-mon[80009]: 7.1c scrub starts
Nov 24 09:27:45 compute-1 ceph-mon[80009]: 7.1c scrub ok
Nov 24 09:27:45 compute-1 ceph-mon[80009]: 3.8 scrub starts
Nov 24 09:27:45 compute-1 ceph-mon[80009]: 3.8 scrub ok
Nov 24 09:27:45 compute-1 ceph-mon[80009]: Deploying daemon rgw.rgw.compute-0.zlrxyg on compute-0
Nov 24 09:27:45 compute-1 ceph-mon[80009]: pgmap v99: 194 pgs: 1 unknown, 1 active+clean+scrubbing, 192 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 24 09:27:45 compute-1 ceph-mon[80009]: from='client.? 192.168.122.100:0/2662573742' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "dashboard"}]': finished
Nov 24 09:27:45 compute-1 ceph-mon[80009]: mgrmap e13: compute-0.mauvni(active, since 2m), standbys: compute-2.rzcnzg, compute-1.qelqsg
Nov 24 09:27:45 compute-1 ceph-mon[80009]: from='client.? ' entity='client.rgw.rgw.compute-2.qecnjt' cmd='[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]': finished
Nov 24 09:27:45 compute-1 ceph-mon[80009]: osdmap e38: 3 total, 3 up, 3 in
Nov 24 09:27:45 compute-1 ceph-mon[80009]: 6.c scrub starts
Nov 24 09:27:45 compute-1 ceph-mon[80009]: 6.c scrub ok
Nov 24 09:27:45 compute-1 ceph-mon[80009]: Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Nov 24 09:27:45 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e39 e39: 3 total, 3 up, 3 in
Nov 24 09:27:45 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"} v 0)
Nov 24 09:27:45 compute-1 ceph-mon[80009]: log_channel(audit) log [INF] : from='client.? 192.168.122.101:0/2580956473' entity='client.rgw.rgw.compute-1.vproll' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Nov 24 09:27:45 compute-1 ceph-mgr[80316]: mgr[py] Loading python module 'crash'
Nov 24 09:27:45 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-1-qelqsg[80312]: 2025-11-24T09:27:45.319+0000 7fc2d4543140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Nov 24 09:27:45 compute-1 ceph-mgr[80316]: mgr[py] Module crash has missing NOTIFY_TYPES member
Nov 24 09:27:45 compute-1 ceph-mgr[80316]: mgr[py] Loading python module 'dashboard'
Nov 24 09:27:45 compute-1 ceph-osd[77497]: log_channel(cluster) log [DBG] : 7.12 scrub starts
Nov 24 09:27:45 compute-1 ceph-osd[77497]: log_channel(cluster) log [DBG] : 7.12 scrub ok
Nov 24 09:27:45 compute-1 ceph-mgr[80316]: mgr[py] Loading python module 'devicehealth'
Nov 24 09:27:45 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-1-qelqsg[80312]: 2025-11-24T09:27:45.972+0000 7fc2d4543140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Nov 24 09:27:45 compute-1 ceph-mgr[80316]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Nov 24 09:27:45 compute-1 ceph-mgr[80316]: mgr[py] Loading python module 'diskprediction_local'
Nov 24 09:27:46 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-1-qelqsg[80312]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Nov 24 09:27:46 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-1-qelqsg[80312]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Nov 24 09:27:46 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-1-qelqsg[80312]:   from numpy import show_config as show_numpy_config
Nov 24 09:27:46 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-1-qelqsg[80312]: 2025-11-24T09:27:46.137+0000 7fc2d4543140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Nov 24 09:27:46 compute-1 ceph-mgr[80316]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Nov 24 09:27:46 compute-1 ceph-mgr[80316]: mgr[py] Loading python module 'influx'
Nov 24 09:27:46 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e40 e40: 3 total, 3 up, 3 in
Nov 24 09:27:46 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-1-qelqsg[80312]: 2025-11-24T09:27:46.206+0000 7fc2d4543140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Nov 24 09:27:46 compute-1 ceph-mgr[80316]: mgr[py] Module influx has missing NOTIFY_TYPES member
Nov 24 09:27:46 compute-1 ceph-mgr[80316]: mgr[py] Loading python module 'insights'
Nov 24 09:27:46 compute-1 ceph-mon[80009]: 2.17 scrub starts
Nov 24 09:27:46 compute-1 ceph-mon[80009]: 2.17 scrub ok
Nov 24 09:27:46 compute-1 ceph-mon[80009]: 5.e deep-scrub starts
Nov 24 09:27:46 compute-1 ceph-mon[80009]: 5.e deep-scrub ok
Nov 24 09:27:46 compute-1 ceph-mon[80009]: osdmap e39: 3 total, 3 up, 3 in
Nov 24 09:27:46 compute-1 ceph-mon[80009]: from='client.? 192.168.122.101:0/2580956473' entity='client.rgw.rgw.compute-1.vproll' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Nov 24 09:27:46 compute-1 ceph-mon[80009]: from='client.? ' entity='client.rgw.rgw.compute-1.vproll' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Nov 24 09:27:46 compute-1 ceph-mon[80009]: from='client.? ' entity='client.rgw.rgw.compute-2.qecnjt' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Nov 24 09:27:46 compute-1 ceph-mon[80009]: from='client.? 192.168.122.102:0/2761939167' entity='client.rgw.rgw.compute-2.qecnjt' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Nov 24 09:27:46 compute-1 ceph-mon[80009]: 4.f scrub starts
Nov 24 09:27:46 compute-1 ceph-mon[80009]: 4.f scrub ok
Nov 24 09:27:46 compute-1 ceph-mgr[80316]: mgr[py] Loading python module 'iostat'
Nov 24 09:27:46 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-1-qelqsg[80312]: 2025-11-24T09:27:46.344+0000 7fc2d4543140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Nov 24 09:27:46 compute-1 ceph-mgr[80316]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Nov 24 09:27:46 compute-1 ceph-mgr[80316]: mgr[py] Loading python module 'k8sevents'
Nov 24 09:27:46 compute-1 ceph-osd[77497]: log_channel(cluster) log [DBG] : 2.16 scrub starts
Nov 24 09:27:46 compute-1 ceph-osd[77497]: log_channel(cluster) log [DBG] : 2.16 scrub ok
Nov 24 09:27:46 compute-1 ceph-mgr[80316]: mgr[py] Loading python module 'localpool'
Nov 24 09:27:46 compute-1 ceph-mgr[80316]: mgr[py] Loading python module 'mds_autoscaler'
Nov 24 09:27:47 compute-1 ceph-mgr[80316]: mgr[py] Loading python module 'mirroring'
Nov 24 09:27:47 compute-1 ceph-mgr[80316]: mgr[py] Loading python module 'nfs'
Nov 24 09:27:47 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e41 e41: 3 total, 3 up, 3 in
Nov 24 09:27:47 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"} v 0)
Nov 24 09:27:47 compute-1 ceph-mon[80009]: log_channel(audit) log [INF] : from='client.? 192.168.122.101:0/2580956473' entity='client.rgw.rgw.compute-1.vproll' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Nov 24 09:27:47 compute-1 ceph-mon[80009]: 7.12 scrub starts
Nov 24 09:27:47 compute-1 ceph-mon[80009]: 7.12 scrub ok
Nov 24 09:27:47 compute-1 ceph-mon[80009]: 5.4 scrub starts
Nov 24 09:27:47 compute-1 ceph-mon[80009]: 5.4 scrub ok
Nov 24 09:27:47 compute-1 ceph-mon[80009]: from='client.? ' entity='client.rgw.rgw.compute-1.vproll' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Nov 24 09:27:47 compute-1 ceph-mon[80009]: from='client.? ' entity='client.rgw.rgw.compute-2.qecnjt' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Nov 24 09:27:47 compute-1 ceph-mon[80009]: osdmap e40: 3 total, 3 up, 3 in
Nov 24 09:27:47 compute-1 ceph-mon[80009]: 2.16 scrub starts
Nov 24 09:27:47 compute-1 ceph-mon[80009]: 2.16 scrub ok
Nov 24 09:27:47 compute-1 ceph-mon[80009]: 4.4 scrub starts
Nov 24 09:27:47 compute-1 ceph-mon[80009]: 4.4 scrub ok
Nov 24 09:27:47 compute-1 ceph-mon[80009]: osdmap e41: 3 total, 3 up, 3 in
Nov 24 09:27:47 compute-1 ceph-mon[80009]: from='client.? 192.168.122.100:0/2097383266' entity='client.rgw.rgw.compute-0.zlrxyg' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Nov 24 09:27:47 compute-1 ceph-mon[80009]: from='client.? 192.168.122.101:0/2580956473' entity='client.rgw.rgw.compute-1.vproll' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Nov 24 09:27:47 compute-1 ceph-mon[80009]: from='client.? ' entity='client.rgw.rgw.compute-1.vproll' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Nov 24 09:27:47 compute-1 ceph-mon[80009]: from='client.? ' entity='client.rgw.rgw.compute-2.qecnjt' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Nov 24 09:27:47 compute-1 ceph-mon[80009]: from='client.? 192.168.122.102:0/2761939167' entity='client.rgw.rgw.compute-2.qecnjt' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Nov 24 09:27:47 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-1-qelqsg[80312]: 2025-11-24T09:27:47.341+0000 7fc2d4543140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Nov 24 09:27:47 compute-1 ceph-mgr[80316]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Nov 24 09:27:47 compute-1 ceph-mgr[80316]: mgr[py] Loading python module 'orchestrator'
Nov 24 09:27:47 compute-1 ceph-osd[77497]: log_channel(cluster) log [DBG] : 2.14 scrub starts
Nov 24 09:27:47 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 41 pg[10.0( empty local-lis/les=0/0 n=0 ec=41/41 lis/c=0/0 les/c/f=0/0/0 sis=41) [1] r=0 lpr=41 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:27:47 compute-1 ceph-osd[77497]: log_channel(cluster) log [DBG] : 2.14 scrub ok
Nov 24 09:27:47 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-1-qelqsg[80312]: 2025-11-24T09:27:47.561+0000 7fc2d4543140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Nov 24 09:27:47 compute-1 ceph-mgr[80316]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Nov 24 09:27:47 compute-1 ceph-mgr[80316]: mgr[py] Loading python module 'osd_perf_query'
Nov 24 09:27:47 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-1-qelqsg[80312]: 2025-11-24T09:27:47.639+0000 7fc2d4543140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Nov 24 09:27:47 compute-1 ceph-mgr[80316]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Nov 24 09:27:47 compute-1 ceph-mgr[80316]: mgr[py] Loading python module 'osd_support'
Nov 24 09:27:47 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-1-qelqsg[80312]: 2025-11-24T09:27:47.706+0000 7fc2d4543140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Nov 24 09:27:47 compute-1 ceph-mgr[80316]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Nov 24 09:27:47 compute-1 ceph-mgr[80316]: mgr[py] Loading python module 'pg_autoscaler'
Nov 24 09:27:47 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e41 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
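_set_new_cache_sizes is the monitor rebalancing its memory budget between the incremental-osdmap, full-osdmap, and RocksDB caches. The byte counts in the line above are easier to read in MiB (pure arithmetic on the logged values):

# Byte values copied from the _set_new_cache_sizes line; conversion only.
for name, b in [("cache_size", 1020054731),
                ("inc_alloc", 348127232),
                ("full_alloc", 348127232),
                ("kv_alloc", 322961408)]:
    print(f"{name:10s} {b / 2**20:8.1f} MiB")
# cache_size ~972.8 MiB, inc/full_alloc 332.0 MiB each, kv_alloc 308.0 MiB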
Nov 24 09:27:47 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-1-qelqsg[80312]: 2025-11-24T09:27:47.788+0000 7fc2d4543140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Nov 24 09:27:47 compute-1 ceph-mgr[80316]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Nov 24 09:27:47 compute-1 ceph-mgr[80316]: mgr[py] Loading python module 'progress'
Nov 24 09:27:47 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-1-qelqsg[80312]: 2025-11-24T09:27:47.861+0000 7fc2d4543140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Nov 24 09:27:47 compute-1 ceph-mgr[80316]: mgr[py] Module progress has missing NOTIFY_TYPES member
Nov 24 09:27:47 compute-1 ceph-mgr[80316]: mgr[py] Loading python module 'prometheus'
Nov 24 09:27:48 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-1-qelqsg[80312]: 2025-11-24T09:27:48.208+0000 7fc2d4543140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Nov 24 09:27:48 compute-1 ceph-mgr[80316]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Nov 24 09:27:48 compute-1 ceph-mgr[80316]: mgr[py] Loading python module 'rbd_support'
Nov 24 09:27:48 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e42 e42: 3 total, 3 up, 3 in
Nov 24 09:27:48 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 42 pg[10.0( empty local-lis/les=41/42 n=0 ec=41/41 lis/c=0/0 les/c/f=0/0/0 sis=41) [1] r=0 lpr=41 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:27:48 compute-1 ceph-mon[80009]: 3.1d deep-scrub starts
Nov 24 09:27:48 compute-1 ceph-mon[80009]: 3.1d deep-scrub ok
Nov 24 09:27:48 compute-1 ceph-mon[80009]: 2.14 scrub starts
Nov 24 09:27:48 compute-1 ceph-mon[80009]: 2.14 scrub ok
Nov 24 09:27:48 compute-1 ceph-mon[80009]: 6.6 scrub starts
Nov 24 09:27:48 compute-1 ceph-mon[80009]: 6.6 scrub ok
Nov 24 09:27:48 compute-1 ceph-mon[80009]: from='client.? 192.168.122.100:0/2097383266' entity='client.rgw.rgw.compute-0.zlrxyg' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Nov 24 09:27:48 compute-1 ceph-mon[80009]: from='client.? ' entity='client.rgw.rgw.compute-1.vproll' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Nov 24 09:27:48 compute-1 ceph-mon[80009]: from='client.? ' entity='client.rgw.rgw.compute-2.qecnjt' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Nov 24 09:27:48 compute-1 ceph-mon[80009]: osdmap e42: 3 total, 3 up, 3 in
Nov 24 09:27:48 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-1-qelqsg[80312]: 2025-11-24T09:27:48.306+0000 7fc2d4543140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Nov 24 09:27:48 compute-1 ceph-mgr[80316]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Nov 24 09:27:48 compute-1 ceph-mgr[80316]: mgr[py] Loading python module 'restful'
Nov 24 09:27:48 compute-1 ceph-osd[77497]: log_channel(cluster) log [DBG] : 7.17 scrub starts
Nov 24 09:27:48 compute-1 ceph-osd[77497]: log_channel(cluster) log [DBG] : 7.17 scrub ok
Nov 24 09:27:48 compute-1 ceph-mgr[80316]: mgr[py] Loading python module 'rgw'
Nov 24 09:27:48 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-1-qelqsg[80312]: 2025-11-24T09:27:48.744+0000 7fc2d4543140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Nov 24 09:27:48 compute-1 ceph-mgr[80316]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Nov 24 09:27:48 compute-1 ceph-mgr[80316]: mgr[py] Loading python module 'rook'
Nov 24 09:27:49 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e43 e43: 3 total, 3 up, 3 in
Nov 24 09:27:49 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"} v 0)
Nov 24 09:27:49 compute-1 ceph-mon[80009]: log_channel(audit) log [INF] : from='client.? 192.168.122.101:0/2580956473' entity='client.rgw.rgw.compute-1.vproll' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Nov 24 09:27:49 compute-1 ceph-mon[80009]: 5.1a scrub starts
Nov 24 09:27:49 compute-1 ceph-mon[80009]: 5.1a scrub ok
Nov 24 09:27:49 compute-1 ceph-mon[80009]: 7.17 scrub starts
Nov 24 09:27:49 compute-1 ceph-mon[80009]: 7.17 scrub ok
Nov 24 09:27:49 compute-1 ceph-mon[80009]: 6.4 deep-scrub starts
Nov 24 09:27:49 compute-1 ceph-mon[80009]: 6.4 deep-scrub ok
Nov 24 09:27:49 compute-1 ceph-mon[80009]: osdmap e43: 3 total, 3 up, 3 in
Nov 24 09:27:49 compute-1 ceph-mon[80009]: from='client.? 192.168.122.100:0/2097383266' entity='client.rgw.rgw.compute-0.zlrxyg' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Nov 24 09:27:49 compute-1 ceph-mon[80009]: from='client.? 192.168.122.101:0/2580956473' entity='client.rgw.rgw.compute-1.vproll' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Nov 24 09:27:49 compute-1 ceph-mon[80009]: from='client.? ' entity='client.rgw.rgw.compute-1.vproll' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Nov 24 09:27:49 compute-1 ceph-mon[80009]: from='client.? ' entity='client.rgw.rgw.compute-2.qecnjt' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Nov 24 09:27:49 compute-1 ceph-mon[80009]: from='client.? 192.168.122.102:0/2761939167' entity='client.rgw.rgw.compute-2.qecnjt' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Nov 24 09:27:49 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-1-qelqsg[80312]: 2025-11-24T09:27:49.304+0000 7fc2d4543140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Nov 24 09:27:49 compute-1 ceph-mgr[80316]: mgr[py] Module rook has missing NOTIFY_TYPES member
Nov 24 09:27:49 compute-1 ceph-mgr[80316]: mgr[py] Loading python module 'selftest'
Nov 24 09:27:49 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-1-qelqsg[80312]: 2025-11-24T09:27:49.378+0000 7fc2d4543140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Nov 24 09:27:49 compute-1 ceph-mgr[80316]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Nov 24 09:27:49 compute-1 ceph-mgr[80316]: mgr[py] Loading python module 'snap_schedule'
Nov 24 09:27:49 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-1-qelqsg[80312]: 2025-11-24T09:27:49.459+0000 7fc2d4543140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Nov 24 09:27:49 compute-1 ceph-mgr[80316]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Nov 24 09:27:49 compute-1 ceph-mgr[80316]: mgr[py] Loading python module 'stats'
Nov 24 09:27:49 compute-1 ceph-osd[77497]: log_channel(cluster) log [DBG] : 2.11 scrub starts
Nov 24 09:27:49 compute-1 ceph-osd[77497]: log_channel(cluster) log [DBG] : 2.11 scrub ok
Nov 24 09:27:49 compute-1 ceph-mgr[80316]: mgr[py] Loading python module 'status'
Nov 24 09:27:49 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-1-qelqsg[80312]: 2025-11-24T09:27:49.606+0000 7fc2d4543140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Nov 24 09:27:49 compute-1 ceph-mgr[80316]: mgr[py] Module status has missing NOTIFY_TYPES member
Nov 24 09:27:49 compute-1 ceph-mgr[80316]: mgr[py] Loading python module 'telegraf'
Nov 24 09:27:49 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-1-qelqsg[80312]: 2025-11-24T09:27:49.678+0000 7fc2d4543140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Nov 24 09:27:49 compute-1 ceph-mgr[80316]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Nov 24 09:27:49 compute-1 ceph-mgr[80316]: mgr[py] Loading python module 'telemetry'
Nov 24 09:27:49 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-1-qelqsg[80312]: 2025-11-24T09:27:49.837+0000 7fc2d4543140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Nov 24 09:27:49 compute-1 ceph-mgr[80316]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Nov 24 09:27:49 compute-1 ceph-mgr[80316]: mgr[py] Loading python module 'test_orchestrator'
Nov 24 09:27:50 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-1-qelqsg[80312]: 2025-11-24T09:27:50.061+0000 7fc2d4543140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Nov 24 09:27:50 compute-1 ceph-mgr[80316]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Nov 24 09:27:50 compute-1 ceph-mgr[80316]: mgr[py] Loading python module 'volumes'
Nov 24 09:27:50 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e44 e44: 3 total, 3 up, 3 in
Nov 24 09:27:50 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"} v 0)
Nov 24 09:27:50 compute-1 ceph-mon[80009]: log_channel(audit) log [INF] : from='client.? 192.168.122.101:0/2580956473' entity='client.rgw.rgw.compute-1.vproll' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Nov 24 09:27:50 compute-1 ceph-mon[80009]: 3.1a scrub starts
Nov 24 09:27:50 compute-1 ceph-mon[80009]: 3.1a scrub ok
Nov 24 09:27:50 compute-1 ceph-mon[80009]: 2.11 scrub starts
Nov 24 09:27:50 compute-1 ceph-mon[80009]: 2.11 scrub ok
Nov 24 09:27:50 compute-1 ceph-mon[80009]: 6.0 scrub starts
Nov 24 09:27:50 compute-1 ceph-mon[80009]: 6.0 scrub ok
Nov 24 09:27:50 compute-1 ceph-mon[80009]: from='client.? 192.168.122.100:0/2097383266' entity='client.rgw.rgw.compute-0.zlrxyg' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Nov 24 09:27:50 compute-1 ceph-mon[80009]: from='client.? ' entity='client.rgw.rgw.compute-1.vproll' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Nov 24 09:27:50 compute-1 ceph-mon[80009]: from='client.? ' entity='client.rgw.rgw.compute-2.qecnjt' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Nov 24 09:27:50 compute-1 ceph-mon[80009]: osdmap e44: 3 total, 3 up, 3 in
Nov 24 09:27:50 compute-1 ceph-mon[80009]: from='client.? 192.168.122.100:0/2097383266' entity='client.rgw.rgw.compute-0.zlrxyg' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Nov 24 09:27:50 compute-1 ceph-mon[80009]: from='client.? 192.168.122.101:0/2580956473' entity='client.rgw.rgw.compute-1.vproll' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Nov 24 09:27:50 compute-1 ceph-mon[80009]: from='client.? ' entity='client.rgw.rgw.compute-1.vproll' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Nov 24 09:27:50 compute-1 ceph-mon[80009]: from='client.? ' entity='client.rgw.rgw.compute-2.qecnjt' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Nov 24 09:27:50 compute-1 ceph-mon[80009]: from='client.? 192.168.122.102:0/2761939167' entity='client.rgw.rgw.compute-2.qecnjt' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
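Beyond tagging its pools with the rgw application, RGW raises pg_autoscale_bias to 4 on default.rgw.meta so the pg_autoscaler grants this small but metadata-hot pool more PGs than its byte usage alone would earn. The equivalent manual invocation (real CLI; values taken from the log, and idempotent since RGW has already applied it):

import subprocess

# Same knob the gateway sets on its metadata pool; bias > 1 weights the
# autoscaler's PG target upward for the pool.
subprocess.run(["ceph", "osd", "pool", "set",
                "default.rgw.meta", "pg_autoscale_bias", "4"], check=True)
# Inspect the resulting targets:
subprocess.run(["ceph", "osd", "pool", "autoscale-status"], check=True)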
Nov 24 09:27:50 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-1-qelqsg[80312]: 2025-11-24T09:27:50.326+0000 7fc2d4543140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Nov 24 09:27:50 compute-1 ceph-mgr[80316]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Nov 24 09:27:50 compute-1 ceph-mgr[80316]: mgr[py] Loading python module 'zabbix'
Nov 24 09:27:50 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-1-qelqsg[80312]: 2025-11-24T09:27:50.400+0000 7fc2d4543140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Nov 24 09:27:50 compute-1 ceph-mgr[80316]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Nov 24 09:27:50 compute-1 ceph-mgr[80316]: [dashboard DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 24 09:27:50 compute-1 ceph-mgr[80316]: mgr load Constructed class from module: dashboard
Nov 24 09:27:50 compute-1 ceph-mgr[80316]: [dashboard INFO root] server: ssl=no host=192.168.122.101 port=8443
Nov 24 09:27:50 compute-1 ceph-mgr[80316]: [dashboard INFO root] Configured CherryPy, starting engine...
Nov 24 09:27:50 compute-1 ceph-mgr[80316]: [dashboard INFO root] Starting engine...
Nov 24 09:27:50 compute-1 ceph-mgr[80316]: ms_deliver_dispatch: unhandled message 0x55d8b984f860 mon_map magic: 0 from mon.2 v2:192.168.122.101:3300/0
Nov 24 09:27:50 compute-1 ceph-osd[77497]: log_channel(cluster) log [DBG] : 7.15 scrub starts
Nov 24 09:27:50 compute-1 ceph-mgr[80316]: [dashboard INFO root] Engine started...
Nov 24 09:27:50 compute-1 ceph-osd[77497]: log_channel(cluster) log [DBG] : 7.15 scrub ok
Nov 24 09:27:50 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e45 e45: 3 total, 3 up, 3 in
Nov 24 09:27:50 compute-1 radosgw[81417]: v1 topic migration: starting v1 topic migration..
Nov 24 09:27:50 compute-1 radosgw[81417]: LDAP not started since no server URIs were provided in the configuration.
Nov 24 09:27:50 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-rgw-rgw-compute-1-vproll[81413]: 2025-11-24T09:27:50.893+0000 7faaa9bf1980 -1 LDAP not started since no server URIs were provided in the configuration.
Nov 24 09:27:50 compute-1 radosgw[81417]: v1 topic migration: finished v1 topic migration
Nov 24 09:27:50 compute-1 radosgw[81417]: INFO: RGWReshardLock::lock found lock on reshard.0000000001 to be held by another RGW process; skipping for now
Nov 24 09:27:50 compute-1 radosgw[81417]: framework: beast
Nov 24 09:27:50 compute-1 radosgw[81417]: framework conf key: ssl_certificate, val: config://rgw/cert/$realm/$zone.crt
Nov 24 09:27:50 compute-1 radosgw[81417]: framework conf key: ssl_private_key, val: config://rgw/cert/$realm/$zone.key
Nov 24 09:27:50 compute-1 radosgw[81417]: INFO: RGWReshardLock::lock found lock on reshard.0000000002 to be held by another RGW process; skipping for now
Nov 24 09:27:50 compute-1 radosgw[81417]: INFO: RGWReshardLock::lock found lock on reshard.0000000003 to be held by another RGW process; skipping for now
Nov 24 09:27:50 compute-1 radosgw[81417]: starting handler: beast
Nov 24 09:27:50 compute-1 radosgw[81417]: set uid:gid to 167:167 (ceph:ceph)
Nov 24 09:27:50 compute-1 radosgw[81417]: mgrc service_daemon_register rgw.24191 metadata {arch=x86_64,ceph_release=squid,ceph_version=ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable),ceph_version_short=19.2.3,container_hostname=compute-1,container_image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec,cpu=AMD EPYC-Rome Processor,distro=centos,distro_description=CentOS Stream 9,distro_version=9,frontend_config#0=beast endpoint=192.168.122.101:8082,frontend_type#0=beast,hostname=compute-1,id=rgw.compute-1.vproll,kernel_description=#1 SMP PREEMPT_DYNAMIC Sat Nov 15 10:30:41 UTC 2025,kernel_version=5.14.0-639.el9.x86_64,mem_swap_kb=1048572,mem_total_kb=7864320,num_handles=1,os=Linux,pid=2,realm_id=,realm_name=,zone_id=0565e2b2-234e-414b-b909-932048ceb050,zone_name=default,zonegroup_id=5f03f326-32a0-4275-804c-1875d841eeca,zonegroup_name=default}
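service_daemon_register is the gateway introducing itself to the mgr with a flat key=value metadata blob (arch, container image, zone and zonegroup ids, frontend config). Folding that one log field into a dict is a short parsing sketch (it assumes values contain no commas, which holds for this line):

def parse_daemon_metadata(blob: str) -> dict:
    """Split '{k1=v1,k2=v2,...}' from a service_daemon_register line.

    Naive: assumes no commas inside values; partition keeps any later '='
    (e.g. "frontend_config#0=beast endpoint=...") inside the value.
    """
    out = {}
    for field in blob.strip("{}").split(","):
        key, _, value = field.partition("=")
        out[key] = value
    return out

meta = parse_daemon_metadata("{arch=x86_64,ceph_release=squid,zone_name=default}")
print(meta["zone_name"])  # default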
Nov 24 09:27:50 compute-1 radosgw[81417]: INFO: RGWReshardLock::lock found lock on reshard.0000000007 to be held by another RGW process; skipping for now
Nov 24 09:27:50 compute-1 radosgw[81417]: INFO: RGWReshardLock::lock found lock on reshard.0000000009 to be held by another RGW process; skipping for now
Nov 24 09:27:51 compute-1 radosgw[81417]: INFO: RGWReshardLock::lock found lock on reshard.0000000012 to be held by another RGW process; skipping for now
Nov 24 09:27:51 compute-1 radosgw[81417]: INFO: RGWReshardLock::lock found lock on reshard.0000000013 to be held by another RGW process; skipping for now
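The bucket-reshard queue is sharded across numbered RADOS objects, and every RGW periodically scans the shards; RGWReshardLock serializes processing, so these INFO lines only mean a peer gateway reached a shard first and this one skips it. The shard names follow a zero-padded pattern (formatting sketch only; the shard count itself comes from the rgw_reshard_num_logs option):

# Reshard queue shard object names as seen in the log: "reshard." plus a
# zero-padded decimal index.
def reshard_shard_name(index: int) -> str:
    return f"reshard.{index:010d}"

print(reshard_shard_name(1))   # reshard.0000000001
print(reshard_shard_name(13))  # reshard.0000000013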
Nov 24 09:27:51 compute-1 sshd-session[82081]: Accepted publickey for ceph-admin from 192.168.122.100 port 60824 ssh2: RSA SHA256:d901dNHY28a6fGfVJZBiZ/6DokdrVSFZakqDQ7cQMIA
Nov 24 09:27:51 compute-1 systemd-logind[823]: New session 33 of user ceph-admin.
Nov 24 09:27:51 compute-1 systemd[1]: Started Session 33 of User ceph-admin.
Nov 24 09:27:51 compute-1 sshd-session[82081]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Nov 24 09:27:51 compute-1 ceph-mon[80009]: 3.9 scrub starts
Nov 24 09:27:51 compute-1 ceph-mon[80009]: 3.9 scrub ok
Nov 24 09:27:51 compute-1 ceph-mon[80009]: Standby manager daemon compute-2.rzcnzg restarted
Nov 24 09:27:51 compute-1 ceph-mon[80009]: Standby manager daemon compute-2.rzcnzg started
Nov 24 09:27:51 compute-1 ceph-mon[80009]: Standby manager daemon compute-1.qelqsg restarted
Nov 24 09:27:51 compute-1 ceph-mon[80009]: Standby manager daemon compute-1.qelqsg started
Nov 24 09:27:51 compute-1 ceph-mon[80009]: 7.15 scrub starts
Nov 24 09:27:51 compute-1 ceph-mon[80009]: 7.15 scrub ok
Nov 24 09:27:51 compute-1 ceph-mon[80009]: 4.0 deep-scrub starts
Nov 24 09:27:51 compute-1 ceph-mon[80009]: Active manager daemon compute-0.mauvni restarted
Nov 24 09:27:51 compute-1 ceph-mon[80009]: Activating manager daemon compute-0.mauvni
Nov 24 09:27:51 compute-1 ceph-mon[80009]: 4.0 deep-scrub ok
Nov 24 09:27:51 compute-1 ceph-mon[80009]: from='client.? 192.168.122.100:0/2097383266' entity='client.rgw.rgw.compute-0.zlrxyg' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Nov 24 09:27:51 compute-1 ceph-mon[80009]: from='client.? ' entity='client.rgw.rgw.compute-1.vproll' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Nov 24 09:27:51 compute-1 ceph-mon[80009]: from='client.? ' entity='client.rgw.rgw.compute-2.qecnjt' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Nov 24 09:27:51 compute-1 ceph-mon[80009]: osdmap e45: 3 total, 3 up, 3 in
Nov 24 09:27:51 compute-1 ceph-mon[80009]: mgrmap e14: compute-0.mauvni(active, starting, since 0.0337997s), standbys: compute-2.rzcnzg, compute-1.qelqsg
Nov 24 09:27:51 compute-1 ceph-mon[80009]: from='mgr.14364 192.168.122.100:0/3495962044' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Nov 24 09:27:51 compute-1 ceph-mon[80009]: from='mgr.14364 192.168.122.100:0/3495962044' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Nov 24 09:27:51 compute-1 ceph-mon[80009]: from='mgr.14364 192.168.122.100:0/3495962044' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Nov 24 09:27:51 compute-1 ceph-mon[80009]: from='mgr.14364 192.168.122.100:0/3495962044' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "mgr metadata", "who": "compute-0.mauvni", "id": "compute-0.mauvni"}]: dispatch
Nov 24 09:27:51 compute-1 ceph-mon[80009]: from='mgr.14364 192.168.122.100:0/3495962044' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "mgr metadata", "who": "compute-2.rzcnzg", "id": "compute-2.rzcnzg"}]: dispatch
Nov 24 09:27:51 compute-1 ceph-mon[80009]: from='mgr.14364 192.168.122.100:0/3495962044' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "mgr metadata", "who": "compute-1.qelqsg", "id": "compute-1.qelqsg"}]: dispatch
Nov 24 09:27:51 compute-1 ceph-mon[80009]: from='mgr.14364 192.168.122.100:0/3495962044' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 24 09:27:51 compute-1 ceph-mon[80009]: from='mgr.14364 192.168.122.100:0/3495962044' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 24 09:27:51 compute-1 ceph-mon[80009]: from='mgr.14364 192.168.122.100:0/3495962044' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 24 09:27:51 compute-1 ceph-mon[80009]: from='mgr.14364 192.168.122.100:0/3495962044' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "mds metadata"}]: dispatch
Nov 24 09:27:51 compute-1 ceph-mon[80009]: from='mgr.14364 192.168.122.100:0/3495962044' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd metadata"}]: dispatch
Nov 24 09:27:51 compute-1 ceph-mon[80009]: from='mgr.14364 192.168.122.100:0/3495962044' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "mon metadata"}]: dispatch
Nov 24 09:27:51 compute-1 ceph-mon[80009]: Manager daemon compute-0.mauvni is now available
Nov 24 09:27:51 compute-1 ceph-mon[80009]: from='mgr.14364 192.168.122.100:0/3495962044' entity='mgr.compute-0.mauvni' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.mauvni/mirror_snapshot_schedule"}]: dispatch
Nov 24 09:27:51 compute-1 ceph-mon[80009]: from='mgr.14364 192.168.122.100:0/3495962044' entity='mgr.compute-0.mauvni' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.mauvni/trash_purge_schedule"}]: dispatch
Nov 24 09:27:51 compute-1 sudo[82086]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 09:27:51 compute-1 sudo[82086]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:27:51 compute-1 sudo[82086]: pam_unix(sudo:session): session closed for user root
Nov 24 09:27:51 compute-1 sudo[82111]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ls
Nov 24 09:27:51 compute-1 sudo[82111]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:27:51 compute-1 ceph-osd[77497]: log_channel(cluster) log [DBG] : 2.3 scrub starts
Nov 24 09:27:51 compute-1 ceph-osd[77497]: log_channel(cluster) log [DBG] : 2.3 scrub ok
Nov 24 09:27:51 compute-1 podman[82208]: 2025-11-24 09:27:51.88508466 +0000 UTC m=+0.056424058 container exec fca3d6a645ca50145f34396c21cf8798c75622ec7e27bb7d7b9d2df471762abc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-crash-compute-1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325)
Nov 24 09:27:51 compute-1 podman[82208]: 2025-11-24 09:27:51.983170183 +0000 UTC m=+0.154509571 container exec_died fca3d6a645ca50145f34396c21cf8798c75622ec7e27bb7d7b9d2df471762abc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-crash-compute-1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 09:27:52 compute-1 ceph-mon[80009]: 3.0 scrub starts
Nov 24 09:27:52 compute-1 ceph-mon[80009]: 3.0 scrub ok
Nov 24 09:27:52 compute-1 ceph-mon[80009]: 2.3 scrub starts
Nov 24 09:27:52 compute-1 ceph-mon[80009]: 2.3 scrub ok
Nov 24 09:27:52 compute-1 ceph-mon[80009]: 4.7 scrub starts
Nov 24 09:27:52 compute-1 ceph-mon[80009]: 4.7 scrub ok
Nov 24 09:27:52 compute-1 ceph-mon[80009]: mgrmap e15: compute-0.mauvni(active, since 1.05886s), standbys: compute-2.rzcnzg, compute-1.qelqsg
Nov 24 09:27:52 compute-1 ceph-mon[80009]: from='mgr.14364 192.168.122.100:0/3495962044' entity='mgr.compute-0.mauvni' 
Nov 24 09:27:52 compute-1 sudo[82111]: pam_unix(sudo:session): session closed for user root
Nov 24 09:27:52 compute-1 sudo[82315]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 09:27:52 compute-1 sudo[82315]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:27:52 compute-1 sudo[82315]: pam_unix(sudo:session): session closed for user root
Nov 24 09:27:52 compute-1 sudo[82340]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Nov 24 09:27:52 compute-1 sudo[82340]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:27:52 compute-1 ceph-osd[77497]: log_channel(cluster) log [DBG] : 2.0 scrub starts
Nov 24 09:27:52 compute-1 ceph-osd[77497]: log_channel(cluster) log [DBG] : 2.0 scrub ok
Nov 24 09:27:52 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e45 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 09:27:52 compute-1 sudo[82340]: pam_unix(sudo:session): session closed for user root
Nov 24 09:27:53 compute-1 sudo[82396]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 09:27:53 compute-1 sudo[82396]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:27:53 compute-1 sudo[82396]: pam_unix(sudo:session): session closed for user root
Nov 24 09:27:53 compute-1 sudo[82421]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 list-networks
Nov 24 09:27:53 compute-1 sudo[82421]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:27:53 compute-1 ceph-mon[80009]: 5.d scrub starts
Nov 24 09:27:53 compute-1 ceph-mon[80009]: 5.d scrub ok
Nov 24 09:27:53 compute-1 ceph-mon[80009]: [24/Nov/2025:09:27:52] ENGINE Bus STARTING
Nov 24 09:27:53 compute-1 ceph-mon[80009]: [24/Nov/2025:09:27:52] ENGINE Serving on http://192.168.122.100:8765
Nov 24 09:27:53 compute-1 ceph-mon[80009]: from='mgr.14364 192.168.122.100:0/3495962044' entity='mgr.compute-0.mauvni' 
Nov 24 09:27:53 compute-1 ceph-mon[80009]: from='mgr.14364 192.168.122.100:0/3495962044' entity='mgr.compute-0.mauvni' 
Nov 24 09:27:53 compute-1 ceph-mon[80009]: from='mgr.14364 192.168.122.100:0/3495962044' entity='mgr.compute-0.mauvni' 
Nov 24 09:27:53 compute-1 ceph-mon[80009]: 2.0 scrub starts
Nov 24 09:27:53 compute-1 ceph-mon[80009]: from='mgr.14364 192.168.122.100:0/3495962044' entity='mgr.compute-0.mauvni' 
Nov 24 09:27:53 compute-1 ceph-mon[80009]: Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Nov 24 09:27:53 compute-1 ceph-mon[80009]: Cluster is now healthy
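POOL_APP_NOT_ENABLED was raised at 09:27:45, while RGW's freshly created pools had no application tag yet, and clears here once the enable commands finish. The same transition is visible in the structured health report (real CLI):

import json, subprocess

# "Health check cleared ... Cluster is now healthy" corresponds to
# status HEALTH_OK and an empty "checks" map in the JSON report.
raw = subprocess.run(["ceph", "health", "detail", "--format", "json"],
                     capture_output=True, text=True, check=True).stdout
health = json.loads(raw)
print(health["status"], list(health.get("checks", {})))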
Nov 24 09:27:53 compute-1 ceph-mon[80009]: 6.f scrub starts
Nov 24 09:27:53 compute-1 ceph-mon[80009]: 6.f scrub ok
Nov 24 09:27:53 compute-1 ceph-mon[80009]: from='mgr.14364 192.168.122.100:0/3495962044' entity='mgr.compute-0.mauvni' 
Nov 24 09:27:53 compute-1 ceph-mon[80009]: from='mgr.14364 192.168.122.100:0/3495962044' entity='mgr.compute-0.mauvni' 
Nov 24 09:27:53 compute-1 ceph-mon[80009]: from='mgr.14364 192.168.122.100:0/3495962044' entity='mgr.compute-0.mauvni' 
Nov 24 09:27:53 compute-1 sudo[82421]: pam_unix(sudo:session): session closed for user root
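These sudo bursts are the active mgr's cephadm module refreshing the host over SSH: locate python3, then run the staged cephadm binary with ls (daemon inventory), gather-facts (hardware and OS facts), and list-networks, each in its own short-lived sudo session. The inventory step can be reproduced by hand (real subcommand, needs root; the JSON field names below are an assumption from typical output):

import json, subprocess

# Same call the mgr issues over SSH; "cephadm ls" prints a JSON array of
# daemons deployed on this host.
raw = subprocess.run(["cephadm", "ls"], capture_output=True, text=True,
                     check=True).stdout
for daemon in json.loads(raw):
    # "name" and "state" are the commonly present fields; adjust to taste.
    print(daemon["name"], daemon.get("state"))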
Nov 24 09:27:53 compute-1 ceph-osd[77497]: log_channel(cluster) log [DBG] : 7.0 scrub starts
Nov 24 09:27:53 compute-1 ceph-osd[77497]: log_channel(cluster) log [DBG] : 7.0 scrub ok
Nov 24 09:27:54 compute-1 ceph-mon[80009]: [24/Nov/2025:09:27:52] ENGINE Serving on https://192.168.122.100:7150
Nov 24 09:27:54 compute-1 ceph-mon[80009]: [24/Nov/2025:09:27:52] ENGINE Bus STARTED
Nov 24 09:27:54 compute-1 ceph-mon[80009]: [24/Nov/2025:09:27:52] ENGINE Client ('192.168.122.100', 44150) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Nov 24 09:27:54 compute-1 ceph-mon[80009]: 2.0 scrub ok
Nov 24 09:27:54 compute-1 ceph-mon[80009]: 5.b scrub starts
Nov 24 09:27:54 compute-1 ceph-mon[80009]: 5.b scrub ok
Nov 24 09:27:54 compute-1 ceph-mon[80009]: pgmap v4: 197 pgs: 197 active+clean; 454 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Nov 24 09:27:54 compute-1 ceph-mon[80009]: from='client.14418 -' entity='client.admin' cmd=[{"prefix": "dashboard set-grafana-api-password", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 09:27:54 compute-1 ceph-mon[80009]: from='mgr.14364 192.168.122.100:0/3495962044' entity='mgr.compute-0.mauvni' 
Nov 24 09:27:54 compute-1 ceph-mon[80009]: from='mgr.14364 192.168.122.100:0/3495962044' entity='mgr.compute-0.mauvni' 
Nov 24 09:27:54 compute-1 ceph-mon[80009]: from='mgr.14364 192.168.122.100:0/3495962044' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Nov 24 09:27:54 compute-1 ceph-mon[80009]: from='mgr.14364 192.168.122.100:0/3495962044' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]': finished
Nov 24 09:27:54 compute-1 ceph-mon[80009]: 7.0 scrub starts
Nov 24 09:27:54 compute-1 ceph-mon[80009]: 7.0 scrub ok
Nov 24 09:27:54 compute-1 ceph-mon[80009]: mgrmap e16: compute-0.mauvni(active, since 2s), standbys: compute-2.rzcnzg, compute-1.qelqsg
Nov 24 09:27:54 compute-1 ceph-mon[80009]: from='mgr.14364 192.168.122.100:0/3495962044' entity='mgr.compute-0.mauvni' 
Nov 24 09:27:54 compute-1 ceph-mon[80009]: from='mgr.14364 192.168.122.100:0/3495962044' entity='mgr.compute-0.mauvni' 
Nov 24 09:27:54 compute-1 ceph-mon[80009]: from='mgr.14364 192.168.122.100:0/3495962044' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 24 09:27:54 compute-1 ceph-mon[80009]: 6.9 scrub starts
Nov 24 09:27:54 compute-1 ceph-mon[80009]: 6.9 scrub ok
Nov 24 09:27:54 compute-1 ceph-mon[80009]: from='client.14430 -' entity='client.admin' cmd=[{"prefix": "dashboard set-alertmanager-api-host", "value": "http://192.168.122.100:9093", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 09:27:54 compute-1 ceph-mon[80009]: from='mgr.14364 192.168.122.100:0/3495962044' entity='mgr.compute-0.mauvni' 
Nov 24 09:27:54 compute-1 ceph-mon[80009]: from='mgr.14364 192.168.122.100:0/3495962044' entity='mgr.compute-0.mauvni' 
Nov 24 09:27:54 compute-1 ceph-mon[80009]: from='mgr.14364 192.168.122.100:0/3495962044' entity='mgr.compute-0.mauvni' 
Nov 24 09:27:54 compute-1 ceph-mon[80009]: from='mgr.14364 192.168.122.100:0/3495962044' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Nov 24 09:27:54 compute-1 ceph-mon[80009]: from='mgr.14364 192.168.122.100:0/3495962044' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 09:27:54 compute-1 ceph-mon[80009]: from='mgr.14364 192.168.122.100:0/3495962044' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 09:27:54 compute-1 ceph-mon[80009]: Updating compute-0:/etc/ceph/ceph.conf
Nov 24 09:27:54 compute-1 ceph-mon[80009]: Updating compute-1:/etc/ceph/ceph.conf
Nov 24 09:27:54 compute-1 ceph-mon[80009]: Updating compute-2:/etc/ceph/ceph.conf
Nov 24 09:27:54 compute-1 sudo[82463]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Nov 24 09:27:54 compute-1 sudo[82463]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:27:54 compute-1 sudo[82463]: pam_unix(sudo:session): session closed for user root
Nov 24 09:27:54 compute-1 sudo[82488]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-84a084c3-61a7-5de7-8207-1f88efa59a64/etc/ceph
Nov 24 09:27:54 compute-1 sudo[82488]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:27:54 compute-1 sudo[82488]: pam_unix(sudo:session): session closed for user root
Nov 24 09:27:54 compute-1 sudo[82513]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-84a084c3-61a7-5de7-8207-1f88efa59a64/etc/ceph/ceph.conf.new
Nov 24 09:27:54 compute-1 sudo[82513]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:27:54 compute-1 sudo[82513]: pam_unix(sudo:session): session closed for user root
Nov 24 09:27:54 compute-1 sudo[82538]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-84a084c3-61a7-5de7-8207-1f88efa59a64
Nov 24 09:27:54 compute-1 sudo[82538]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:27:54 compute-1 sudo[82538]: pam_unix(sudo:session): session closed for user root
Nov 24 09:27:54 compute-1 ceph-osd[77497]: log_channel(cluster) log [DBG] : 2.2 scrub starts
Nov 24 09:27:54 compute-1 ceph-osd[77497]: log_channel(cluster) log [DBG] : 2.2 scrub ok
Nov 24 09:27:54 compute-1 sudo[82563]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-84a084c3-61a7-5de7-8207-1f88efa59a64/etc/ceph/ceph.conf.new
Nov 24 09:27:54 compute-1 sudo[82563]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:27:54 compute-1 sudo[82563]: pam_unix(sudo:session): session closed for user root
Nov 24 09:27:54 compute-1 sudo[82611]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-84a084c3-61a7-5de7-8207-1f88efa59a64/etc/ceph/ceph.conf.new
Nov 24 09:27:54 compute-1 sudo[82611]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:27:54 compute-1 sudo[82611]: pam_unix(sudo:session): session closed for user root
Nov 24 09:27:54 compute-1 sudo[82636]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-84a084c3-61a7-5de7-8207-1f88efa59a64/etc/ceph/ceph.conf.new
Nov 24 09:27:54 compute-1 sudo[82636]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:27:54 compute-1 sudo[82636]: pam_unix(sudo:session): session closed for user root
Nov 24 09:27:54 compute-1 sudo[82661]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-84a084c3-61a7-5de7-8207-1f88efa59a64/etc/ceph/ceph.conf.new /etc/ceph/ceph.conf
Nov 24 09:27:54 compute-1 sudo[82661]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:27:54 compute-1 sudo[82661]: pam_unix(sudo:session): session closed for user root
Nov 24 09:27:54 compute-1 sudo[82686]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/config
Nov 24 09:27:54 compute-1 sudo[82686]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:27:54 compute-1 sudo[82686]: pam_unix(sudo:session): session closed for user root
Nov 24 09:27:54 compute-1 sudo[82711]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-84a084c3-61a7-5de7-8207-1f88efa59a64/var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/config
Nov 24 09:27:54 compute-1 sudo[82711]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:27:54 compute-1 sudo[82711]: pam_unix(sudo:session): session closed for user root
Nov 24 09:27:54 compute-1 sudo[82736]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-84a084c3-61a7-5de7-8207-1f88efa59a64/var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/config/ceph.conf.new
Nov 24 09:27:54 compute-1 sudo[82736]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:27:54 compute-1 sudo[82736]: pam_unix(sudo:session): session closed for user root
Nov 24 09:27:55 compute-1 sudo[82761]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-84a084c3-61a7-5de7-8207-1f88efa59a64
Nov 24 09:27:55 compute-1 sudo[82761]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:27:55 compute-1 sudo[82761]: pam_unix(sudo:session): session closed for user root
Nov 24 09:27:55 compute-1 sudo[82786]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-84a084c3-61a7-5de7-8207-1f88efa59a64/var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/config/ceph.conf.new
Nov 24 09:27:55 compute-1 sudo[82786]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:27:55 compute-1 sudo[82786]: pam_unix(sudo:session): session closed for user root
Nov 24 09:27:55 compute-1 sudo[82834]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-84a084c3-61a7-5de7-8207-1f88efa59a64/var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/config/ceph.conf.new
Nov 24 09:27:55 compute-1 sudo[82834]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:27:55 compute-1 sudo[82834]: pam_unix(sudo:session): session closed for user root
Nov 24 09:27:55 compute-1 sudo[82859]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-84a084c3-61a7-5de7-8207-1f88efa59a64/var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/config/ceph.conf.new
Nov 24 09:27:55 compute-1 sudo[82859]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:27:55 compute-1 sudo[82859]: pam_unix(sudo:session): session closed for user root
Nov 24 09:27:55 compute-1 sudo[82884]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-84a084c3-61a7-5de7-8207-1f88efa59a64/var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/config/ceph.conf.new /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/config/ceph.conf
Nov 24 09:27:55 compute-1 sudo[82884]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:27:55 compute-1 sudo[82884]: pam_unix(sudo:session): session closed for user root
Nov 24 09:27:55 compute-1 ceph-mon[80009]: 5.8 scrub starts
Nov 24 09:27:55 compute-1 ceph-mon[80009]: 5.8 scrub ok
Nov 24 09:27:55 compute-1 ceph-mon[80009]: 2.2 scrub starts
Nov 24 09:27:55 compute-1 ceph-mon[80009]: 4.b deep-scrub starts
Nov 24 09:27:55 compute-1 ceph-mon[80009]: 4.b deep-scrub ok
Nov 24 09:27:55 compute-1 ceph-mon[80009]: pgmap v5: 197 pgs: 197 active+clean; 454 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Nov 24 09:27:55 compute-1 ceph-mon[80009]: from='client.14436 -' entity='client.admin' cmd=[{"prefix": "dashboard set-prometheus-api-host", "value": "http://192.168.122.100:9092", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 09:27:55 compute-1 ceph-mon[80009]: from='mgr.14364 192.168.122.100:0/3495962044' entity='mgr.compute-0.mauvni' 
Nov 24 09:27:55 compute-1 ceph-mon[80009]: Updating compute-1:/var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/config/ceph.conf
Nov 24 09:27:55 compute-1 ceph-mon[80009]: Updating compute-0:/var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/config/ceph.conf
Nov 24 09:27:55 compute-1 ceph-mon[80009]: Updating compute-2:/var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/config/ceph.conf
Nov 24 09:27:55 compute-1 sudo[82909]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Nov 24 09:27:55 compute-1 sudo[82909]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:27:55 compute-1 sudo[82909]: pam_unix(sudo:session): session closed for user root
Nov 24 09:27:55 compute-1 sudo[82934]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-84a084c3-61a7-5de7-8207-1f88efa59a64/etc/ceph
Nov 24 09:27:55 compute-1 sudo[82934]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:27:55 compute-1 sudo[82934]: pam_unix(sudo:session): session closed for user root
Nov 24 09:27:55 compute-1 sudo[82959]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-84a084c3-61a7-5de7-8207-1f88efa59a64/etc/ceph/ceph.client.admin.keyring.new
Nov 24 09:27:55 compute-1 sudo[82959]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:27:55 compute-1 sudo[82959]: pam_unix(sudo:session): session closed for user root
Nov 24 09:27:55 compute-1 ceph-osd[77497]: log_channel(cluster) log [DBG] : 7.7 scrub starts
Nov 24 09:27:55 compute-1 ceph-osd[77497]: log_channel(cluster) log [DBG] : 7.7 scrub ok
Nov 24 09:27:55 compute-1 sudo[82984]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-84a084c3-61a7-5de7-8207-1f88efa59a64
Nov 24 09:27:55 compute-1 sudo[82984]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:27:55 compute-1 sudo[82984]: pam_unix(sudo:session): session closed for user root
Nov 24 09:27:55 compute-1 sudo[83009]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-84a084c3-61a7-5de7-8207-1f88efa59a64/etc/ceph/ceph.client.admin.keyring.new
Nov 24 09:27:55 compute-1 sudo[83009]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:27:55 compute-1 sudo[83009]: pam_unix(sudo:session): session closed for user root
Nov 24 09:27:55 compute-1 sudo[83057]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-84a084c3-61a7-5de7-8207-1f88efa59a64/etc/ceph/ceph.client.admin.keyring.new
Nov 24 09:27:55 compute-1 sudo[83057]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:27:55 compute-1 sudo[83057]: pam_unix(sudo:session): session closed for user root
Nov 24 09:27:55 compute-1 sudo[83082]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-84a084c3-61a7-5de7-8207-1f88efa59a64/etc/ceph/ceph.client.admin.keyring.new
Nov 24 09:27:55 compute-1 sudo[83082]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:27:55 compute-1 sudo[83082]: pam_unix(sudo:session): session closed for user root
Nov 24 09:27:55 compute-1 sudo[83107]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-84a084c3-61a7-5de7-8207-1f88efa59a64/etc/ceph/ceph.client.admin.keyring.new /etc/ceph/ceph.client.admin.keyring
Nov 24 09:27:55 compute-1 sudo[83107]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:27:55 compute-1 sudo[83107]: pam_unix(sudo:session): session closed for user root
Nov 24 09:27:55 compute-1 sudo[83132]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/config
Nov 24 09:27:55 compute-1 sudo[83132]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:27:55 compute-1 sudo[83132]: pam_unix(sudo:session): session closed for user root
Nov 24 09:27:55 compute-1 sudo[83157]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-84a084c3-61a7-5de7-8207-1f88efa59a64/var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/config
Nov 24 09:27:55 compute-1 sudo[83157]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:27:55 compute-1 sudo[83157]: pam_unix(sudo:session): session closed for user root
Nov 24 09:27:55 compute-1 sudo[83182]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-84a084c3-61a7-5de7-8207-1f88efa59a64/var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/config/ceph.client.admin.keyring.new
Nov 24 09:27:55 compute-1 sudo[83182]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:27:55 compute-1 sudo[83182]: pam_unix(sudo:session): session closed for user root
Nov 24 09:27:56 compute-1 sudo[83207]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-84a084c3-61a7-5de7-8207-1f88efa59a64
Nov 24 09:27:56 compute-1 sudo[83207]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:27:56 compute-1 sudo[83207]: pam_unix(sudo:session): session closed for user root
Nov 24 09:27:56 compute-1 sudo[83232]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-84a084c3-61a7-5de7-8207-1f88efa59a64/var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/config/ceph.client.admin.keyring.new
Nov 24 09:27:56 compute-1 sudo[83232]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:27:56 compute-1 sudo[83232]: pam_unix(sudo:session): session closed for user root
Nov 24 09:27:56 compute-1 sudo[83280]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-84a084c3-61a7-5de7-8207-1f88efa59a64/var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/config/ceph.client.admin.keyring.new
Nov 24 09:27:56 compute-1 sudo[83280]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:27:56 compute-1 sudo[83280]: pam_unix(sudo:session): session closed for user root
Nov 24 09:27:56 compute-1 sudo[83305]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-84a084c3-61a7-5de7-8207-1f88efa59a64/var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/config/ceph.client.admin.keyring.new
Nov 24 09:27:56 compute-1 sudo[83305]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:27:56 compute-1 sudo[83305]: pam_unix(sudo:session): session closed for user root
Nov 24 09:27:56 compute-1 sudo[83330]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-84a084c3-61a7-5de7-8207-1f88efa59a64/var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/config/ceph.client.admin.keyring.new /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/config/ceph.client.admin.keyring
Nov 24 09:27:56 compute-1 sudo[83330]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:27:56 compute-1 sudo[83330]: pam_unix(sudo:session): session closed for user root
Nov 24 09:27:56 compute-1 ceph-mon[80009]: 2.2 scrub ok
Nov 24 09:27:56 compute-1 ceph-mon[80009]: 3.e deep-scrub starts
Nov 24 09:27:56 compute-1 ceph-mon[80009]: 3.e deep-scrub ok
Nov 24 09:27:56 compute-1 ceph-mon[80009]: Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Nov 24 09:27:56 compute-1 ceph-mon[80009]: mgrmap e17: compute-0.mauvni(active, since 4s), standbys: compute-2.rzcnzg, compute-1.qelqsg
Nov 24 09:27:56 compute-1 ceph-mon[80009]: Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Nov 24 09:27:56 compute-1 ceph-mon[80009]: 6.b scrub starts
Nov 24 09:27:56 compute-1 ceph-mon[80009]: 6.b scrub ok
Nov 24 09:27:56 compute-1 ceph-mon[80009]: from='client.14442 -' entity='client.admin' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "http://192.168.122.100:3100", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 09:27:56 compute-1 ceph-mon[80009]: Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Nov 24 09:27:56 compute-1 ceph-mon[80009]: from='mgr.14364 192.168.122.100:0/3495962044' entity='mgr.compute-0.mauvni' 
Nov 24 09:27:56 compute-1 ceph-mon[80009]: Updating compute-1:/var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/config/ceph.client.admin.keyring
Nov 24 09:27:56 compute-1 ceph-mon[80009]: Updating compute-0:/var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/config/ceph.client.admin.keyring
Nov 24 09:27:56 compute-1 ceph-mon[80009]: from='mgr.14364 192.168.122.100:0/3495962044' entity='mgr.compute-0.mauvni' 
Nov 24 09:27:56 compute-1 ceph-mon[80009]: from='mgr.14364 192.168.122.100:0/3495962044' entity='mgr.compute-0.mauvni' 
Nov 24 09:27:56 compute-1 ceph-osd[77497]: log_channel(cluster) log [DBG] : 7.1 deep-scrub starts
Nov 24 09:27:56 compute-1 ceph-osd[77497]: log_channel(cluster) log [DBG] : 7.1 deep-scrub ok
Nov 24 09:27:57 compute-1 ceph-mon[80009]: 7.7 scrub starts
Nov 24 09:27:57 compute-1 ceph-mon[80009]: 7.7 scrub ok
Nov 24 09:27:57 compute-1 ceph-mon[80009]: 3.11 scrub starts
Nov 24 09:27:57 compute-1 ceph-mon[80009]: 3.11 scrub ok
Nov 24 09:27:57 compute-1 ceph-mon[80009]: from='mgr.14364 192.168.122.100:0/3495962044' entity='mgr.compute-0.mauvni' 
Nov 24 09:27:57 compute-1 ceph-mon[80009]: from='mgr.14364 192.168.122.100:0/3495962044' entity='mgr.compute-0.mauvni' 
Nov 24 09:27:57 compute-1 ceph-mon[80009]: Updating compute-2:/var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/config/ceph.client.admin.keyring
Nov 24 09:27:57 compute-1 ceph-mon[80009]: from='client.? 192.168.122.100:0/1755702997' entity='client.admin' cmd=[{"prefix": "mgr module disable", "module": "dashboard"}]: dispatch
Nov 24 09:27:57 compute-1 ceph-mon[80009]: 6.14 scrub starts
Nov 24 09:27:57 compute-1 ceph-mon[80009]: 6.14 scrub ok
Nov 24 09:27:57 compute-1 ceph-mon[80009]: pgmap v6: 197 pgs: 197 active+clean; 454 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Nov 24 09:27:57 compute-1 ceph-mon[80009]: from='mgr.14364 192.168.122.100:0/3495962044' entity='mgr.compute-0.mauvni' 
Nov 24 09:27:57 compute-1 ceph-mon[80009]: from='mgr.14364 192.168.122.100:0/3495962044' entity='mgr.compute-0.mauvni' 
Nov 24 09:27:57 compute-1 ceph-mgr[80316]: mgr handle_mgr_map respawning because set of enabled modules changed!
Nov 24 09:27:57 compute-1 ceph-mgr[80316]: mgr respawn  e: '/usr/bin/ceph-mgr'
Nov 24 09:27:57 compute-1 ceph-mgr[80316]: mgr respawn  0: '/usr/bin/ceph-mgr'
Nov 24 09:27:57 compute-1 ceph-mgr[80316]: mgr respawn  1: '-n'
Nov 24 09:27:57 compute-1 ceph-mgr[80316]: mgr respawn  2: 'mgr.compute-1.qelqsg'
Nov 24 09:27:57 compute-1 ceph-mgr[80316]: mgr respawn  3: '-f'
Nov 24 09:27:57 compute-1 ceph-mgr[80316]: mgr respawn  4: '--setuser'
Nov 24 09:27:57 compute-1 ceph-mgr[80316]: mgr respawn  5: 'ceph'
Nov 24 09:27:57 compute-1 ceph-mgr[80316]: mgr respawn  6: '--setgroup'
Nov 24 09:27:57 compute-1 ceph-mgr[80316]: mgr respawn  7: 'ceph'
Nov 24 09:27:57 compute-1 ceph-mgr[80316]: mgr respawn  8: '--default-log-to-file=false'
Nov 24 09:27:57 compute-1 ceph-mgr[80316]: mgr respawn  9: '--default-log-to-journald=true'
Nov 24 09:27:57 compute-1 ceph-mgr[80316]: mgr respawn  10: '--default-log-to-stderr=false'
Nov 24 09:27:57 compute-1 ceph-mgr[80316]: mgr respawn respawning with exe /usr/bin/ceph-mgr
Nov 24 09:27:57 compute-1 ceph-mgr[80316]: mgr respawn  exe_path /proc/self/exe
Nov 24 09:27:57 compute-1 sshd-session[82085]: Connection closed by 192.168.122.100 port 60824
Nov 24 09:27:57 compute-1 sshd-session[82081]: pam_unix(sshd:session): session closed for user ceph-admin
Nov 24 09:27:57 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-1-qelqsg[80312]: ignoring --setuser ceph since I am not root
Nov 24 09:27:57 compute-1 systemd[1]: session-33.scope: Deactivated successfully.
Nov 24 09:27:57 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-1-qelqsg[80312]: ignoring --setgroup ceph since I am not root
Nov 24 09:27:57 compute-1 systemd[1]: session-33.scope: Consumed 4.232s CPU time.
Nov 24 09:27:57 compute-1 systemd-logind[823]: Session 33 logged out. Waiting for processes to exit.
Nov 24 09:27:57 compute-1 systemd-logind[823]: Removed session 33.
Nov 24 09:27:57 compute-1 ceph-mgr[80316]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mgr, pid 2
Nov 24 09:27:57 compute-1 ceph-mgr[80316]: pidfile_write: ignore empty --pid-file
Nov 24 09:27:57 compute-1 ceph-mgr[80316]: mgr[py] Loading python module 'alerts'
Nov 24 09:27:57 compute-1 ceph-osd[77497]: log_channel(cluster) log [DBG] : 7.d scrub starts
Nov 24 09:27:57 compute-1 ceph-osd[77497]: log_channel(cluster) log [DBG] : 7.d scrub ok
Nov 24 09:27:57 compute-1 ceph-mgr[80316]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Nov 24 09:27:57 compute-1 ceph-mgr[80316]: mgr[py] Loading python module 'balancer'
Nov 24 09:27:57 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-1-qelqsg[80312]: 2025-11-24T09:27:57.697+0000 7fe515335140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Nov 24 09:27:57 compute-1 ceph-mgr[80316]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Nov 24 09:27:57 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-1-qelqsg[80312]: 2025-11-24T09:27:57.782+0000 7fe515335140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Nov 24 09:27:57 compute-1 ceph-mgr[80316]: mgr[py] Loading python module 'cephadm'
Nov 24 09:27:57 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e45 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 09:27:58 compute-1 ceph-mon[80009]: 7.1 deep-scrub starts
Nov 24 09:27:58 compute-1 ceph-mon[80009]: 7.1 deep-scrub ok
Nov 24 09:27:58 compute-1 ceph-mon[80009]: 5.12 scrub starts
Nov 24 09:27:58 compute-1 ceph-mon[80009]: 5.12 scrub ok
Nov 24 09:27:58 compute-1 ceph-mon[80009]: from='mgr.14364 192.168.122.100:0/3495962044' entity='mgr.compute-0.mauvni' 
Nov 24 09:27:58 compute-1 ceph-mon[80009]: from='client.? 192.168.122.100:0/1755702997' entity='client.admin' cmd='[{"prefix": "mgr module disable", "module": "dashboard"}]': finished
Nov 24 09:27:58 compute-1 ceph-mon[80009]: mgrmap e18: compute-0.mauvni(active, since 6s), standbys: compute-2.rzcnzg, compute-1.qelqsg
Nov 24 09:27:58 compute-1 ceph-mon[80009]: 6.16 scrub starts
Nov 24 09:27:58 compute-1 ceph-mon[80009]: 6.16 scrub ok
Nov 24 09:27:58 compute-1 ceph-mgr[80316]: mgr[py] Loading python module 'crash'
Nov 24 09:27:58 compute-1 ceph-mgr[80316]: mgr[py] Module crash has missing NOTIFY_TYPES member
Nov 24 09:27:58 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-1-qelqsg[80312]: 2025-11-24T09:27:58.575+0000 7fe515335140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Nov 24 09:27:58 compute-1 ceph-mgr[80316]: mgr[py] Loading python module 'dashboard'
Nov 24 09:27:58 compute-1 ceph-osd[77497]: log_channel(cluster) log [DBG] : 7.c scrub starts
Nov 24 09:27:58 compute-1 ceph-osd[77497]: log_channel(cluster) log [DBG] : 7.c scrub ok
Nov 24 09:27:59 compute-1 ceph-mgr[80316]: mgr[py] Loading python module 'devicehealth'
Nov 24 09:27:59 compute-1 ceph-mgr[80316]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Nov 24 09:27:59 compute-1 ceph-mgr[80316]: mgr[py] Loading python module 'diskprediction_local'
Nov 24 09:27:59 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-1-qelqsg[80312]: 2025-11-24T09:27:59.210+0000 7fe515335140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Nov 24 09:27:59 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-1-qelqsg[80312]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Nov 24 09:27:59 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-1-qelqsg[80312]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Nov 24 09:27:59 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-1-qelqsg[80312]:   from numpy import show_config as show_numpy_config
Nov 24 09:27:59 compute-1 ceph-mgr[80316]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Nov 24 09:27:59 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-1-qelqsg[80312]: 2025-11-24T09:27:59.372+0000 7fe515335140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Nov 24 09:27:59 compute-1 ceph-mgr[80316]: mgr[py] Loading python module 'influx'
Nov 24 09:27:59 compute-1 ceph-mon[80009]: 7.d scrub starts
Nov 24 09:27:59 compute-1 ceph-mon[80009]: 7.d scrub ok
Nov 24 09:27:59 compute-1 ceph-mon[80009]: 5.13 deep-scrub starts
Nov 24 09:27:59 compute-1 ceph-mon[80009]: 5.13 deep-scrub ok
Nov 24 09:27:59 compute-1 ceph-mon[80009]: from='client.? 192.168.122.100:0/4224251278' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "dashboard"}]: dispatch
Nov 24 09:27:59 compute-1 ceph-mon[80009]: 6.11 scrub starts
Nov 24 09:27:59 compute-1 ceph-mon[80009]: 6.11 scrub ok
Nov 24 09:27:59 compute-1 ceph-mgr[80316]: mgr[py] Module influx has missing NOTIFY_TYPES member
Nov 24 09:27:59 compute-1 ceph-mgr[80316]: mgr[py] Loading python module 'insights'
Nov 24 09:27:59 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-1-qelqsg[80312]: 2025-11-24T09:27:59.449+0000 7fe515335140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Nov 24 09:27:59 compute-1 ceph-mgr[80316]: mgr[py] Loading python module 'iostat'
Nov 24 09:27:59 compute-1 ceph-mgr[80316]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Nov 24 09:27:59 compute-1 ceph-mgr[80316]: mgr[py] Loading python module 'k8sevents'
Nov 24 09:27:59 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-1-qelqsg[80312]: 2025-11-24T09:27:59.591+0000 7fe515335140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Nov 24 09:27:59 compute-1 ceph-osd[77497]: log_channel(cluster) log [DBG] : 7.19 scrub starts
Nov 24 09:27:59 compute-1 ceph-osd[77497]: log_channel(cluster) log [DBG] : 7.19 scrub ok
Nov 24 09:27:59 compute-1 ceph-mgr[80316]: mgr[py] Loading python module 'localpool'
Nov 24 09:28:00 compute-1 ceph-mgr[80316]: mgr[py] Loading python module 'mds_autoscaler'
Nov 24 09:28:00 compute-1 ceph-mgr[80316]: mgr[py] Loading python module 'mirroring'
Nov 24 09:28:00 compute-1 ceph-mgr[80316]: mgr[py] Loading python module 'nfs'
Nov 24 09:28:00 compute-1 ceph-mon[80009]: 7.c scrub starts
Nov 24 09:28:00 compute-1 ceph-mon[80009]: 7.c scrub ok
Nov 24 09:28:00 compute-1 ceph-mon[80009]: 3.15 scrub starts
Nov 24 09:28:00 compute-1 ceph-mon[80009]: 3.15 scrub ok
Nov 24 09:28:00 compute-1 ceph-mon[80009]: from='client.? 192.168.122.100:0/4224251278' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "dashboard"}]': finished
Nov 24 09:28:00 compute-1 ceph-mon[80009]: mgrmap e19: compute-0.mauvni(active, since 8s), standbys: compute-2.rzcnzg, compute-1.qelqsg
Nov 24 09:28:00 compute-1 ceph-mon[80009]: 6.10 deep-scrub starts
Nov 24 09:28:00 compute-1 ceph-mon[80009]: 6.10 deep-scrub ok
Nov 24 09:28:00 compute-1 ceph-mgr[80316]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Nov 24 09:28:00 compute-1 ceph-mgr[80316]: mgr[py] Loading python module 'orchestrator'
Nov 24 09:28:00 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-1-qelqsg[80312]: 2025-11-24T09:28:00.602+0000 7fe515335140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Nov 24 09:28:00 compute-1 ceph-osd[77497]: log_channel(cluster) log [DBG] : 7.1a scrub starts
Nov 24 09:28:00 compute-1 ceph-osd[77497]: log_channel(cluster) log [DBG] : 7.1a scrub ok
Nov 24 09:28:00 compute-1 ceph-mgr[80316]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Nov 24 09:28:00 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-1-qelqsg[80312]: 2025-11-24T09:28:00.833+0000 7fe515335140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Nov 24 09:28:00 compute-1 ceph-mgr[80316]: mgr[py] Loading python module 'osd_perf_query'
Nov 24 09:28:00 compute-1 ceph-mgr[80316]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Nov 24 09:28:00 compute-1 ceph-mgr[80316]: mgr[py] Loading python module 'osd_support'
Nov 24 09:28:00 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-1-qelqsg[80312]: 2025-11-24T09:28:00.908+0000 7fe515335140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Nov 24 09:28:00 compute-1 ceph-mgr[80316]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Nov 24 09:28:00 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-1-qelqsg[80312]: 2025-11-24T09:28:00.983+0000 7fe515335140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Nov 24 09:28:00 compute-1 ceph-mgr[80316]: mgr[py] Loading python module 'pg_autoscaler'
Nov 24 09:28:01 compute-1 ceph-mgr[80316]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Nov 24 09:28:01 compute-1 ceph-mgr[80316]: mgr[py] Loading python module 'progress'
Nov 24 09:28:01 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-1-qelqsg[80312]: 2025-11-24T09:28:01.066+0000 7fe515335140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Nov 24 09:28:01 compute-1 ceph-mgr[80316]: mgr[py] Module progress has missing NOTIFY_TYPES member
Nov 24 09:28:01 compute-1 ceph-mgr[80316]: mgr[py] Loading python module 'prometheus'
Nov 24 09:28:01 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-1-qelqsg[80312]: 2025-11-24T09:28:01.142+0000 7fe515335140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Nov 24 09:28:01 compute-1 ceph-mon[80009]: 2.13 scrub starts
Nov 24 09:28:01 compute-1 ceph-mon[80009]: 2.13 scrub ok
Nov 24 09:28:01 compute-1 ceph-mon[80009]: 7.19 scrub starts
Nov 24 09:28:01 compute-1 ceph-mon[80009]: 7.19 scrub ok
Nov 24 09:28:01 compute-1 ceph-mon[80009]: 7.11 scrub starts
Nov 24 09:28:01 compute-1 ceph-mon[80009]: 7.11 scrub ok
Nov 24 09:28:01 compute-1 ceph-mon[80009]: 6.13 scrub starts
Nov 24 09:28:01 compute-1 ceph-mon[80009]: 6.13 scrub ok
Nov 24 09:28:01 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-1-qelqsg[80312]: 2025-11-24T09:28:01.497+0000 7fe515335140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Nov 24 09:28:01 compute-1 ceph-mgr[80316]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Nov 24 09:28:01 compute-1 ceph-mgr[80316]: mgr[py] Loading python module 'rbd_support'
Nov 24 09:28:01 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-1-qelqsg[80312]: 2025-11-24T09:28:01.589+0000 7fe515335140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Nov 24 09:28:01 compute-1 ceph-mgr[80316]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Nov 24 09:28:01 compute-1 ceph-mgr[80316]: mgr[py] Loading python module 'restful'
Nov 24 09:28:01 compute-1 ceph-osd[77497]: log_channel(cluster) log [DBG] : 5.1f scrub starts
Nov 24 09:28:01 compute-1 ceph-osd[77497]: log_channel(cluster) log [DBG] : 5.1f scrub ok
Nov 24 09:28:01 compute-1 ceph-mgr[80316]: mgr[py] Loading python module 'rgw'
Nov 24 09:28:02 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-1-qelqsg[80312]: 2025-11-24T09:28:02.014+0000 7fe515335140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Nov 24 09:28:02 compute-1 ceph-mgr[80316]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Nov 24 09:28:02 compute-1 ceph-mgr[80316]: mgr[py] Loading python module 'rook'
Nov 24 09:28:02 compute-1 ceph-mon[80009]: 7.1a scrub starts
Nov 24 09:28:02 compute-1 ceph-mon[80009]: 7.1a scrub ok
Nov 24 09:28:02 compute-1 ceph-mon[80009]: 7.16 deep-scrub starts
Nov 24 09:28:02 compute-1 ceph-mon[80009]: 7.16 deep-scrub ok
Nov 24 09:28:02 compute-1 ceph-mon[80009]: 6.1d scrub starts
Nov 24 09:28:02 compute-1 ceph-mon[80009]: 6.1d scrub ok
Nov 24 09:28:02 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-1-qelqsg[80312]: 2025-11-24T09:28:02.589+0000 7fe515335140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Nov 24 09:28:02 compute-1 ceph-mgr[80316]: mgr[py] Module rook has missing NOTIFY_TYPES member
Nov 24 09:28:02 compute-1 ceph-mgr[80316]: mgr[py] Loading python module 'selftest'
Nov 24 09:28:02 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-1-qelqsg[80312]: 2025-11-24T09:28:02.660+0000 7fe515335140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Nov 24 09:28:02 compute-1 ceph-mgr[80316]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Nov 24 09:28:02 compute-1 ceph-mgr[80316]: mgr[py] Loading python module 'snap_schedule'
Nov 24 09:28:02 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-1-qelqsg[80312]: 2025-11-24T09:28:02.739+0000 7fe515335140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Nov 24 09:28:02 compute-1 ceph-mgr[80316]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Nov 24 09:28:02 compute-1 ceph-mgr[80316]: mgr[py] Loading python module 'stats'
Nov 24 09:28:02 compute-1 ceph-osd[77497]: log_channel(cluster) log [DBG] : 3.14 scrub starts
Nov 24 09:28:02 compute-1 ceph-osd[77497]: log_channel(cluster) log [DBG] : 3.14 scrub ok
Nov 24 09:28:02 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e45 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 09:28:02 compute-1 ceph-mgr[80316]: mgr[py] Loading python module 'status'
Nov 24 09:28:02 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-1-qelqsg[80312]: 2025-11-24T09:28:02.888+0000 7fe515335140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Nov 24 09:28:02 compute-1 ceph-mgr[80316]: mgr[py] Module status has missing NOTIFY_TYPES member
Nov 24 09:28:02 compute-1 ceph-mgr[80316]: mgr[py] Loading python module 'telegraf'
Nov 24 09:28:02 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-1-qelqsg[80312]: 2025-11-24T09:28:02.955+0000 7fe515335140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Nov 24 09:28:02 compute-1 ceph-mgr[80316]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Nov 24 09:28:02 compute-1 ceph-mgr[80316]: mgr[py] Loading python module 'telemetry'
Nov 24 09:28:03 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-1-qelqsg[80312]: 2025-11-24T09:28:03.108+0000 7fe515335140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Nov 24 09:28:03 compute-1 ceph-mgr[80316]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Nov 24 09:28:03 compute-1 ceph-mgr[80316]: mgr[py] Loading python module 'test_orchestrator'
Nov 24 09:28:03 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-1-qelqsg[80312]: 2025-11-24T09:28:03.335+0000 7fe515335140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Nov 24 09:28:03 compute-1 ceph-mgr[80316]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Nov 24 09:28:03 compute-1 ceph-mgr[80316]: mgr[py] Loading python module 'volumes'
Nov 24 09:28:03 compute-1 ceph-mon[80009]: 5.1f scrub starts
Nov 24 09:28:03 compute-1 ceph-mon[80009]: 5.1f scrub ok
Nov 24 09:28:03 compute-1 ceph-mon[80009]: 2.10 scrub starts
Nov 24 09:28:03 compute-1 ceph-mon[80009]: 2.10 scrub ok
Nov 24 09:28:03 compute-1 ceph-mon[80009]: 7.1b scrub starts
Nov 24 09:28:03 compute-1 ceph-mon[80009]: 7.1b scrub ok
Nov 24 09:28:03 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-1-qelqsg[80312]: 2025-11-24T09:28:03.612+0000 7fe515335140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Nov 24 09:28:03 compute-1 ceph-mgr[80316]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Nov 24 09:28:03 compute-1 ceph-mgr[80316]: mgr[py] Loading python module 'zabbix'
Nov 24 09:28:03 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-1-qelqsg[80312]: 2025-11-24T09:28:03.684+0000 7fe515335140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Nov 24 09:28:03 compute-1 ceph-mgr[80316]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Nov 24 09:28:03 compute-1 ceph-mgr[80316]: ms_deliver_dispatch: unhandled message 0x56539b5e3860 mon_map magic: 0 from mon.2 v2:192.168.122.101:3300/0
Nov 24 09:28:03 compute-1 ceph-mgr[80316]: mgr handle_mgr_map respawning because set of enabled modules changed!
Nov 24 09:28:03 compute-1 ceph-mgr[80316]: mgr respawn  e: '/usr/bin/ceph-mgr'
Nov 24 09:28:03 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-1-qelqsg[80312]: ignoring --setuser ceph since I am not root
Nov 24 09:28:03 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-1-qelqsg[80312]: ignoring --setgroup ceph since I am not root
Nov 24 09:28:03 compute-1 ceph-mgr[80316]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mgr, pid 2
Nov 24 09:28:03 compute-1 ceph-mgr[80316]: pidfile_write: ignore empty --pid-file
Nov 24 09:28:03 compute-1 ceph-osd[77497]: log_channel(cluster) log [DBG] : 5.10 deep-scrub starts
Nov 24 09:28:03 compute-1 ceph-osd[77497]: log_channel(cluster) log [DBG] : 5.10 deep-scrub ok
Nov 24 09:28:03 compute-1 ceph-mgr[80316]: mgr[py] Loading python module 'alerts'
Nov 24 09:28:03 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e46 e46: 3 total, 3 up, 3 in
Nov 24 09:28:03 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-1-qelqsg[80312]: 2025-11-24T09:28:03.903+0000 7f9a8ed44140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Nov 24 09:28:03 compute-1 ceph-mgr[80316]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Nov 24 09:28:03 compute-1 ceph-mgr[80316]: mgr[py] Loading python module 'balancer'
Nov 24 09:28:03 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-1-qelqsg[80312]: 2025-11-24T09:28:03.987+0000 7f9a8ed44140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Nov 24 09:28:03 compute-1 ceph-mgr[80316]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Nov 24 09:28:03 compute-1 ceph-mgr[80316]: mgr[py] Loading python module 'cephadm'
Nov 24 09:28:04 compute-1 ceph-mon[80009]: 3.14 scrub starts
Nov 24 09:28:04 compute-1 ceph-mon[80009]: 3.14 scrub ok
Nov 24 09:28:04 compute-1 ceph-mon[80009]: 7.14 scrub starts
Nov 24 09:28:04 compute-1 ceph-mon[80009]: 7.14 scrub ok
Nov 24 09:28:04 compute-1 ceph-mon[80009]: Standby manager daemon compute-2.rzcnzg restarted
Nov 24 09:28:04 compute-1 ceph-mon[80009]: Standby manager daemon compute-2.rzcnzg started
Nov 24 09:28:04 compute-1 ceph-mon[80009]: Standby manager daemon compute-1.qelqsg restarted
Nov 24 09:28:04 compute-1 ceph-mon[80009]: Standby manager daemon compute-1.qelqsg started
Nov 24 09:28:04 compute-1 ceph-mon[80009]: 3.1f deep-scrub starts
Nov 24 09:28:04 compute-1 ceph-mon[80009]: 3.1f deep-scrub ok
Nov 24 09:28:04 compute-1 ceph-mon[80009]: Active manager daemon compute-0.mauvni restarted
Nov 24 09:28:04 compute-1 ceph-mon[80009]: Activating manager daemon compute-0.mauvni
Nov 24 09:28:04 compute-1 ceph-mon[80009]: osdmap e46: 3 total, 3 up, 3 in
Nov 24 09:28:04 compute-1 ceph-mon[80009]: mgrmap e20: compute-0.mauvni(active, starting, since 0.0304016s), standbys: compute-2.rzcnzg, compute-1.qelqsg
Nov 24 09:28:04 compute-1 ceph-mgr[80316]: mgr[py] Loading python module 'crash'
Nov 24 09:28:04 compute-1 ceph-osd[77497]: log_channel(cluster) log [DBG] : 4.13 scrub starts
Nov 24 09:28:04 compute-1 ceph-osd[77497]: log_channel(cluster) log [DBG] : 4.13 scrub ok
Nov 24 09:28:04 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-1-qelqsg[80312]: 2025-11-24T09:28:04.848+0000 7f9a8ed44140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Nov 24 09:28:04 compute-1 ceph-mgr[80316]: mgr[py] Module crash has missing NOTIFY_TYPES member
Nov 24 09:28:04 compute-1 ceph-mgr[80316]: mgr[py] Loading python module 'dashboard'
Nov 24 09:28:05 compute-1 ceph-mgr[80316]: mgr[py] Loading python module 'devicehealth'
Nov 24 09:28:05 compute-1 ceph-mon[80009]: 5.10 deep-scrub starts
Nov 24 09:28:05 compute-1 ceph-mon[80009]: 5.10 deep-scrub ok
Nov 24 09:28:05 compute-1 ceph-mon[80009]: 2.c scrub starts
Nov 24 09:28:05 compute-1 ceph-mon[80009]: 2.c scrub ok
Nov 24 09:28:05 compute-1 ceph-mon[80009]: 3.1e scrub starts
Nov 24 09:28:05 compute-1 ceph-mon[80009]: 3.1e scrub ok
Nov 24 09:28:05 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-1-qelqsg[80312]: 2025-11-24T09:28:05.489+0000 7f9a8ed44140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Nov 24 09:28:05 compute-1 ceph-mgr[80316]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Nov 24 09:28:05 compute-1 ceph-mgr[80316]: mgr[py] Loading python module 'diskprediction_local'
Nov 24 09:28:05 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-1-qelqsg[80312]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Nov 24 09:28:05 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-1-qelqsg[80312]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Nov 24 09:28:05 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-1-qelqsg[80312]:   from numpy import show_config as show_numpy_config
Nov 24 09:28:05 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-1-qelqsg[80312]: 2025-11-24T09:28:05.658+0000 7f9a8ed44140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Nov 24 09:28:05 compute-1 ceph-mgr[80316]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Nov 24 09:28:05 compute-1 ceph-mgr[80316]: mgr[py] Loading python module 'influx'
Nov 24 09:28:05 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-1-qelqsg[80312]: 2025-11-24T09:28:05.733+0000 7f9a8ed44140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Nov 24 09:28:05 compute-1 ceph-mgr[80316]: mgr[py] Module influx has missing NOTIFY_TYPES member
Nov 24 09:28:05 compute-1 ceph-mgr[80316]: mgr[py] Loading python module 'insights'
Nov 24 09:28:05 compute-1 ceph-osd[77497]: log_channel(cluster) log [DBG] : 5.15 scrub starts
Nov 24 09:28:05 compute-1 ceph-osd[77497]: log_channel(cluster) log [DBG] : 5.15 scrub ok
Nov 24 09:28:05 compute-1 ceph-mgr[80316]: mgr[py] Loading python module 'iostat'
Nov 24 09:28:05 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-1-qelqsg[80312]: 2025-11-24T09:28:05.872+0000 7f9a8ed44140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Nov 24 09:28:05 compute-1 ceph-mgr[80316]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Nov 24 09:28:05 compute-1 ceph-mgr[80316]: mgr[py] Loading python module 'k8sevents'
Nov 24 09:28:06 compute-1 ceph-mgr[80316]: mgr[py] Loading python module 'localpool'
Nov 24 09:28:06 compute-1 ceph-mgr[80316]: mgr[py] Loading python module 'mds_autoscaler'
Nov 24 09:28:06 compute-1 ceph-mon[80009]: 4.13 scrub starts
Nov 24 09:28:06 compute-1 ceph-mon[80009]: 4.13 scrub ok
Nov 24 09:28:06 compute-1 ceph-mon[80009]: 2.f deep-scrub starts
Nov 24 09:28:06 compute-1 ceph-mon[80009]: 2.f deep-scrub ok
Nov 24 09:28:06 compute-1 ceph-mon[80009]: 2.19 scrub starts
Nov 24 09:28:06 compute-1 ceph-mon[80009]: 2.19 scrub ok
Nov 24 09:28:06 compute-1 ceph-mgr[80316]: mgr[py] Loading python module 'mirroring'
Nov 24 09:28:06 compute-1 ceph-mgr[80316]: mgr[py] Loading python module 'nfs'
Nov 24 09:28:06 compute-1 ceph-osd[77497]: log_channel(cluster) log [DBG] : 5.11 scrub starts
Nov 24 09:28:06 compute-1 ceph-osd[77497]: log_channel(cluster) log [DBG] : 5.11 scrub ok
Nov 24 09:28:06 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-1-qelqsg[80312]: 2025-11-24T09:28:06.865+0000 7f9a8ed44140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Nov 24 09:28:06 compute-1 ceph-mgr[80316]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Nov 24 09:28:06 compute-1 ceph-mgr[80316]: mgr[py] Loading python module 'orchestrator'
Nov 24 09:28:07 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-1-qelqsg[80312]: 2025-11-24T09:28:07.076+0000 7f9a8ed44140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Nov 24 09:28:07 compute-1 ceph-mgr[80316]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Nov 24 09:28:07 compute-1 ceph-mgr[80316]: mgr[py] Loading python module 'osd_perf_query'
Nov 24 09:28:07 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-1-qelqsg[80312]: 2025-11-24T09:28:07.155+0000 7f9a8ed44140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Nov 24 09:28:07 compute-1 ceph-mgr[80316]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Nov 24 09:28:07 compute-1 ceph-mgr[80316]: mgr[py] Loading python module 'osd_support'
Nov 24 09:28:07 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-1-qelqsg[80312]: 2025-11-24T09:28:07.218+0000 7f9a8ed44140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Nov 24 09:28:07 compute-1 ceph-mgr[80316]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Nov 24 09:28:07 compute-1 ceph-mgr[80316]: mgr[py] Loading python module 'pg_autoscaler'
Nov 24 09:28:07 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-1-qelqsg[80312]: 2025-11-24T09:28:07.293+0000 7f9a8ed44140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Nov 24 09:28:07 compute-1 ceph-mgr[80316]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Nov 24 09:28:07 compute-1 ceph-mgr[80316]: mgr[py] Loading python module 'progress'
Nov 24 09:28:07 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-1-qelqsg[80312]: 2025-11-24T09:28:07.363+0000 7f9a8ed44140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Nov 24 09:28:07 compute-1 ceph-mgr[80316]: mgr[py] Module progress has missing NOTIFY_TYPES member
Nov 24 09:28:07 compute-1 ceph-mgr[80316]: mgr[py] Loading python module 'prometheus'
Nov 24 09:28:07 compute-1 ceph-mon[80009]: 5.15 scrub starts
Nov 24 09:28:07 compute-1 ceph-mon[80009]: 5.15 scrub ok
Nov 24 09:28:07 compute-1 ceph-mon[80009]: 2.15 scrub starts
Nov 24 09:28:07 compute-1 ceph-mon[80009]: 2.15 scrub ok
Nov 24 09:28:07 compute-1 ceph-mon[80009]: 5.1d scrub starts
Nov 24 09:28:07 compute-1 ceph-mon[80009]: 5.1d scrub ok
Nov 24 09:28:07 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-1-qelqsg[80312]: 2025-11-24T09:28:07.716+0000 7f9a8ed44140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Nov 24 09:28:07 compute-1 ceph-mgr[80316]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Nov 24 09:28:07 compute-1 ceph-mgr[80316]: mgr[py] Loading python module 'rbd_support'
Nov 24 09:28:07 compute-1 systemd[1]: Stopping User Manager for UID 42477...
Nov 24 09:28:07 compute-1 systemd[72465]: Activating special unit Exit the Session...
Nov 24 09:28:07 compute-1 systemd[72465]: Stopped target Main User Target.
Nov 24 09:28:07 compute-1 systemd[72465]: Stopped target Basic System.
Nov 24 09:28:07 compute-1 systemd[72465]: Stopped target Paths.
Nov 24 09:28:07 compute-1 systemd[72465]: Stopped target Sockets.
Nov 24 09:28:07 compute-1 systemd[72465]: Stopped target Timers.
Nov 24 09:28:07 compute-1 systemd[72465]: Stopped Mark boot as successful after the user session has run 2 minutes.
Nov 24 09:28:07 compute-1 systemd[72465]: Stopped Daily Cleanup of User's Temporary Directories.
Nov 24 09:28:07 compute-1 systemd[72465]: Closed D-Bus User Message Bus Socket.
Nov 24 09:28:07 compute-1 systemd[72465]: Stopped Create User's Volatile Files and Directories.
Nov 24 09:28:07 compute-1 systemd[72465]: Removed slice User Application Slice.
Nov 24 09:28:07 compute-1 systemd[72465]: Reached target Shutdown.
Nov 24 09:28:07 compute-1 systemd[72465]: Finished Exit the Session.
Nov 24 09:28:07 compute-1 systemd[72465]: Reached target Exit the Session.
Nov 24 09:28:07 compute-1 systemd[1]: user@42477.service: Deactivated successfully.
Nov 24 09:28:07 compute-1 systemd[1]: Stopped User Manager for UID 42477.
Nov 24 09:28:07 compute-1 systemd[1]: Stopping User Runtime Directory /run/user/42477...
Nov 24 09:28:07 compute-1 systemd[1]: run-user-42477.mount: Deactivated successfully.
Nov 24 09:28:07 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e46 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 09:28:07 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-1-qelqsg[80312]: 2025-11-24T09:28:07.819+0000 7f9a8ed44140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Nov 24 09:28:07 compute-1 ceph-mgr[80316]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Nov 24 09:28:07 compute-1 ceph-mgr[80316]: mgr[py] Loading python module 'restful'
Nov 24 09:28:07 compute-1 ceph-osd[77497]: log_channel(cluster) log [DBG] : 3.10 scrub starts
Nov 24 09:28:07 compute-1 systemd[1]: user-runtime-dir@42477.service: Deactivated successfully.
Nov 24 09:28:07 compute-1 systemd[1]: Stopped User Runtime Directory /run/user/42477.
Nov 24 09:28:07 compute-1 systemd[1]: Removed slice User Slice of UID 42477.
Nov 24 09:28:07 compute-1 systemd[1]: user-42477.slice: Consumed 1min 5.149s CPU time.
Nov 24 09:28:07 compute-1 ceph-osd[77497]: log_channel(cluster) log [DBG] : 3.10 scrub ok
Nov 24 09:28:08 compute-1 ceph-mgr[80316]: mgr[py] Loading python module 'rgw'
Nov 24 09:28:08 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-1-qelqsg[80312]: 2025-11-24T09:28:08.264+0000 7f9a8ed44140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Nov 24 09:28:08 compute-1 ceph-mgr[80316]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Nov 24 09:28:08 compute-1 ceph-mgr[80316]: mgr[py] Loading python module 'rook'
Nov 24 09:28:08 compute-1 ceph-mon[80009]: 5.11 scrub starts
Nov 24 09:28:08 compute-1 ceph-mon[80009]: 5.11 scrub ok
Nov 24 09:28:08 compute-1 ceph-mon[80009]: 7.a scrub starts
Nov 24 09:28:08 compute-1 ceph-mon[80009]: 7.a scrub ok
Nov 24 09:28:08 compute-1 ceph-mon[80009]: 7.1e deep-scrub starts
Nov 24 09:28:08 compute-1 ceph-mon[80009]: 7.1e deep-scrub ok
Nov 24 09:28:08 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-1-qelqsg[80312]: 2025-11-24T09:28:08.824+0000 7f9a8ed44140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Nov 24 09:28:08 compute-1 ceph-mgr[80316]: mgr[py] Module rook has missing NOTIFY_TYPES member
Nov 24 09:28:08 compute-1 ceph-mgr[80316]: mgr[py] Loading python module 'selftest'
Nov 24 09:28:08 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-1-qelqsg[80312]: 2025-11-24T09:28:08.897+0000 7f9a8ed44140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Nov 24 09:28:08 compute-1 ceph-mgr[80316]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Nov 24 09:28:08 compute-1 ceph-mgr[80316]: mgr[py] Loading python module 'snap_schedule'
Nov 24 09:28:08 compute-1 ceph-osd[77497]: log_channel(cluster) log [DBG] : 6.15 scrub starts
Nov 24 09:28:08 compute-1 ceph-osd[77497]: log_channel(cluster) log [DBG] : 6.15 scrub ok
Nov 24 09:28:08 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-1-qelqsg[80312]: 2025-11-24T09:28:08.988+0000 7f9a8ed44140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Nov 24 09:28:08 compute-1 ceph-mgr[80316]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Nov 24 09:28:08 compute-1 ceph-mgr[80316]: mgr[py] Loading python module 'stats'
Nov 24 09:28:09 compute-1 ceph-mgr[80316]: mgr[py] Loading python module 'status'
Nov 24 09:28:09 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-1-qelqsg[80312]: 2025-11-24T09:28:09.129+0000 7f9a8ed44140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Nov 24 09:28:09 compute-1 ceph-mgr[80316]: mgr[py] Module status has missing NOTIFY_TYPES member
Nov 24 09:28:09 compute-1 ceph-mgr[80316]: mgr[py] Loading python module 'telegraf'
Nov 24 09:28:09 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-1-qelqsg[80312]: 2025-11-24T09:28:09.201+0000 7f9a8ed44140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Nov 24 09:28:09 compute-1 ceph-mgr[80316]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Nov 24 09:28:09 compute-1 ceph-mgr[80316]: mgr[py] Loading python module 'telemetry'
Nov 24 09:28:09 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-1-qelqsg[80312]: 2025-11-24T09:28:09.356+0000 7f9a8ed44140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Nov 24 09:28:09 compute-1 ceph-mgr[80316]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Nov 24 09:28:09 compute-1 ceph-mgr[80316]: mgr[py] Loading python module 'test_orchestrator'
Nov 24 09:28:09 compute-1 ceph-mon[80009]: 3.10 scrub starts
Nov 24 09:28:09 compute-1 ceph-mon[80009]: 3.10 scrub ok
Nov 24 09:28:09 compute-1 ceph-mon[80009]: 2.1b scrub starts
Nov 24 09:28:09 compute-1 ceph-mon[80009]: 2.1b scrub ok
Nov 24 09:28:09 compute-1 ceph-mon[80009]: 5.5 deep-scrub starts
Nov 24 09:28:09 compute-1 ceph-mon[80009]: 5.5 deep-scrub ok
Nov 24 09:28:09 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-1-qelqsg[80312]: 2025-11-24T09:28:09.578+0000 7f9a8ed44140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Nov 24 09:28:09 compute-1 ceph-mgr[80316]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Nov 24 09:28:09 compute-1 ceph-mgr[80316]: mgr[py] Loading python module 'volumes'
Nov 24 09:28:09 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-1-qelqsg[80312]: 2025-11-24T09:28:09.835+0000 7f9a8ed44140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Nov 24 09:28:09 compute-1 ceph-mgr[80316]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Nov 24 09:28:09 compute-1 ceph-mgr[80316]: mgr[py] Loading python module 'zabbix'
Nov 24 09:28:09 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-1-qelqsg[80312]: 2025-11-24T09:28:09.905+0000 7f9a8ed44140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Nov 24 09:28:09 compute-1 ceph-mgr[80316]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Nov 24 09:28:09 compute-1 ceph-mgr[80316]: [dashboard DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 24 09:28:09 compute-1 ceph-mgr[80316]: mgr load Constructed class from module: dashboard
Nov 24 09:28:09 compute-1 ceph-mgr[80316]: [dashboard INFO root] server: ssl=no host=192.168.122.101 port=8443
Nov 24 09:28:09 compute-1 ceph-mgr[80316]: [dashboard INFO root] Configured CherryPy, starting engine...
Nov 24 09:28:09 compute-1 ceph-mgr[80316]: [dashboard INFO root] Starting engine...
Nov 24 09:28:09 compute-1 ceph-mgr[80316]: ms_deliver_dispatch: unhandled message 0x55d9f1a15860 mon_map magic: 0 from mon.2 v2:192.168.122.101:3300/0
Nov 24 09:28:09 compute-1 ceph-osd[77497]: log_channel(cluster) log [DBG] : 5.9 scrub starts
Nov 24 09:28:09 compute-1 ceph-osd[77497]: log_channel(cluster) log [DBG] : 5.9 scrub ok
Nov 24 09:28:10 compute-1 ceph-mgr[80316]: [dashboard INFO root] Engine started...
Nov 24 09:28:10 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e47 e47: 3 total, 3 up, 3 in
Nov 24 09:28:10 compute-1 ceph-mon[80009]: 6.15 scrub starts
Nov 24 09:28:10 compute-1 ceph-mon[80009]: 6.15 scrub ok
Nov 24 09:28:10 compute-1 ceph-mon[80009]: 7.1f scrub starts
Nov 24 09:28:10 compute-1 ceph-mon[80009]: 7.1f scrub ok
Nov 24 09:28:10 compute-1 ceph-mon[80009]: 7.13 scrub starts
Nov 24 09:28:10 compute-1 ceph-mon[80009]: 7.13 scrub ok
Nov 24 09:28:10 compute-1 ceph-mon[80009]: Standby manager daemon compute-2.rzcnzg restarted
Nov 24 09:28:10 compute-1 ceph-mon[80009]: Standby manager daemon compute-2.rzcnzg started
Nov 24 09:28:10 compute-1 ceph-mon[80009]: Standby manager daemon compute-1.qelqsg restarted
Nov 24 09:28:10 compute-1 ceph-mon[80009]: Standby manager daemon compute-1.qelqsg started
Nov 24 09:28:10 compute-1 ceph-mon[80009]: Active manager daemon compute-0.mauvni restarted
Nov 24 09:28:10 compute-1 ceph-mon[80009]: Activating manager daemon compute-0.mauvni
Nov 24 09:28:10 compute-1 ceph-mon[80009]: osdmap e47: 3 total, 3 up, 3 in
Nov 24 09:28:10 compute-1 ceph-mon[80009]: mgrmap e21: compute-0.mauvni(active, starting, since 0.0317391s), standbys: compute-2.rzcnzg, compute-1.qelqsg
Nov 24 09:28:10 compute-1 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Nov 24 09:28:10 compute-1 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Nov 24 09:28:10 compute-1 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Nov 24 09:28:10 compute-1 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "mgr metadata", "who": "compute-0.mauvni", "id": "compute-0.mauvni"}]: dispatch
Nov 24 09:28:10 compute-1 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "mgr metadata", "who": "compute-2.rzcnzg", "id": "compute-2.rzcnzg"}]: dispatch
Nov 24 09:28:10 compute-1 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "mgr metadata", "who": "compute-1.qelqsg", "id": "compute-1.qelqsg"}]: dispatch
Nov 24 09:28:10 compute-1 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 24 09:28:10 compute-1 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 24 09:28:10 compute-1 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 24 09:28:10 compute-1 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "mds metadata"}]: dispatch
Nov 24 09:28:10 compute-1 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd metadata"}]: dispatch
Nov 24 09:28:10 compute-1 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "mon metadata"}]: dispatch
Nov 24 09:28:10 compute-1 ceph-mon[80009]: Manager daemon compute-0.mauvni is now available
Nov 24 09:28:10 compute-1 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.mauvni/mirror_snapshot_schedule"}]: dispatch
Nov 24 09:28:10 compute-1 sshd-session[83431]: Accepted publickey for ceph-admin from 192.168.122.100 port 57780 ssh2: RSA SHA256:d901dNHY28a6fGfVJZBiZ/6DokdrVSFZakqDQ7cQMIA
Nov 24 09:28:10 compute-1 systemd-logind[823]: New session 34 of user ceph-admin.
Nov 24 09:28:10 compute-1 systemd[1]: Created slice User Slice of UID 42477.
Nov 24 09:28:10 compute-1 systemd[1]: Starting User Runtime Directory /run/user/42477...
Nov 24 09:28:10 compute-1 systemd[1]: Finished User Runtime Directory /run/user/42477.
Nov 24 09:28:10 compute-1 systemd[1]: Starting User Manager for UID 42477...
Nov 24 09:28:10 compute-1 systemd[83435]: pam_unix(systemd-user:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Nov 24 09:28:10 compute-1 ceph-osd[77497]: log_channel(cluster) log [DBG] : 3.16 scrub starts
Nov 24 09:28:10 compute-1 ceph-osd[77497]: log_channel(cluster) log [DBG] : 3.16 scrub ok
Nov 24 09:28:10 compute-1 systemd[83435]: Queued start job for default target Main User Target.
Nov 24 09:28:10 compute-1 systemd[83435]: Created slice User Application Slice.
Nov 24 09:28:10 compute-1 systemd[83435]: Started Mark boot as successful after the user session has run 2 minutes.
Nov 24 09:28:10 compute-1 systemd[83435]: Started Daily Cleanup of User's Temporary Directories.
Nov 24 09:28:10 compute-1 systemd[83435]: Reached target Paths.
Nov 24 09:28:10 compute-1 systemd[83435]: Reached target Timers.
Nov 24 09:28:10 compute-1 systemd[83435]: Starting D-Bus User Message Bus Socket...
Nov 24 09:28:10 compute-1 systemd[83435]: Starting Create User's Volatile Files and Directories...
Nov 24 09:28:11 compute-1 systemd[83435]: Finished Create User's Volatile Files and Directories.
Nov 24 09:28:11 compute-1 systemd[83435]: Listening on D-Bus User Message Bus Socket.
Nov 24 09:28:11 compute-1 systemd[83435]: Reached target Sockets.
Nov 24 09:28:11 compute-1 systemd[83435]: Reached target Basic System.
Nov 24 09:28:11 compute-1 systemd[83435]: Reached target Main User Target.
Nov 24 09:28:11 compute-1 systemd[83435]: Startup finished in 118ms.
Nov 24 09:28:11 compute-1 systemd[1]: Started User Manager for UID 42477.
Nov 24 09:28:11 compute-1 systemd[1]: Started Session 34 of User ceph-admin.
Nov 24 09:28:11 compute-1 sshd-session[83431]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Nov 24 09:28:11 compute-1 sudo[83451]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 09:28:11 compute-1 sudo[83451]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:28:11 compute-1 sudo[83451]: pam_unix(sudo:session): session closed for user root
Nov 24 09:28:11 compute-1 sudo[83476]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ls
Nov 24 09:28:11 compute-1 sudo[83476]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:28:11 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).mds e2 new map
Nov 24 09:28:11 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).mds e2 print_map
                                           e2
                                           btime 2025-11-24T09:28:11.441297+0000
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        2
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2025-11-24T09:28:11.441245+0000
                                           modified        2025-11-24T09:28:11.441245+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           max_mds        1
                                           in        
                                           up        {}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        0
                                           qdb_cluster        leader: 0 members: 
                                            
                                            
Nov 24 09:28:11 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e48 e48: 3 total, 3 up, 3 in
Nov 24 09:28:11 compute-1 ceph-mon[80009]: 5.9 scrub starts
Nov 24 09:28:11 compute-1 ceph-mon[80009]: 5.9 scrub ok
Nov 24 09:28:11 compute-1 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.mauvni/trash_purge_schedule"}]: dispatch
Nov 24 09:28:11 compute-1 ceph-mon[80009]: 2.d scrub starts
Nov 24 09:28:11 compute-1 ceph-mon[80009]: 2.d scrub ok
Nov 24 09:28:11 compute-1 ceph-mon[80009]: 7.6 scrub starts
Nov 24 09:28:11 compute-1 ceph-mon[80009]: 7.6 scrub ok
Nov 24 09:28:11 compute-1 ceph-mon[80009]: mgrmap e22: compute-0.mauvni(active, since 1.05786s), standbys: compute-2.rzcnzg, compute-1.qelqsg
Nov 24 09:28:11 compute-1 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"}]: dispatch
Nov 24 09:28:11 compute-1 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd=[{"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"}]: dispatch
Nov 24 09:28:11 compute-1 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]: dispatch
Nov 24 09:28:11 compute-1 ceph-mon[80009]: Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Nov 24 09:28:11 compute-1 ceph-mon[80009]: Health check failed: 1 filesystem is online with fewer MDS than max_mds (MDS_UP_LESS_THAN_MAX)
Nov 24 09:28:11 compute-1 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]': finished
Nov 24 09:28:11 compute-1 ceph-mon[80009]: osdmap e48: 3 total, 3 up, 3 in
Nov 24 09:28:11 compute-1 ceph-mon[80009]: fsmap cephfs:0
Nov 24 09:28:11 compute-1 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:28:11 compute-1 podman[83567]: 2025-11-24 09:28:11.702530455 +0000 UTC m=+0.057333960 container exec fca3d6a645ca50145f34396c21cf8798c75622ec7e27bb7d7b9d2df471762abc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-crash-compute-1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 09:28:11 compute-1 podman[83567]: 2025-11-24 09:28:11.8058782 +0000 UTC m=+0.160681685 container exec_died fca3d6a645ca50145f34396c21cf8798c75622ec7e27bb7d7b9d2df471762abc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-crash-compute-1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_REF=squid, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 09:28:11 compute-1 ceph-osd[77497]: log_channel(cluster) log [DBG] : 3.f deep-scrub starts
Nov 24 09:28:11 compute-1 ceph-osd[77497]: log_channel(cluster) log [DBG] : 3.f deep-scrub ok
Nov 24 09:28:12 compute-1 sudo[83476]: pam_unix(sudo:session): session closed for user root
Nov 24 09:28:12 compute-1 sudo[83672]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 09:28:12 compute-1 sudo[83672]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:28:12 compute-1 sudo[83672]: pam_unix(sudo:session): session closed for user root
Nov 24 09:28:12 compute-1 sudo[83697]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Nov 24 09:28:12 compute-1 sudo[83697]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:28:12 compute-1 ceph-mon[80009]: 3.16 scrub starts
Nov 24 09:28:12 compute-1 ceph-mon[80009]: 3.16 scrub ok
Nov 24 09:28:12 compute-1 ceph-mon[80009]: Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2
Nov 24 09:28:12 compute-1 ceph-mon[80009]: 2.12 deep-scrub starts
Nov 24 09:28:12 compute-1 ceph-mon[80009]: 2.12 deep-scrub ok
Nov 24 09:28:12 compute-1 ceph-mon[80009]: 3.4 scrub starts
Nov 24 09:28:12 compute-1 ceph-mon[80009]: 3.4 scrub ok
Nov 24 09:28:12 compute-1 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:28:12 compute-1 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:28:12 compute-1 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:28:12 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e48 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 09:28:12 compute-1 sudo[83697]: pam_unix(sudo:session): session closed for user root
Nov 24 09:28:12 compute-1 ceph-osd[77497]: log_channel(cluster) log [DBG] : 3.c scrub starts
Nov 24 09:28:12 compute-1 ceph-osd[77497]: log_channel(cluster) log [DBG] : 3.c scrub ok
Nov 24 09:28:12 compute-1 sudo[83753]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 09:28:12 compute-1 sudo[83753]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:28:12 compute-1 sudo[83753]: pam_unix(sudo:session): session closed for user root
Nov 24 09:28:12 compute-1 sudo[83778]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 list-networks
Nov 24 09:28:12 compute-1 sudo[83778]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:28:13 compute-1 sudo[83778]: pam_unix(sudo:session): session closed for user root
Nov 24 09:28:13 compute-1 ceph-mon[80009]: 3.f deep-scrub starts
Nov 24 09:28:13 compute-1 ceph-mon[80009]: 3.f deep-scrub ok
Nov 24 09:28:13 compute-1 ceph-mon[80009]: pgmap v5: 197 pgs: 197 active+clean; 454 KiB data, 102 MiB used, 60 GiB / 60 GiB avail
Nov 24 09:28:13 compute-1 ceph-mon[80009]: [24/Nov/2025:09:28:12] ENGINE Bus STARTING
Nov 24 09:28:13 compute-1 ceph-mon[80009]: from='client.14508 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 09:28:13 compute-1 ceph-mon[80009]: Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2
Nov 24 09:28:13 compute-1 ceph-mon[80009]: [24/Nov/2025:09:28:12] ENGINE Serving on https://192.168.122.100:7150
Nov 24 09:28:13 compute-1 ceph-mon[80009]: [24/Nov/2025:09:28:12] ENGINE Client ('192.168.122.100', 34270) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Nov 24 09:28:13 compute-1 ceph-mon[80009]: 2.b scrub starts
Nov 24 09:28:13 compute-1 ceph-mon[80009]: 2.b scrub ok
Nov 24 09:28:13 compute-1 ceph-mon[80009]: [24/Nov/2025:09:28:12] ENGINE Serving on http://192.168.122.100:8765
Nov 24 09:28:13 compute-1 ceph-mon[80009]: [24/Nov/2025:09:28:12] ENGINE Bus STARTED
Nov 24 09:28:13 compute-1 ceph-mon[80009]: 3.1 scrub starts
Nov 24 09:28:13 compute-1 ceph-mon[80009]: 3.1 scrub ok
Nov 24 09:28:13 compute-1 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:28:13 compute-1 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:28:13 compute-1 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:28:13 compute-1 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:28:13 compute-1 ceph-mon[80009]: 3.c scrub starts
Nov 24 09:28:13 compute-1 ceph-mon[80009]: 3.c scrub ok
Nov 24 09:28:13 compute-1 ceph-mon[80009]: mgrmap e23: compute-0.mauvni(active, since 2s), standbys: compute-2.rzcnzg, compute-1.qelqsg
Nov 24 09:28:13 compute-1 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:28:13 compute-1 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:28:13 compute-1 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Nov 24 09:28:13 compute-1 ceph-osd[77497]: log_channel(cluster) log [DBG] : 3.a scrub starts
Nov 24 09:28:13 compute-1 ceph-osd[77497]: log_channel(cluster) log [DBG] : 3.a scrub ok
Nov 24 09:28:14 compute-1 sudo[83821]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Nov 24 09:28:14 compute-1 sudo[83821]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:28:14 compute-1 sudo[83821]: pam_unix(sudo:session): session closed for user root
Nov 24 09:28:14 compute-1 sudo[83846]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-84a084c3-61a7-5de7-8207-1f88efa59a64/etc/ceph
Nov 24 09:28:14 compute-1 sudo[83846]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:28:14 compute-1 sudo[83846]: pam_unix(sudo:session): session closed for user root
Nov 24 09:28:14 compute-1 sudo[83871]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-84a084c3-61a7-5de7-8207-1f88efa59a64/etc/ceph/ceph.conf.new
Nov 24 09:28:14 compute-1 sudo[83871]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:28:14 compute-1 sudo[83871]: pam_unix(sudo:session): session closed for user root
Nov 24 09:28:14 compute-1 sudo[83896]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-84a084c3-61a7-5de7-8207-1f88efa59a64
Nov 24 09:28:14 compute-1 sudo[83896]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:28:14 compute-1 sudo[83896]: pam_unix(sudo:session): session closed for user root
Nov 24 09:28:14 compute-1 sudo[83921]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-84a084c3-61a7-5de7-8207-1f88efa59a64/etc/ceph/ceph.conf.new
Nov 24 09:28:14 compute-1 sudo[83921]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:28:14 compute-1 sudo[83921]: pam_unix(sudo:session): session closed for user root
Nov 24 09:28:14 compute-1 sudo[83969]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-84a084c3-61a7-5de7-8207-1f88efa59a64/etc/ceph/ceph.conf.new
Nov 24 09:28:14 compute-1 sudo[83969]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:28:14 compute-1 sudo[83969]: pam_unix(sudo:session): session closed for user root
Nov 24 09:28:14 compute-1 ceph-mon[80009]: 2.18 deep-scrub starts
Nov 24 09:28:14 compute-1 ceph-mon[80009]: 2.18 deep-scrub ok
Nov 24 09:28:14 compute-1 ceph-mon[80009]: from='client.14520 -' entity='client.admin' cmd=[{"prefix": "nfs cluster create", "cluster_id": "cephfs", "ingress": true, "virtual_ip": "192.168.122.2/24", "ingress_mode": "haproxy-protocol", "placement": "compute-0 compute-1 compute-2 ", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 09:28:14 compute-1 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool create", "pool": ".nfs", "yes_i_really_mean_it": true}]: dispatch
Nov 24 09:28:14 compute-1 ceph-mon[80009]: 5.19 deep-scrub starts
Nov 24 09:28:14 compute-1 ceph-mon[80009]: 5.19 deep-scrub ok
Nov 24 09:28:14 compute-1 ceph-mon[80009]: 3.a scrub starts
Nov 24 09:28:14 compute-1 ceph-mon[80009]: 3.a scrub ok
Nov 24 09:28:14 compute-1 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:28:14 compute-1 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:28:14 compute-1 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 24 09:28:14 compute-1 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:28:14 compute-1 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:28:14 compute-1 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Nov 24 09:28:14 compute-1 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 09:28:14 compute-1 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 09:28:14 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e49 e49: 3 total, 3 up, 3 in
Nov 24 09:28:14 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 49 pg[12.0( empty local-lis/les=0/0 n=0 ec=49/49 lis/c=0/0 les/c/f=0/0/0 sis=49) [1] r=0 lpr=49 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:28:14 compute-1 sudo[83994]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-84a084c3-61a7-5de7-8207-1f88efa59a64/etc/ceph/ceph.conf.new
Nov 24 09:28:14 compute-1 sudo[83994]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:28:14 compute-1 sudo[83994]: pam_unix(sudo:session): session closed for user root
Nov 24 09:28:14 compute-1 sudo[84019]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-84a084c3-61a7-5de7-8207-1f88efa59a64/etc/ceph/ceph.conf.new /etc/ceph/ceph.conf
Nov 24 09:28:14 compute-1 sudo[84019]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:28:14 compute-1 sudo[84019]: pam_unix(sudo:session): session closed for user root
Nov 24 09:28:14 compute-1 sudo[84044]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/config
Nov 24 09:28:14 compute-1 sudo[84044]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:28:14 compute-1 sudo[84044]: pam_unix(sudo:session): session closed for user root
Nov 24 09:28:14 compute-1 sudo[84069]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-84a084c3-61a7-5de7-8207-1f88efa59a64/var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/config
Nov 24 09:28:14 compute-1 sudo[84069]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:28:14 compute-1 sudo[84069]: pam_unix(sudo:session): session closed for user root
Nov 24 09:28:14 compute-1 sudo[84094]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-84a084c3-61a7-5de7-8207-1f88efa59a64/var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/config/ceph.conf.new
Nov 24 09:28:14 compute-1 sudo[84094]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:28:14 compute-1 sudo[84094]: pam_unix(sudo:session): session closed for user root
Nov 24 09:28:14 compute-1 ceph-osd[77497]: log_channel(cluster) log [DBG] : 4.d scrub starts
Nov 24 09:28:14 compute-1 ceph-osd[77497]: log_channel(cluster) log [DBG] : 4.d scrub ok
Nov 24 09:28:14 compute-1 sudo[84119]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-84a084c3-61a7-5de7-8207-1f88efa59a64
Nov 24 09:28:14 compute-1 sudo[84119]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:28:14 compute-1 sudo[84119]: pam_unix(sudo:session): session closed for user root
Nov 24 09:28:14 compute-1 sudo[84144]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-84a084c3-61a7-5de7-8207-1f88efa59a64/var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/config/ceph.conf.new
Nov 24 09:28:14 compute-1 sudo[84144]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:28:14 compute-1 sudo[84144]: pam_unix(sudo:session): session closed for user root
Nov 24 09:28:15 compute-1 sudo[84192]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-84a084c3-61a7-5de7-8207-1f88efa59a64/var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/config/ceph.conf.new
Nov 24 09:28:15 compute-1 sudo[84192]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:28:15 compute-1 sudo[84192]: pam_unix(sudo:session): session closed for user root
Nov 24 09:28:15 compute-1 sudo[84217]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-84a084c3-61a7-5de7-8207-1f88efa59a64/var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/config/ceph.conf.new
Nov 24 09:28:15 compute-1 sudo[84217]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:28:15 compute-1 sudo[84217]: pam_unix(sudo:session): session closed for user root
Nov 24 09:28:15 compute-1 sudo[84242]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-84a084c3-61a7-5de7-8207-1f88efa59a64/var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/config/ceph.conf.new /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/config/ceph.conf
Nov 24 09:28:15 compute-1 sudo[84242]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:28:15 compute-1 sudo[84242]: pam_unix(sudo:session): session closed for user root
Nov 24 09:28:15 compute-1 sudo[84267]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Nov 24 09:28:15 compute-1 sudo[84267]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:28:15 compute-1 sudo[84267]: pam_unix(sudo:session): session closed for user root
Nov 24 09:28:15 compute-1 sudo[84292]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-84a084c3-61a7-5de7-8207-1f88efa59a64/etc/ceph
Nov 24 09:28:15 compute-1 sudo[84292]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:28:15 compute-1 sudo[84292]: pam_unix(sudo:session): session closed for user root
Nov 24 09:28:15 compute-1 sudo[84317]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-84a084c3-61a7-5de7-8207-1f88efa59a64/etc/ceph/ceph.client.admin.keyring.new
Nov 24 09:28:15 compute-1 sudo[84317]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:28:15 compute-1 sudo[84317]: pam_unix(sudo:session): session closed for user root
Nov 24 09:28:15 compute-1 sudo[84342]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-84a084c3-61a7-5de7-8207-1f88efa59a64
Nov 24 09:28:15 compute-1 sudo[84342]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:28:15 compute-1 sudo[84342]: pam_unix(sudo:session): session closed for user root
Nov 24 09:28:15 compute-1 sudo[84367]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-84a084c3-61a7-5de7-8207-1f88efa59a64/etc/ceph/ceph.client.admin.keyring.new
Nov 24 09:28:15 compute-1 sudo[84367]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:28:15 compute-1 sudo[84367]: pam_unix(sudo:session): session closed for user root
Nov 24 09:28:15 compute-1 sudo[84415]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-84a084c3-61a7-5de7-8207-1f88efa59a64/etc/ceph/ceph.client.admin.keyring.new
Nov 24 09:28:15 compute-1 sudo[84415]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:28:15 compute-1 sudo[84415]: pam_unix(sudo:session): session closed for user root
Nov 24 09:28:15 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e50 e50: 3 total, 3 up, 3 in
Nov 24 09:28:15 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 50 pg[12.0( empty local-lis/les=49/50 n=0 ec=49/49 lis/c=0/0 les/c/f=0/0/0 sis=49) [1] r=0 lpr=49 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:28:15 compute-1 sudo[84440]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-84a084c3-61a7-5de7-8207-1f88efa59a64/etc/ceph/ceph.client.admin.keyring.new
Nov 24 09:28:15 compute-1 ceph-mon[80009]: Updating compute-0:/etc/ceph/ceph.conf
Nov 24 09:28:15 compute-1 ceph-mon[80009]: Updating compute-1:/etc/ceph/ceph.conf
Nov 24 09:28:15 compute-1 ceph-mon[80009]: Updating compute-2:/etc/ceph/ceph.conf
Nov 24 09:28:15 compute-1 ceph-mon[80009]: pgmap v6: 197 pgs: 197 active+clean; 454 KiB data, 102 MiB used, 60 GiB / 60 GiB avail
Nov 24 09:28:15 compute-1 ceph-mon[80009]: 4.2 scrub starts
Nov 24 09:28:15 compute-1 ceph-mon[80009]: 4.2 scrub ok
Nov 24 09:28:15 compute-1 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "osd pool create", "pool": ".nfs", "yes_i_really_mean_it": true}]': finished
Nov 24 09:28:15 compute-1 ceph-mon[80009]: osdmap e49: 3 total, 3 up, 3 in
Nov 24 09:28:15 compute-1 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool application enable", "pool": ".nfs", "app": "nfs"}]: dispatch
Nov 24 09:28:15 compute-1 ceph-mon[80009]: Updating compute-1:/var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/config/ceph.conf
Nov 24 09:28:15 compute-1 ceph-mon[80009]: Updating compute-0:/var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/config/ceph.conf
Nov 24 09:28:15 compute-1 ceph-mon[80009]: 3.6 scrub starts
Nov 24 09:28:15 compute-1 ceph-mon[80009]: 3.6 scrub ok
Nov 24 09:28:15 compute-1 ceph-mon[80009]: 4.d scrub starts
Nov 24 09:28:15 compute-1 ceph-mon[80009]: 4.d scrub ok
Nov 24 09:28:15 compute-1 ceph-mon[80009]: Updating compute-2:/var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/config/ceph.conf
Nov 24 09:28:15 compute-1 ceph-mon[80009]: mgrmap e24: compute-0.mauvni(active, since 4s), standbys: compute-2.rzcnzg, compute-1.qelqsg
Nov 24 09:28:15 compute-1 sudo[84440]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:28:15 compute-1 sudo[84440]: pam_unix(sudo:session): session closed for user root
Nov 24 09:28:15 compute-1 sudo[84465]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-84a084c3-61a7-5de7-8207-1f88efa59a64/etc/ceph/ceph.client.admin.keyring.new /etc/ceph/ceph.client.admin.keyring
Nov 24 09:28:15 compute-1 sudo[84465]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:28:15 compute-1 sudo[84465]: pam_unix(sudo:session): session closed for user root
Nov 24 09:28:15 compute-1 sudo[84490]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/config
Nov 24 09:28:15 compute-1 sudo[84490]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:28:15 compute-1 sudo[84490]: pam_unix(sudo:session): session closed for user root
Nov 24 09:28:15 compute-1 sudo[84515]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-84a084c3-61a7-5de7-8207-1f88efa59a64/var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/config
Nov 24 09:28:15 compute-1 sudo[84515]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:28:15 compute-1 sudo[84515]: pam_unix(sudo:session): session closed for user root
Nov 24 09:28:15 compute-1 sudo[84540]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-84a084c3-61a7-5de7-8207-1f88efa59a64/var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/config/ceph.client.admin.keyring.new
Nov 24 09:28:15 compute-1 sudo[84540]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:28:15 compute-1 sudo[84540]: pam_unix(sudo:session): session closed for user root
Nov 24 09:28:15 compute-1 ceph-osd[77497]: log_channel(cluster) log [DBG] : 5.16 deep-scrub starts
Nov 24 09:28:15 compute-1 ceph-osd[77497]: log_channel(cluster) log [DBG] : 5.16 deep-scrub ok
Nov 24 09:28:15 compute-1 sudo[84565]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-84a084c3-61a7-5de7-8207-1f88efa59a64
Nov 24 09:28:15 compute-1 sudo[84565]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:28:15 compute-1 sudo[84565]: pam_unix(sudo:session): session closed for user root
Nov 24 09:28:15 compute-1 sudo[84590]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-84a084c3-61a7-5de7-8207-1f88efa59a64/var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/config/ceph.client.admin.keyring.new
Nov 24 09:28:15 compute-1 sudo[84590]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:28:16 compute-1 sudo[84590]: pam_unix(sudo:session): session closed for user root
Nov 24 09:28:16 compute-1 sudo[84638]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-84a084c3-61a7-5de7-8207-1f88efa59a64/var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/config/ceph.client.admin.keyring.new
Nov 24 09:28:16 compute-1 sudo[84638]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:28:16 compute-1 sudo[84638]: pam_unix(sudo:session): session closed for user root
Nov 24 09:28:16 compute-1 sudo[84663]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-84a084c3-61a7-5de7-8207-1f88efa59a64/var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/config/ceph.client.admin.keyring.new
Nov 24 09:28:16 compute-1 sudo[84663]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:28:16 compute-1 sudo[84663]: pam_unix(sudo:session): session closed for user root
Nov 24 09:28:16 compute-1 sudo[84688]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-84a084c3-61a7-5de7-8207-1f88efa59a64/var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/config/ceph.client.admin.keyring.new /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/config/ceph.client.admin.keyring
Nov 24 09:28:16 compute-1 sudo[84688]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:28:16 compute-1 sudo[84688]: pam_unix(sudo:session): session closed for user root
Nov 24 09:28:16 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e51 e51: 3 total, 3 up, 3 in
Nov 24 09:28:16 compute-1 ceph-mon[80009]: Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Nov 24 09:28:16 compute-1 ceph-mon[80009]: Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Nov 24 09:28:16 compute-1 ceph-mon[80009]: 4.1 deep-scrub starts
Nov 24 09:28:16 compute-1 ceph-mon[80009]: 4.1 deep-scrub ok
Nov 24 09:28:16 compute-1 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "osd pool application enable", "pool": ".nfs", "app": "nfs"}]': finished
Nov 24 09:28:16 compute-1 ceph-mon[80009]: osdmap e50: 3 total, 3 up, 3 in
Nov 24 09:28:16 compute-1 ceph-mon[80009]: Saving service nfs.cephfs spec with placement compute-0;compute-1;compute-2
Nov 24 09:28:16 compute-1 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:28:16 compute-1 ceph-mon[80009]: Saving service ingress.nfs.cephfs spec with placement compute-0;compute-1;compute-2
Nov 24 09:28:16 compute-1 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:28:16 compute-1 ceph-mon[80009]: Updating compute-1:/var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/config/ceph.client.admin.keyring
Nov 24 09:28:16 compute-1 ceph-mon[80009]: Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Nov 24 09:28:16 compute-1 ceph-mon[80009]: 5.6 scrub starts
Nov 24 09:28:16 compute-1 ceph-mon[80009]: 5.6 scrub ok
Nov 24 09:28:16 compute-1 ceph-mon[80009]: Updating compute-0:/var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/config/ceph.client.admin.keyring
Nov 24 09:28:16 compute-1 ceph-mon[80009]: 5.16 deep-scrub starts
Nov 24 09:28:16 compute-1 ceph-mon[80009]: 5.16 deep-scrub ok
Nov 24 09:28:16 compute-1 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:28:16 compute-1 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:28:16 compute-1 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:28:16 compute-1 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:28:16 compute-1 ceph-mon[80009]: osdmap e51: 3 total, 3 up, 3 in
Nov 24 09:28:16 compute-1 ceph-osd[77497]: log_channel(cluster) log [DBG] : 4.5 scrub starts
Nov 24 09:28:16 compute-1 ceph-osd[77497]: log_channel(cluster) log [DBG] : 4.5 scrub ok
Nov 24 09:28:17 compute-1 sudo[84713]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 09:28:17 compute-1 sudo[84713]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:28:17 compute-1 sudo[84713]: pam_unix(sudo:session): session closed for user root
Nov 24 09:28:17 compute-1 sudo[84738]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/prometheus/node-exporter:v1.7.0 --timeout 895 _orch deploy --fsid 84a084c3-61a7-5de7-8207-1f88efa59a64
Nov 24 09:28:17 compute-1 sudo[84738]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:28:17 compute-1 ceph-mon[80009]: pgmap v9: 198 pgs: 1 unknown, 197 active+clean; 454 KiB data, 102 MiB used, 60 GiB / 60 GiB avail
Nov 24 09:28:17 compute-1 ceph-mon[80009]: 4.19 scrub starts
Nov 24 09:28:17 compute-1 ceph-mon[80009]: 4.19 scrub ok
Nov 24 09:28:17 compute-1 ceph-mon[80009]: Updating compute-2:/var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/config/ceph.client.admin.keyring
Nov 24 09:28:17 compute-1 ceph-mon[80009]: 3.2 scrub starts
Nov 24 09:28:17 compute-1 ceph-mon[80009]: 3.2 scrub ok
Nov 24 09:28:17 compute-1 ceph-mon[80009]: 4.5 scrub starts
Nov 24 09:28:17 compute-1 ceph-mon[80009]: 4.5 scrub ok
Nov 24 09:28:17 compute-1 ceph-mon[80009]: mgrmap e25: compute-0.mauvni(active, since 6s), standbys: compute-2.rzcnzg, compute-1.qelqsg
Nov 24 09:28:17 compute-1 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:28:17 compute-1 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:28:17 compute-1 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:28:17 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e51 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 09:28:17 compute-1 ceph-osd[77497]: log_channel(cluster) log [DBG] : 3.13 scrub starts
Nov 24 09:28:17 compute-1 systemd[1]: Reloading.
Nov 24 09:28:17 compute-1 ceph-osd[77497]: log_channel(cluster) log [DBG] : 3.13 scrub ok
Nov 24 09:28:18 compute-1 systemd-rc-local-generator[84831]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 09:28:18 compute-1 systemd-sysv-generator[84834]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 09:28:18 compute-1 systemd[1]: Reloading.
Nov 24 09:28:18 compute-1 systemd-rc-local-generator[84867]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 09:28:18 compute-1 systemd-sysv-generator[84874]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 09:28:18 compute-1 systemd[1]: Starting Ceph node-exporter.compute-1 for 84a084c3-61a7-5de7-8207-1f88efa59a64...
Nov 24 09:28:18 compute-1 bash[84926]: Trying to pull quay.io/prometheus/node-exporter:v1.7.0...
Nov 24 09:28:18 compute-1 ceph-mon[80009]: Deploying daemon node-exporter.compute-1 on compute-1
Nov 24 09:28:18 compute-1 ceph-mon[80009]: 4.3 scrub starts
Nov 24 09:28:18 compute-1 ceph-mon[80009]: 4.3 scrub ok
Nov 24 09:28:18 compute-1 ceph-mon[80009]: 5.c scrub starts
Nov 24 09:28:18 compute-1 ceph-mon[80009]: 5.c scrub ok
Nov 24 09:28:18 compute-1 ceph-mon[80009]: 3.13 scrub starts
Nov 24 09:28:18 compute-1 ceph-mon[80009]: 3.13 scrub ok
Nov 24 09:28:18 compute-1 ceph-mon[80009]: from='client.? 192.168.122.100:0/1364618523' entity='client.admin' cmd=[{"prefix": "auth import"}]: dispatch
Nov 24 09:28:18 compute-1 ceph-mon[80009]: from='client.? 192.168.122.100:0/1364618523' entity='client.admin' cmd='[{"prefix": "auth import"}]': finished
Nov 24 09:28:18 compute-1 ceph-osd[77497]: log_channel(cluster) log [DBG] : 4.a scrub starts
Nov 24 09:28:18 compute-1 ceph-osd[77497]: log_channel(cluster) log [DBG] : 4.a scrub ok
Nov 24 09:28:19 compute-1 bash[84926]: Getting image source signatures
Nov 24 09:28:19 compute-1 bash[84926]: Copying blob sha256:324153f2810a9927fcce320af9e4e291e0b6e805cbdd1f338386c756b9defa24
Nov 24 09:28:19 compute-1 bash[84926]: Copying blob sha256:2abcce694348cd2c949c0e98a7400ebdfd8341021bcf6b541bc72033ce982510
Nov 24 09:28:19 compute-1 bash[84926]: Copying blob sha256:455fd88e5221bc1e278ef2d059cd70e4df99a24e5af050ede621534276f6cf9a
Nov 24 09:28:19 compute-1 ceph-mon[80009]: pgmap v11: 198 pgs: 198 active+clean; 454 KiB data, 102 MiB used, 60 GiB / 60 GiB avail; 29 KiB/s rd, 0 B/s wr, 12 op/s
Nov 24 09:28:19 compute-1 ceph-mon[80009]: 4.6 scrub starts
Nov 24 09:28:19 compute-1 ceph-mon[80009]: 4.6 scrub ok
Nov 24 09:28:19 compute-1 ceph-mon[80009]: 2.e scrub starts
Nov 24 09:28:19 compute-1 ceph-mon[80009]: 2.e scrub ok
Nov 24 09:28:19 compute-1 ceph-mon[80009]: 4.a scrub starts
Nov 24 09:28:19 compute-1 ceph-mon[80009]: 4.a scrub ok
Nov 24 09:28:19 compute-1 ceph-mon[80009]: from='client.? 192.168.122.100:0/3211639658' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Nov 24 09:28:19 compute-1 bash[84926]: Copying config sha256:72c9c208898624938c9e4183d6686ea4a5fd3f912bc29bc3f00147924c521a3e
Nov 24 09:28:19 compute-1 bash[84926]: Writing manifest to image destination
Nov 24 09:28:19 compute-1 podman[84926]: 2025-11-24 09:28:19.751639929 +0000 UTC m=+1.110649436 container create 8385dba62896146966763f0bcd6866f05f5474182998a6b8c2dabcbf77545f8c (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-node-exporter-compute-1, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 24 09:28:19 compute-1 podman[84926]: 2025-11-24 09:28:19.737969786 +0000 UTC m=+1.096979323 image pull 72c9c208898624938c9e4183d6686ea4a5fd3f912bc29bc3f00147924c521a3e quay.io/prometheus/node-exporter:v1.7.0
Nov 24 09:28:19 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9fc3d7f2d63c23da242ef46afd56f4d2787380c398999b23fcc357c97e197be/merged/etc/node-exporter supports timestamps until 2038 (0x7fffffff)
Nov 24 09:28:19 compute-1 podman[84926]: 2025-11-24 09:28:19.794878225 +0000 UTC m=+1.153887752 container init 8385dba62896146966763f0bcd6866f05f5474182998a6b8c2dabcbf77545f8c (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-node-exporter-compute-1, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 24 09:28:19 compute-1 podman[84926]: 2025-11-24 09:28:19.798686651 +0000 UTC m=+1.157696158 container start 8385dba62896146966763f0bcd6866f05f5474182998a6b8c2dabcbf77545f8c (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-node-exporter-compute-1, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 24 09:28:19 compute-1 bash[84926]: 8385dba62896146966763f0bcd6866f05f5474182998a6b8c2dabcbf77545f8c
Nov 24 09:28:19 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-node-exporter-compute-1[85002]: ts=2025-11-24T09:28:19.805Z caller=node_exporter.go:192 level=info msg="Starting node_exporter" version="(version=1.7.0, branch=HEAD, revision=7333465abf9efba81876303bb57e6fadb946041b)"
Nov 24 09:28:19 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-node-exporter-compute-1[85002]: ts=2025-11-24T09:28:19.805Z caller=node_exporter.go:193 level=info msg="Build context" build_context="(go=go1.21.4, platform=linux/amd64, user=root@35918982f6d8, date=20231112-23:53:35, tags=netgo osusergo static_build)"
Nov 24 09:28:19 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-node-exporter-compute-1[85002]: ts=2025-11-24T09:28:19.805Z caller=diskstats_common.go:111 level=info collector=diskstats msg="Parsed flag --collector.diskstats.device-exclude" flag=^(ram|loop|fd|(h|s|v|xv)d[a-z]|nvme\d+n\d+p)\d+$
Nov 24 09:28:19 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-node-exporter-compute-1[85002]: ts=2025-11-24T09:28:19.805Z caller=diskstats_linux.go:265 level=error collector=diskstats msg="Failed to open directory, disabling udev device properties" path=/run/udev/data
Nov 24 09:28:19 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-node-exporter-compute-1[85002]: ts=2025-11-24T09:28:19.806Z caller=filesystem_common.go:111 level=info collector=filesystem msg="Parsed flag --collector.filesystem.mount-points-exclude" flag=^/(dev|proc|run/credentials/.+|sys|var/lib/docker/.+|var/lib/containers/storage/.+)($|/)
Nov 24 09:28:19 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-node-exporter-compute-1[85002]: ts=2025-11-24T09:28:19.806Z caller=filesystem_common.go:113 level=info collector=filesystem msg="Parsed flag --collector.filesystem.fs-types-exclude" flag=^(autofs|binfmt_misc|bpf|cgroup2?|configfs|debugfs|devpts|devtmpfs|fusectl|hugetlbfs|iso9660|mqueue|nsfs|overlay|proc|procfs|pstore|rpc_pipefs|securityfs|selinuxfs|squashfs|sysfs|tracefs)$
Nov 24 09:28:19 compute-1 systemd[1]: Started Ceph node-exporter.compute-1 for 84a084c3-61a7-5de7-8207-1f88efa59a64.
Nov 24 09:28:19 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-node-exporter-compute-1[85002]: ts=2025-11-24T09:28:19.807Z caller=node_exporter.go:110 level=info msg="Enabled collectors"
Nov 24 09:28:19 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-node-exporter-compute-1[85002]: ts=2025-11-24T09:28:19.807Z caller=node_exporter.go:117 level=info collector=arp
Nov 24 09:28:19 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-node-exporter-compute-1[85002]: ts=2025-11-24T09:28:19.807Z caller=node_exporter.go:117 level=info collector=bcache
Nov 24 09:28:19 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-node-exporter-compute-1[85002]: ts=2025-11-24T09:28:19.807Z caller=node_exporter.go:117 level=info collector=bonding
Nov 24 09:28:19 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-node-exporter-compute-1[85002]: ts=2025-11-24T09:28:19.807Z caller=node_exporter.go:117 level=info collector=btrfs
Nov 24 09:28:19 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-node-exporter-compute-1[85002]: ts=2025-11-24T09:28:19.807Z caller=node_exporter.go:117 level=info collector=conntrack
Nov 24 09:28:19 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-node-exporter-compute-1[85002]: ts=2025-11-24T09:28:19.807Z caller=node_exporter.go:117 level=info collector=cpu
Nov 24 09:28:19 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-node-exporter-compute-1[85002]: ts=2025-11-24T09:28:19.807Z caller=node_exporter.go:117 level=info collector=cpufreq
Nov 24 09:28:19 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-node-exporter-compute-1[85002]: ts=2025-11-24T09:28:19.807Z caller=node_exporter.go:117 level=info collector=diskstats
Nov 24 09:28:19 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-node-exporter-compute-1[85002]: ts=2025-11-24T09:28:19.807Z caller=node_exporter.go:117 level=info collector=dmi
Nov 24 09:28:19 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-node-exporter-compute-1[85002]: ts=2025-11-24T09:28:19.807Z caller=node_exporter.go:117 level=info collector=edac
Nov 24 09:28:19 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-node-exporter-compute-1[85002]: ts=2025-11-24T09:28:19.807Z caller=node_exporter.go:117 level=info collector=entropy
Nov 24 09:28:19 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-node-exporter-compute-1[85002]: ts=2025-11-24T09:28:19.807Z caller=node_exporter.go:117 level=info collector=fibrechannel
Nov 24 09:28:19 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-node-exporter-compute-1[85002]: ts=2025-11-24T09:28:19.807Z caller=node_exporter.go:117 level=info collector=filefd
Nov 24 09:28:19 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-node-exporter-compute-1[85002]: ts=2025-11-24T09:28:19.807Z caller=node_exporter.go:117 level=info collector=filesystem
Nov 24 09:28:19 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-node-exporter-compute-1[85002]: ts=2025-11-24T09:28:19.807Z caller=node_exporter.go:117 level=info collector=hwmon
Nov 24 09:28:19 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-node-exporter-compute-1[85002]: ts=2025-11-24T09:28:19.807Z caller=node_exporter.go:117 level=info collector=infiniband
Nov 24 09:28:19 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-node-exporter-compute-1[85002]: ts=2025-11-24T09:28:19.807Z caller=node_exporter.go:117 level=info collector=ipvs
Nov 24 09:28:19 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-node-exporter-compute-1[85002]: ts=2025-11-24T09:28:19.807Z caller=node_exporter.go:117 level=info collector=loadavg
Nov 24 09:28:19 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-node-exporter-compute-1[85002]: ts=2025-11-24T09:28:19.807Z caller=node_exporter.go:117 level=info collector=mdadm
Nov 24 09:28:19 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-node-exporter-compute-1[85002]: ts=2025-11-24T09:28:19.807Z caller=node_exporter.go:117 level=info collector=meminfo
Nov 24 09:28:19 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-node-exporter-compute-1[85002]: ts=2025-11-24T09:28:19.807Z caller=node_exporter.go:117 level=info collector=netclass
Nov 24 09:28:19 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-node-exporter-compute-1[85002]: ts=2025-11-24T09:28:19.807Z caller=node_exporter.go:117 level=info collector=netdev
Nov 24 09:28:19 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-node-exporter-compute-1[85002]: ts=2025-11-24T09:28:19.807Z caller=node_exporter.go:117 level=info collector=netstat
Nov 24 09:28:19 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-node-exporter-compute-1[85002]: ts=2025-11-24T09:28:19.807Z caller=node_exporter.go:117 level=info collector=nfs
Nov 24 09:28:19 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-node-exporter-compute-1[85002]: ts=2025-11-24T09:28:19.807Z caller=node_exporter.go:117 level=info collector=nfsd
Nov 24 09:28:19 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-node-exporter-compute-1[85002]: ts=2025-11-24T09:28:19.807Z caller=node_exporter.go:117 level=info collector=nvme
Nov 24 09:28:19 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-node-exporter-compute-1[85002]: ts=2025-11-24T09:28:19.807Z caller=node_exporter.go:117 level=info collector=os
Nov 24 09:28:19 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-node-exporter-compute-1[85002]: ts=2025-11-24T09:28:19.807Z caller=node_exporter.go:117 level=info collector=powersupplyclass
Nov 24 09:28:19 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-node-exporter-compute-1[85002]: ts=2025-11-24T09:28:19.807Z caller=node_exporter.go:117 level=info collector=pressure
Nov 24 09:28:19 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-node-exporter-compute-1[85002]: ts=2025-11-24T09:28:19.807Z caller=node_exporter.go:117 level=info collector=rapl
Nov 24 09:28:19 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-node-exporter-compute-1[85002]: ts=2025-11-24T09:28:19.807Z caller=node_exporter.go:117 level=info collector=schedstat
Nov 24 09:28:19 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-node-exporter-compute-1[85002]: ts=2025-11-24T09:28:19.807Z caller=node_exporter.go:117 level=info collector=selinux
Nov 24 09:28:19 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-node-exporter-compute-1[85002]: ts=2025-11-24T09:28:19.807Z caller=node_exporter.go:117 level=info collector=sockstat
Nov 24 09:28:19 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-node-exporter-compute-1[85002]: ts=2025-11-24T09:28:19.807Z caller=node_exporter.go:117 level=info collector=softnet
Nov 24 09:28:19 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-node-exporter-compute-1[85002]: ts=2025-11-24T09:28:19.807Z caller=node_exporter.go:117 level=info collector=stat
Nov 24 09:28:19 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-node-exporter-compute-1[85002]: ts=2025-11-24T09:28:19.807Z caller=node_exporter.go:117 level=info collector=tapestats
Nov 24 09:28:19 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-node-exporter-compute-1[85002]: ts=2025-11-24T09:28:19.807Z caller=node_exporter.go:117 level=info collector=textfile
Nov 24 09:28:19 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-node-exporter-compute-1[85002]: ts=2025-11-24T09:28:19.807Z caller=node_exporter.go:117 level=info collector=thermal_zone
Nov 24 09:28:19 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-node-exporter-compute-1[85002]: ts=2025-11-24T09:28:19.807Z caller=node_exporter.go:117 level=info collector=time
Nov 24 09:28:19 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-node-exporter-compute-1[85002]: ts=2025-11-24T09:28:19.807Z caller=node_exporter.go:117 level=info collector=udp_queues
Nov 24 09:28:19 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-node-exporter-compute-1[85002]: ts=2025-11-24T09:28:19.807Z caller=node_exporter.go:117 level=info collector=uname
Nov 24 09:28:19 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-node-exporter-compute-1[85002]: ts=2025-11-24T09:28:19.807Z caller=node_exporter.go:117 level=info collector=vmstat
Nov 24 09:28:19 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-node-exporter-compute-1[85002]: ts=2025-11-24T09:28:19.807Z caller=node_exporter.go:117 level=info collector=xfs
Nov 24 09:28:19 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-node-exporter-compute-1[85002]: ts=2025-11-24T09:28:19.807Z caller=node_exporter.go:117 level=info collector=zfs
Nov 24 09:28:19 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-node-exporter-compute-1[85002]: ts=2025-11-24T09:28:19.807Z caller=tls_config.go:274 level=info msg="Listening on" address=[::]:9100
Nov 24 09:28:19 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-node-exporter-compute-1[85002]: ts=2025-11-24T09:28:19.807Z caller=tls_config.go:277 level=info msg="TLS is disabled." http2=false address=[::]:9100
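The node-exporter lines above show cephadm's monitoring stack coming up on compute-1: the enabled collectors are registered one by one, then the exporter listens unencrypted on port 9100. A minimal sketch for spot-checking that endpoint, assuming it is reachable from this host and that it uses the standard node_exporter metric exposition (none of this code appears in the log):

    import urllib.request

    # Fetch the Prometheus text exposition from the exporter logged above.
    with urllib.request.urlopen("http://compute-1:9100/metrics", timeout=5) as resp:
        body = resp.read().decode()

    # The 'uname' collector registered above exposes node_uname_info.
    for line in body.splitlines():
        if line.startswith("node_uname_info"):
            print(line)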
Nov 24 09:28:19 compute-1 sudo[84738]: pam_unix(sudo:session): session closed for user root
Nov 24 09:28:19 compute-1 ceph-osd[77497]: log_channel(cluster) log [DBG] : 5.7 scrub starts
Nov 24 09:28:19 compute-1 ceph-osd[77497]: log_channel(cluster) log [DBG] : 5.7 scrub ok
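The scrub starts/ok pairs that dominate this window are routine consistency checks: a plain scrub compares object metadata across a placement group's replicas, while a deep-scrub also reads the object data and verifies checksums. A sketch of scheduling one manually, assuming an admin keyring is available and that pgid 5.7 from the lines above still exists:

    import subprocess

    # 'ceph pg scrub <pgid>' asks the primary OSD to queue the same kind of
    # scrub whose start/ok messages appear throughout this log.
    subprocess.run(["ceph", "pg", "scrub", "5.7"], check=True)
    # Use "deep-scrub" instead to also verify object data checksums.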
Nov 24 09:28:20 compute-1 ceph-mon[80009]: 4.1d scrub starts
Nov 24 09:28:20 compute-1 ceph-mon[80009]: 4.1d scrub ok
Nov 24 09:28:20 compute-1 ceph-mon[80009]: 5.a scrub starts
Nov 24 09:28:20 compute-1 ceph-mon[80009]: 5.a scrub ok
Nov 24 09:28:20 compute-1 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:28:20 compute-1 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:28:20 compute-1 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:28:20 compute-1 ceph-mon[80009]: Deploying daemon node-exporter.compute-2 on compute-2
Nov 24 09:28:20 compute-1 ceph-mon[80009]: 5.7 scrub starts
Nov 24 09:28:20 compute-1 ceph-mon[80009]: 5.7 scrub ok
Nov 24 09:28:20 compute-1 ceph-mon[80009]: from='client.? 192.168.122.100:0/3915495198' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 24 09:28:20 compute-1 ceph-osd[77497]: log_channel(cluster) log [DBG] : 3.d scrub starts
Nov 24 09:28:20 compute-1 ceph-osd[77497]: log_channel(cluster) log [DBG] : 3.d scrub ok
Nov 24 09:28:21 compute-1 ceph-mon[80009]: pgmap v12: 198 pgs: 198 active+clean; 454 KiB data, 102 MiB used, 60 GiB / 60 GiB avail; 29 KiB/s rd, 0 B/s wr, 12 op/s
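The periodic pgmap lines summarize cluster state: 198 placement groups, all active+clean, plus current usage and client I/O rates. A sketch of reading the same summary programmatically, assuming a ceph CLI with the admin keyring and the standard 'ceph status' JSON layout (the key names are assumptions, not taken from this log):

    import json
    import subprocess

    status = json.loads(subprocess.run(
        ["ceph", "status", "--format", "json"],
        check=True, capture_output=True, text=True,
    ).stdout)

    pgmap = status["pgmap"]
    print(pgmap["num_pgs"], "pgs;", pgmap.get("pgs_by_state"))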
Nov 24 09:28:21 compute-1 ceph-mon[80009]: 4.1c scrub starts
Nov 24 09:28:21 compute-1 ceph-mon[80009]: 4.1c scrub ok
Nov 24 09:28:21 compute-1 ceph-mon[80009]: 3.b scrub starts
Nov 24 09:28:21 compute-1 ceph-mon[80009]: 3.b scrub ok
Nov 24 09:28:21 compute-1 ceph-mon[80009]: 3.d scrub starts
Nov 24 09:28:21 compute-1 ceph-mon[80009]: 3.d scrub ok
Nov 24 09:28:21 compute-1 ceph-osd[77497]: log_channel(cluster) log [DBG] : 3.5 scrub starts
Nov 24 09:28:21 compute-1 ceph-osd[77497]: log_channel(cluster) log [DBG] : 3.5 scrub ok
Nov 24 09:28:22 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e51 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
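The _set_new_cache_sizes line records the monitor autotuning its RocksDB and map caches against its memory target; the figures here (roughly 1 GiB split across inc/full/kv allocations) are typical for a small mon. A sketch of inspecting the knob that drives this, assuming the standard mon_memory_target option name (the link between that option and this log line is an inference, not stated in the log):

    import subprocess

    # Prints the mon memory target that the cache autotuner divides into the
    # inc/full/kv allocations logged above.
    subprocess.run(["ceph", "config", "get", "mon", "mon_memory_target"], check=True)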
Nov 24 09:28:22 compute-1 ceph-mon[80009]: 6.1c deep-scrub starts
Nov 24 09:28:22 compute-1 ceph-mon[80009]: 6.1c deep-scrub ok
Nov 24 09:28:22 compute-1 ceph-mon[80009]: 5.17 scrub starts
Nov 24 09:28:22 compute-1 ceph-mon[80009]: 5.17 scrub ok
Nov 24 09:28:22 compute-1 ceph-mon[80009]: from='client.? 192.168.122.100:0/72635421' entity='client.admin' cmd=[{"prefix": "auth get", "entity": "client.openstack"}]: dispatch
Nov 24 09:28:22 compute-1 ceph-mon[80009]: 3.5 scrub starts
Nov 24 09:28:22 compute-1 ceph-mon[80009]: 3.5 scrub ok
Nov 24 09:28:22 compute-1 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:28:22 compute-1 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:28:22 compute-1 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:28:22 compute-1 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:28:22 compute-1 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 09:28:22 compute-1 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 09:28:22 compute-1 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 09:28:22 compute-1 ceph-osd[77497]: log_channel(cluster) log [DBG] : 5.f scrub starts
Nov 24 09:28:22 compute-1 ceph-osd[77497]: log_channel(cluster) log [DBG] : 5.f scrub ok
Nov 24 09:28:23 compute-1 ceph-mon[80009]: pgmap v13: 198 pgs: 198 active+clean; 454 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 0 B/s wr, 9 op/s
Nov 24 09:28:23 compute-1 ceph-mon[80009]: 6.1 scrub starts
Nov 24 09:28:23 compute-1 ceph-mon[80009]: 6.1 scrub ok
Nov 24 09:28:23 compute-1 ceph-mon[80009]: 3.12 scrub starts
Nov 24 09:28:23 compute-1 ceph-mon[80009]: 3.12 scrub ok
Nov 24 09:28:23 compute-1 ceph-mon[80009]: 5.f scrub starts
Nov 24 09:28:23 compute-1 ceph-mon[80009]: 5.f scrub ok
Nov 24 09:28:23 compute-1 ceph-osd[77497]: log_channel(cluster) log [DBG] : 4.e deep-scrub starts
Nov 24 09:28:23 compute-1 ceph-osd[77497]: log_channel(cluster) log [DBG] : 4.e deep-scrub ok
Nov 24 09:28:24 compute-1 ceph-osd[77497]: log_channel(cluster) log [DBG] : 3.3 deep-scrub starts
Nov 24 09:28:24 compute-1 ceph-osd[77497]: log_channel(cluster) log [DBG] : 3.3 deep-scrub ok
Nov 24 09:28:24 compute-1 ceph-mon[80009]: 3.17 scrub starts
Nov 24 09:28:24 compute-1 ceph-mon[80009]: 7.1d scrub starts
Nov 24 09:28:24 compute-1 ceph-mon[80009]: 3.17 scrub ok
Nov 24 09:28:24 compute-1 ceph-mon[80009]: 7.1d scrub ok
Nov 24 09:28:24 compute-1 ceph-mon[80009]: 4.e deep-scrub starts
Nov 24 09:28:24 compute-1 ceph-mon[80009]: 4.e deep-scrub ok
Nov 24 09:28:25 compute-1 ceph-mon[80009]: from='client.14556 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Nov 24 09:28:25 compute-1 ceph-mon[80009]: pgmap v14: 198 pgs: 198 active+clean; 454 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 0 B/s wr, 8 op/s
Nov 24 09:28:25 compute-1 ceph-mon[80009]: 6.1b deep-scrub starts
Nov 24 09:28:25 compute-1 ceph-mon[80009]: 6.1b deep-scrub ok
Nov 24 09:28:25 compute-1 ceph-mon[80009]: 5.14 scrub starts
Nov 24 09:28:25 compute-1 ceph-mon[80009]: 5.14 scrub ok
Nov 24 09:28:25 compute-1 ceph-mon[80009]: 3.3 deep-scrub starts
Nov 24 09:28:25 compute-1 ceph-mon[80009]: 3.3 deep-scrub ok
Nov 24 09:28:25 compute-1 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:28:25 compute-1 ceph-osd[77497]: log_channel(cluster) log [DBG] : 4.18 scrub starts
Nov 24 09:28:26 compute-1 ceph-osd[77497]: log_channel(cluster) log [DBG] : 4.18 scrub ok
Nov 24 09:28:26 compute-1 ceph-mon[80009]: 7.5 scrub starts
Nov 24 09:28:26 compute-1 ceph-mon[80009]: 7.5 scrub ok
Nov 24 09:28:26 compute-1 ceph-mon[80009]: 7.18 scrub starts
Nov 24 09:28:26 compute-1 ceph-mon[80009]: 7.18 scrub ok
Nov 24 09:28:26 compute-1 ceph-mon[80009]: 4.18 scrub starts
Nov 24 09:28:26 compute-1 ceph-mon[80009]: 4.18 scrub ok
Nov 24 09:28:27 compute-1 ceph-osd[77497]: log_channel(cluster) log [DBG] : 5.18 scrub starts
Nov 24 09:28:27 compute-1 ceph-osd[77497]: log_channel(cluster) log [DBG] : 5.18 scrub ok
Nov 24 09:28:27 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e51 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 09:28:28 compute-1 ceph-mon[80009]: pgmap v15: 198 pgs: 198 active+clean; 454 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 0 B/s wr, 7 op/s
Nov 24 09:28:28 compute-1 ceph-mon[80009]: from='client.14562 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Nov 24 09:28:28 compute-1 ceph-mon[80009]: 6.1e scrub starts
Nov 24 09:28:28 compute-1 ceph-mon[80009]: 6.1e scrub ok
Nov 24 09:28:28 compute-1 ceph-mon[80009]: 7.2 scrub starts
Nov 24 09:28:28 compute-1 ceph-mon[80009]: 7.2 scrub ok
Nov 24 09:28:28 compute-1 ceph-mon[80009]: 5.18 scrub starts
Nov 24 09:28:28 compute-1 ceph-mon[80009]: 5.18 scrub ok
Nov 24 09:28:28 compute-1 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:28:28 compute-1 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:28:28 compute-1 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:28:28 compute-1 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:28:28 compute-1 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-2.bbilht", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Nov 24 09:28:28 compute-1 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-2.bbilht", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Nov 24 09:28:28 compute-1 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
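The dispatch/finished pair above is cephadm provisioning an MDS: the mgr mints a per-daemon key whose caps are exactly the ones in the logged command, then emits a minimal ceph.conf for the new container. An equivalent manual invocation, sketched with a hypothetical entity name (the caps mirror the dispatch above):

    import subprocess

    # Same caps as the 'auth get-or-create' dispatched by mgr.compute-0.mauvni;
    # 'mds.cephfs.example' is a placeholder entity, not one from this log.
    subprocess.run(
        ["ceph", "auth", "get-or-create", "mds.cephfs.example",
         "mon", "profile mds",
         "osd", "allow rw tag cephfs *=*",
         "mds", "allow"],
        check=True,
    )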
Nov 24 09:28:28 compute-1 ceph-osd[77497]: log_channel(cluster) log [DBG] : 4.1b scrub starts
Nov 24 09:28:28 compute-1 ceph-osd[77497]: log_channel(cluster) log [DBG] : 4.1b scrub ok
Nov 24 09:28:29 compute-1 ceph-mon[80009]: 6.17 scrub starts
Nov 24 09:28:29 compute-1 ceph-mon[80009]: 6.17 scrub ok
Nov 24 09:28:29 compute-1 ceph-mon[80009]: 7.3 scrub starts
Nov 24 09:28:29 compute-1 ceph-mon[80009]: 7.3 scrub ok
Nov 24 09:28:29 compute-1 ceph-mon[80009]: Saving service rgw.rgw spec with placement compute-0;compute-1;compute-2
Nov 24 09:28:29 compute-1 ceph-mon[80009]: Deploying daemon mds.cephfs.compute-2.bbilht on compute-2
Nov 24 09:28:29 compute-1 ceph-mon[80009]: from='client.14568 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Nov 24 09:28:29 compute-1 ceph-mon[80009]: 4.1b scrub starts
Nov 24 09:28:29 compute-1 ceph-mon[80009]: 4.1b scrub ok
Nov 24 09:28:29 compute-1 ceph-mon[80009]: pgmap v16: 198 pgs: 198 active+clean; 454 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Nov 24 09:28:29 compute-1 ceph-osd[77497]: log_channel(cluster) log [DBG] : 4.c scrub starts
Nov 24 09:28:29 compute-1 ceph-osd[77497]: log_channel(cluster) log [DBG] : 4.c scrub ok
Nov 24 09:28:30 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).mds e3 new map
Nov 24 09:28:30 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).mds e3 print_map
                                           e3
                                           btime 2025-11-24T09:28:30.031773+0000
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        2
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2025-11-24T09:28:11.441245+0000
                                           modified        2025-11-24T09:28:11.441245+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           max_mds        1
                                           in        
                                           up        {}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        0
                                           qdb_cluster        leader: 0 members: 
                                            
                                            
                                           Standby daemons:
                                            
                                           [mds.cephfs.compute-2.bbilht{-1:24181} state up:standby seq 1 addr [v2:192.168.122.102:6804/3576340281,v1:192.168.122.102:6805/3576340281] compat {c=[1],r=[1],i=[1fff]}]
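This print_map dump is worth reading closely: epoch e3 still has an empty 'up' set with max_mds 1, and mds.cephfs.compute-2.bbilht is parked as a standby, so the filesystem has no active rank yet; the following epochs promote it through up:creating to up:active. A sketch of pulling the same map programmatically, assuming the standard 'ceph fs dump' JSON layout:

    import json
    import subprocess

    dump = json.loads(subprocess.run(
        ["ceph", "fs", "dump", "--format", "json"],
        check=True, capture_output=True, text=True,
    ).stdout)

    for fs in dump["filesystems"]:
        m = fs["mdsmap"]
        print(m["fs_name"], "epoch", m["epoch"], "up:", m["up"])
    print("standbys:", len(dump["standbys"]))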
Nov 24 09:28:30 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).mds e4 new map
Nov 24 09:28:30 compute-1 ceph-mon[80009]: 6.12 scrub starts
Nov 24 09:28:30 compute-1 ceph-mon[80009]: 6.12 scrub ok
Nov 24 09:28:30 compute-1 ceph-mon[80009]: 7.4 scrub starts
Nov 24 09:28:30 compute-1 ceph-mon[80009]: 7.4 scrub ok
Nov 24 09:28:30 compute-1 ceph-mon[80009]: 4.c scrub starts
Nov 24 09:28:30 compute-1 ceph-mon[80009]: 4.c scrub ok
Nov 24 09:28:30 compute-1 ceph-mon[80009]: from='client.14574 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Nov 24 09:28:30 compute-1 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:28:30 compute-1 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:28:30 compute-1 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:28:30 compute-1 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.cibmfe", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Nov 24 09:28:30 compute-1 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.cibmfe", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Nov 24 09:28:30 compute-1 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 09:28:30 compute-1 ceph-mon[80009]: Deploying daemon mds.cephfs.compute-0.cibmfe on compute-0
Nov 24 09:28:30 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).mds e4 print_map
                                           e4
                                           btime 2025-11-24T09:28:30.045188+0000
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        4
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2025-11-24T09:28:11.441245+0000
                                           modified        2025-11-24T09:28:30.045062+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           max_mds        1
                                           in        0
                                           up        {0=24181}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        0
                                           qdb_cluster        leader: 0 members: 
                                           [mds.cephfs.compute-2.bbilht{0:24181} state up:creating seq 1 addr [v2:192.168.122.102:6804/3576340281,v1:192.168.122.102:6805/3576340281] compat {c=[1],r=[1],i=[1fff]}]
                                            
                                            
Nov 24 09:28:30 compute-1 ceph-osd[77497]: log_channel(cluster) log [DBG] : 4.1a scrub starts
Nov 24 09:28:30 compute-1 ceph-osd[77497]: log_channel(cluster) log [DBG] : 4.1a scrub ok
Nov 24 09:28:31 compute-1 ceph-mon[80009]: 7.e scrub starts
Nov 24 09:28:31 compute-1 ceph-mon[80009]: 7.e scrub ok
Nov 24 09:28:31 compute-1 ceph-mon[80009]: mds.? [v2:192.168.122.102:6804/3576340281,v1:192.168.122.102:6805/3576340281] up:boot
Nov 24 09:28:31 compute-1 ceph-mon[80009]: daemon mds.cephfs.compute-2.bbilht assigned to filesystem cephfs as rank 0 (now has 1 ranks)
Nov 24 09:28:31 compute-1 ceph-mon[80009]: Health check cleared: MDS_ALL_DOWN (was: 1 filesystem is offline)
Nov 24 09:28:31 compute-1 ceph-mon[80009]: Health check cleared: MDS_UP_LESS_THAN_MAX (was: 1 filesystem is online with fewer MDS than max_mds)
Nov 24 09:28:31 compute-1 ceph-mon[80009]: Cluster is now healthy
Nov 24 09:28:31 compute-1 ceph-mon[80009]: fsmap cephfs:0 1 up:standby
Nov 24 09:28:31 compute-1 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-2.bbilht"}]: dispatch
Nov 24 09:28:31 compute-1 ceph-mon[80009]: fsmap cephfs:1 {0=cephfs.compute-2.bbilht=up:creating}
Nov 24 09:28:31 compute-1 ceph-mon[80009]: daemon mds.cephfs.compute-2.bbilht is now active in filesystem cephfs as rank 0
Nov 24 09:28:31 compute-1 ceph-mon[80009]: 4.1a scrub starts
Nov 24 09:28:31 compute-1 ceph-mon[80009]: 4.1a scrub ok
Nov 24 09:28:31 compute-1 ceph-mon[80009]: pgmap v17: 198 pgs: 198 active+clean; 454 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Nov 24 09:28:31 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).mds e5 new map
Nov 24 09:28:31 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).mds e5 print_map
                                           e5
                                           btime 2025-11-24T09:28:31.054777+0000
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        5
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2025-11-24T09:28:11.441245+0000
                                           modified        2025-11-24T09:28:31.054773+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           max_mds        1
                                           in        0
                                           up        {0=24181}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        0
                                           qdb_cluster        leader: 24181 members: 24181
                                           [mds.cephfs.compute-2.bbilht{0:24181} state up:active seq 2 addr [v2:192.168.122.102:6804/3576340281,v1:192.168.122.102:6805/3576340281] compat {c=[1],r=[1],i=[1fff]}]
                                            
                                            
Nov 24 09:28:31 compute-1 ceph-osd[77497]: log_channel(cluster) log [DBG] : 5.1c scrub starts
Nov 24 09:28:31 compute-1 ceph-osd[77497]: log_channel(cluster) log [DBG] : 5.1c scrub ok
Nov 24 09:28:31 compute-1 sudo[85011]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 09:28:31 compute-1 sudo[85011]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:28:31 compute-1 sudo[85011]: pam_unix(sudo:session): session closed for user root
Nov 24 09:28:31 compute-1 sudo[85036]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 _orch deploy --fsid 84a084c3-61a7-5de7-8207-1f88efa59a64
Nov 24 09:28:31 compute-1 sudo[85036]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
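The sudo pair above is the cephadm orchestration loop at work: the mgr connects to the host as ceph-admin, probes for python3, then runs the host's copied cephadm binary with _orch deploy. A sketch of how that command line is assembled, using the paths exactly as logged (that the daemon spec arrives on stdin is an assumption about cephadm's deploy protocol, so the call itself is left commented out):

    import shutil
    # import subprocess

    python3 = shutil.which("python3")  # same probe as 'sudo /bin/which python3'

    fsid = "84a084c3-61a7-5de7-8207-1f88efa59a64"
    cephadm = (f"/var/lib/ceph/{fsid}/cephadm."
               "1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36")
    image = ("quay.io/ceph/ceph@sha256:"
             "7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec")
    cmd = ["sudo", python3, cephadm, "--image", image,
           "--timeout", "895", "_orch", "deploy", "--fsid", fsid]
    # subprocess.run(cmd, check=True)  # spec is supplied by the mgr, not typed by hand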
Nov 24 09:28:32 compute-1 ceph-mon[80009]: 7.f scrub starts
Nov 24 09:28:32 compute-1 ceph-mon[80009]: 7.f scrub ok
Nov 24 09:28:32 compute-1 ceph-mon[80009]: mds.? [v2:192.168.122.102:6804/3576340281,v1:192.168.122.102:6805/3576340281] up:active
Nov 24 09:28:32 compute-1 ceph-mon[80009]: fsmap cephfs:1 {0=cephfs.compute-2.bbilht=up:active}
Nov 24 09:28:32 compute-1 ceph-mon[80009]: 5.1c scrub starts
Nov 24 09:28:32 compute-1 ceph-mon[80009]: 5.1c scrub ok
Nov 24 09:28:32 compute-1 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:28:32 compute-1 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:28:32 compute-1 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:28:32 compute-1 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-1.vpamdk", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Nov 24 09:28:32 compute-1 ceph-mon[80009]: from='client.? 192.168.122.100:0/2235972342' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Nov 24 09:28:32 compute-1 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-1.vpamdk", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Nov 24 09:28:32 compute-1 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 09:28:32 compute-1 ceph-mon[80009]: Deploying daemon mds.cephfs.compute-1.vpamdk on compute-1
Nov 24 09:28:32 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).mds e6 new map
Nov 24 09:28:32 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).mds e6 print_map
                                           e6
                                           btime 2025-11-24T09:28:32.078769+0000
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        5
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2025-11-24T09:28:11.441245+0000
                                           modified        2025-11-24T09:28:31.054773+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           max_mds        1
                                           in        0
                                           up        {0=24181}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        0
                                           qdb_cluster        leader: 24181 members: 24181
                                           [mds.cephfs.compute-2.bbilht{0:24181} state up:active seq 2 addr [v2:192.168.122.102:6804/3576340281,v1:192.168.122.102:6805/3576340281] compat {c=[1],r=[1],i=[1fff]}]
                                            
                                            
                                           Standby daemons:
                                            
                                           [mds.cephfs.compute-0.cibmfe{-1:14586} state up:standby seq 1 addr [v2:192.168.122.100:6806/3605740467,v1:192.168.122.100:6807/3605740467] compat {c=[1],r=[1],i=[1fff]}]
Nov 24 09:28:32 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).mds e7 new map
Nov 24 09:28:32 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).mds e7 print_map
                                           e7
                                           btime 2025-11-24T09:28:32.111568+0000
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        5
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2025-11-24T09:28:11.441245+0000
                                           modified        2025-11-24T09:28:31.054773+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           max_mds        1
                                           in        0
                                           up        {0=24181}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        1
                                           qdb_cluster        leader: 24181 members: 24181
                                           [mds.cephfs.compute-2.bbilht{0:24181} state up:active seq 2 addr [v2:192.168.122.102:6804/3576340281,v1:192.168.122.102:6805/3576340281] compat {c=[1],r=[1],i=[1fff]}]
                                            
                                            
                                           Standby daemons:
                                            
                                           [mds.cephfs.compute-0.cibmfe{-1:14586} state up:standby seq 1 addr [v2:192.168.122.100:6806/3605740467,v1:192.168.122.100:6807/3605740467] compat {c=[1],r=[1],i=[1fff]}]
Nov 24 09:28:32 compute-1 podman[85101]: 2025-11-24 09:28:32.126176392 +0000 UTC m=+0.038624370 container create edb64c9b493d914c07b76f263b5b86c8eb6bbeb9353060e7eefa042a0fe671c6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_mendeleev, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, ceph=True, org.label-schema.vendor=CentOS)
Nov 24 09:28:32 compute-1 systemd[1]: Started libpod-conmon-edb64c9b493d914c07b76f263b5b86c8eb6bbeb9353060e7eefa042a0fe671c6.scope.
Nov 24 09:28:32 compute-1 ceph-osd[77497]: log_channel(cluster) log [DBG] : 5.1b scrub starts
Nov 24 09:28:32 compute-1 ceph-osd[77497]: log_channel(cluster) log [DBG] : 5.1b scrub ok
Nov 24 09:28:32 compute-1 systemd[1]: Started libcrun container.
Nov 24 09:28:32 compute-1 podman[85101]: 2025-11-24 09:28:32.109084123 +0000 UTC m=+0.021532121 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 09:28:32 compute-1 podman[85101]: 2025-11-24 09:28:32.209598777 +0000 UTC m=+0.122046775 container init edb64c9b493d914c07b76f263b5b86c8eb6bbeb9353060e7eefa042a0fe671c6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_mendeleev, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 09:28:32 compute-1 podman[85101]: 2025-11-24 09:28:32.220055659 +0000 UTC m=+0.132503637 container start edb64c9b493d914c07b76f263b5b86c8eb6bbeb9353060e7eefa042a0fe671c6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_mendeleev, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid)
Nov 24 09:28:32 compute-1 podman[85101]: 2025-11-24 09:28:32.2232452 +0000 UTC m=+0.135693218 container attach edb64c9b493d914c07b76f263b5b86c8eb6bbeb9353060e7eefa042a0fe671c6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_mendeleev, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 24 09:28:32 compute-1 loving_mendeleev[85117]: 167 167
Nov 24 09:28:32 compute-1 systemd[1]: libpod-edb64c9b493d914c07b76f263b5b86c8eb6bbeb9353060e7eefa042a0fe671c6.scope: Deactivated successfully.
Nov 24 09:28:32 compute-1 podman[85101]: 2025-11-24 09:28:32.22603067 +0000 UTC m=+0.138478648 container died edb64c9b493d914c07b76f263b5b86c8eb6bbeb9353060e7eefa042a0fe671c6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_mendeleev, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 24 09:28:32 compute-1 systemd[1]: var-lib-containers-storage-overlay-18da3e36b28181836d4abc74ac5e3847093744d75c5fd85e00eae76586960327-merged.mount: Deactivated successfully.
Nov 24 09:28:32 compute-1 podman[85101]: 2025-11-24 09:28:32.267167032 +0000 UTC m=+0.179615010 container remove edb64c9b493d914c07b76f263b5b86c8eb6bbeb9353060e7eefa042a0fe671c6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_mendeleev, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325)
Nov 24 09:28:32 compute-1 systemd[1]: libpod-conmon-edb64c9b493d914c07b76f263b5b86c8eb6bbeb9353060e7eefa042a0fe671c6.scope: Deactivated successfully.
Nov 24 09:28:32 compute-1 systemd[1]: Reloading.
Nov 24 09:28:32 compute-1 systemd-rc-local-generator[85158]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 09:28:32 compute-1 systemd-sysv-generator[85162]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 09:28:32 compute-1 systemd[1]: Reloading.
Nov 24 09:28:32 compute-1 systemd-rc-local-generator[85200]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 09:28:32 compute-1 systemd-sysv-generator[85204]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 09:28:32 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e51 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 09:28:32 compute-1 systemd[1]: Starting Ceph mds.cephfs.compute-1.vpamdk for 84a084c3-61a7-5de7-8207-1f88efa59a64...
Nov 24 09:28:33 compute-1 podman[85257]: 2025-11-24 09:28:33.014485116 +0000 UTC m=+0.038831886 container create 892cb33fd9be2104c947a5c0e68b0dfb62e05fd0cb4570a0c2b4964c9a90a80a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mds-cephfs-compute-1-vpamdk, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 09:28:33 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b26d9fb83e958cfca23c4285193430c8ea587492f6f39249914e79539341c402/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 09:28:33 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b26d9fb83e958cfca23c4285193430c8ea587492f6f39249914e79539341c402/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 09:28:33 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b26d9fb83e958cfca23c4285193430c8ea587492f6f39249914e79539341c402/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 09:28:33 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b26d9fb83e958cfca23c4285193430c8ea587492f6f39249914e79539341c402/merged/var/lib/ceph/mds/ceph-cephfs.compute-1.vpamdk supports timestamps until 2038 (0x7fffffff)
Nov 24 09:28:33 compute-1 podman[85257]: 2025-11-24 09:28:33.063415644 +0000 UTC m=+0.087762434 container init 892cb33fd9be2104c947a5c0e68b0dfb62e05fd0cb4570a0c2b4964c9a90a80a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mds-cephfs-compute-1-vpamdk, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325)
Nov 24 09:28:33 compute-1 podman[85257]: 2025-11-24 09:28:33.069088397 +0000 UTC m=+0.093435167 container start 892cb33fd9be2104c947a5c0e68b0dfb62e05fd0cb4570a0c2b4964c9a90a80a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mds-cephfs-compute-1-vpamdk, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 09:28:33 compute-1 bash[85257]: 892cb33fd9be2104c947a5c0e68b0dfb62e05fd0cb4570a0c2b4964c9a90a80a
Nov 24 09:28:33 compute-1 podman[85257]: 2025-11-24 09:28:32.998181096 +0000 UTC m=+0.022527896 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 09:28:33 compute-1 systemd[1]: Started Ceph mds.cephfs.compute-1.vpamdk for 84a084c3-61a7-5de7-8207-1f88efa59a64.
Nov 24 09:28:33 compute-1 ceph-mds[85277]: set uid:gid to 167:167 (ceph:ceph)
Nov 24 09:28:33 compute-1 ceph-mds[85277]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mds, pid 2
Nov 24 09:28:33 compute-1 ceph-mds[85277]: main not setting numa affinity
Nov 24 09:28:33 compute-1 ceph-mds[85277]: pidfile_write: ignore empty --pid-file
Nov 24 09:28:33 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mds-cephfs-compute-1-vpamdk[85273]: starting mds.cephfs.compute-1.vpamdk at 
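The MDS container itself starts cleanly here: cephadm wraps each daemon in a systemd unit named after the cluster fsid, podman creates the container, and ceph-mds drops to uid/gid 167 (ceph:ceph) before joining the MDS map. A sketch of checking that unit, assuming cephadm's usual ceph-<fsid>@<type>.<id>.service naming, which matches the 'Started Ceph mds...' line above:

    import subprocess

    fsid = "84a084c3-61a7-5de7-8207-1f88efa59a64"
    unit = f"ceph-{fsid}@mds.cephfs.compute-1.vpamdk.service"

    # Reports 'active' once systemd has started the podman-wrapped daemon.
    subprocess.run(["systemctl", "is-active", unit], check=False)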
Nov 24 09:28:33 compute-1 ceph-mon[80009]: 7.8 scrub starts
Nov 24 09:28:33 compute-1 ceph-mon[80009]: 7.8 scrub ok
Nov 24 09:28:33 compute-1 ceph-mon[80009]: mds.? [v2:192.168.122.100:6806/3605740467,v1:192.168.122.100:6807/3605740467] up:boot
Nov 24 09:28:33 compute-1 ceph-mon[80009]: fsmap cephfs:1 {0=cephfs.compute-2.bbilht=up:active} 1 up:standby
Nov 24 09:28:33 compute-1 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-0.cibmfe"}]: dispatch
Nov 24 09:28:33 compute-1 ceph-mon[80009]: fsmap cephfs:1 {0=cephfs.compute-2.bbilht=up:active} 1 up:standby
Nov 24 09:28:33 compute-1 ceph-mon[80009]: 5.1b scrub starts
Nov 24 09:28:33 compute-1 ceph-mon[80009]: 5.1b scrub ok
Nov 24 09:28:33 compute-1 ceph-mon[80009]: pgmap v18: 198 pgs: 198 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s wr, 3 op/s
Nov 24 09:28:33 compute-1 ceph-mds[85277]: mds.cephfs.compute-1.vpamdk Updating MDS map to version 7 from mon.2
Nov 24 09:28:33 compute-1 sudo[85036]: pam_unix(sudo:session): session closed for user root
Nov 24 09:28:33 compute-1 ceph-osd[77497]: log_channel(cluster) log [DBG] : 5.2 scrub starts
Nov 24 09:28:33 compute-1 ceph-osd[77497]: log_channel(cluster) log [DBG] : 5.2 scrub ok
Nov 24 09:28:33 compute-1 sudo[85296]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 09:28:33 compute-1 sudo[85296]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:28:33 compute-1 sudo[85296]: pam_unix(sudo:session): session closed for user root
Nov 24 09:28:33 compute-1 sudo[85321]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 _orch deploy --fsid 84a084c3-61a7-5de7-8207-1f88efa59a64
Nov 24 09:28:33 compute-1 sudo[85321]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:28:34 compute-1 podman[85384]: 2025-11-24 09:28:34.07326317 +0000 UTC m=+0.066130021 container create 5911d1847a693fd6e7f537532b57040f83080e3143b0732b840898e6f8483089 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_allen, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325)
Nov 24 09:28:34 compute-1 systemd[1]: Started libpod-conmon-5911d1847a693fd6e7f537532b57040f83080e3143b0732b840898e6f8483089.scope.
Nov 24 09:28:34 compute-1 ceph-mon[80009]: 7.b scrub starts
Nov 24 09:28:34 compute-1 ceph-mon[80009]: 7.b scrub ok
Nov 24 09:28:34 compute-1 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:28:34 compute-1 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:28:34 compute-1 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:28:34 compute-1 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:28:34 compute-1 ceph-mon[80009]: 5.2 scrub starts
Nov 24 09:28:34 compute-1 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:28:34 compute-1 ceph-mon[80009]: 5.2 scrub ok
Nov 24 09:28:34 compute-1 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:28:34 compute-1 ceph-mon[80009]: Creating key for client.nfs.cephfs.0.0.compute-1.vvoanr
Nov 24 09:28:34 compute-1 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.vvoanr", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]: dispatch
Nov 24 09:28:34 compute-1 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.vvoanr", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]': finished
Nov 24 09:28:34 compute-1 ceph-mon[80009]: Ensuring nfs.cephfs.0 is in the ganesha grace table
Nov 24 09:28:34 compute-1 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]: dispatch
Nov 24 09:28:34 compute-1 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]': finished
Nov 24 09:28:34 compute-1 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 09:28:34 compute-1 ceph-mon[80009]: from='client.? 192.168.122.100:0/1660329950' entity='client.admin' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Nov 24 09:28:34 compute-1 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]: dispatch
Nov 24 09:28:34 compute-1 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]': finished
Nov 24 09:28:34 compute-1 ceph-mon[80009]: Rados config object exists: conf-nfs.cephfs
Nov 24 09:28:34 compute-1 ceph-mon[80009]: Creating key for client.nfs.cephfs.0.0.compute-1.vvoanr-rgw
Nov 24 09:28:34 compute-1 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.vvoanr-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Nov 24 09:28:34 compute-1 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.vvoanr-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]': finished
Nov 24 09:28:34 compute-1 ceph-mon[80009]: Bind address in nfs.cephfs.0.0.compute-1.vvoanr's ganesha conf is defaulting to empty
Nov 24 09:28:34 compute-1 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 09:28:34 compute-1 ceph-mon[80009]: Deploying daemon nfs.cephfs.0.0.compute-1.vvoanr on compute-1
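The block above provisions NFS-Ganesha: cephadm creates one client key scoped to the .nfs pool's cephfs namespace, a second rgw-tagged key for exports backed by RGW, confirms the shared conf-nfs.cephfs rados config object, and registers the daemon in the ganesha grace table before deploying it. A sketch of listing those rados objects, assuming the pool and namespace named in the key caps above:

    import subprocess

    # Lists objects such as conf-nfs.cephfs (the "Rados config object exists"
    # line) plus per-daemon ganesha config and grace objects.
    subprocess.run(
        ["rados", "-p", ".nfs", "--namespace", "cephfs", "ls"],
        check=True,
    )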
Nov 24 09:28:34 compute-1 ceph-mon[80009]: 7.9 deep-scrub starts
Nov 24 09:28:34 compute-1 ceph-mon[80009]: 7.9 deep-scrub ok
Nov 24 09:28:34 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).mds e8 new map
Nov 24 09:28:34 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).mds e8 print_map
                                           e8
                                           btime 2025-11-24T09:28:34.108975+0000
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        8
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2025-11-24T09:28:11.441245+0000
                                           modified        2025-11-24T09:28:34.078187+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           max_mds        1
                                           in        0
                                           up        {0=24181}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        1
                                           qdb_cluster        leader: 24181 members: 24181
                                           [mds.cephfs.compute-2.bbilht{0:24181} state up:active seq 3 join_fscid=1 addr [v2:192.168.122.102:6804/3576340281,v1:192.168.122.102:6805/3576340281] compat {c=[1],r=[1],i=[1fff]}]
                                            
                                            
                                           Standby daemons:
                                            
                                           [mds.cephfs.compute-0.cibmfe{-1:14586} state up:standby seq 1 addr [v2:192.168.122.100:6806/3605740467,v1:192.168.122.100:6807/3605740467] compat {c=[1],r=[1],i=[1fff]}]
                                           [mds.cephfs.compute-1.vpamdk{-1:24302} state up:standby seq 1 addr [v2:192.168.122.101:6804/2884660857,v1:192.168.122.101:6805/2884660857] compat {c=[1],r=[1],i=[1fff]}]
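The indented block above is the monitor's fsmap dump for epoch 8: one active MDS (cephfs.compute-2.bbilht holding rank 0) and two standbys. The same map can be requested on demand from any node with admin credentials; exact output columns vary by release:

    ceph fs dump           # full fsmap, same fields as print_map
    ceph fs status cephfs  # condensed per-rank view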
Nov 24 09:28:34 compute-1 ceph-mds[85277]: mds.cephfs.compute-1.vpamdk Updating MDS map to version 8 from mon.2
Nov 24 09:28:34 compute-1 ceph-mds[85277]: mds.cephfs.compute-1.vpamdk Monitors have assigned me to become a standby
Nov 24 09:28:34 compute-1 podman[85384]: 2025-11-24 09:28:34.046913748 +0000 UTC m=+0.039780689 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 09:28:34 compute-1 systemd[1]: Started libcrun container.
Nov 24 09:28:34 compute-1 podman[85384]: 2025-11-24 09:28:34.165526586 +0000 UTC m=+0.158393467 container init 5911d1847a693fd6e7f537532b57040f83080e3143b0732b840898e6f8483089 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_allen, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=squid, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, OSD_FLAVOR=default)
Nov 24 09:28:34 compute-1 podman[85384]: 2025-11-24 09:28:34.175019775 +0000 UTC m=+0.167886626 container start 5911d1847a693fd6e7f537532b57040f83080e3143b0732b840898e6f8483089 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_allen, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 09:28:34 compute-1 podman[85384]: 2025-11-24 09:28:34.179461926 +0000 UTC m=+0.172328787 container attach 5911d1847a693fd6e7f537532b57040f83080e3143b0732b840898e6f8483089 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_allen, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 24 09:28:34 compute-1 xenodochial_allen[85400]: 167 167
Nov 24 09:28:34 compute-1 systemd[1]: libpod-5911d1847a693fd6e7f537532b57040f83080e3143b0732b840898e6f8483089.scope: Deactivated successfully.
Nov 24 09:28:34 compute-1 podman[85384]: 2025-11-24 09:28:34.180681667 +0000 UTC m=+0.173548508 container died 5911d1847a693fd6e7f537532b57040f83080e3143b0732b840898e6f8483089 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_allen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=squid, org.label-schema.build-date=20250325, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 24 09:28:34 compute-1 systemd[1]: var-lib-containers-storage-overlay-98b261074ecf485173d59f0f0afafbfc1bde5fdc28831bffa7bdd8264a910678-merged.mount: Deactivated successfully.
Nov 24 09:28:34 compute-1 podman[85384]: 2025-11-24 09:28:34.212124806 +0000 UTC m=+0.204991647 container remove 5911d1847a693fd6e7f537532b57040f83080e3143b0732b840898e6f8483089 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_allen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250325)
Nov 24 09:28:34 compute-1 systemd[1]: libpod-conmon-5911d1847a693fd6e7f537532b57040f83080e3143b0732b840898e6f8483089.scope: Deactivated successfully.
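This create/start/died/remove burst inside a single second is not a crash: before deploying a daemon, cephadm runs a throw-away container from the target image to discover the UID/GID that should own the daemon's files; the "167 167" printed by xenodochial_allen above is the ceph user and group inside the image. A rough by-hand equivalent (the exact probe command is an assumption, not taken from this log):

    podman run --rm quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec \
        stat -c '%u %g' /var/lib/ceph

The same probe recurs further down for quay.io/ceph/haproxy:2.3, where it prints "0 0".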
Nov 24 09:28:34 compute-1 ceph-osd[77497]: log_channel(cluster) log [DBG] : 6.a scrub starts
Nov 24 09:28:34 compute-1 ceph-osd[77497]: log_channel(cluster) log [DBG] : 6.a scrub ok
Nov 24 09:28:34 compute-1 systemd[1]: Reloading.
Nov 24 09:28:34 compute-1 systemd-rc-local-generator[85448]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 09:28:34 compute-1 systemd-sysv-generator[85451]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 09:28:34 compute-1 systemd[1]: Reloading.
Nov 24 09:28:34 compute-1 systemd-rc-local-generator[85488]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 09:28:34 compute-1 systemd-sysv-generator[85491]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 09:28:34 compute-1 systemd[1]: Starting Ceph nfs.cephfs.0.0.compute-1.vvoanr for 84a084c3-61a7-5de7-8207-1f88efa59a64...
Nov 24 09:28:34 compute-1 podman[85540]: 2025-11-24 09:28:34.994475979 +0000 UTC m=+0.036478287 container create 9e9cdce3dd47e25abe641cb56f8d264ce6aac97a311540e0d883cb63fbd43f66 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 09:28:35 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e103a9c85ba1f18b9876ac5fd84e521bcc04dd1a1bcd486937b84d56161b6ec/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Nov 24 09:28:35 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e103a9c85ba1f18b9876ac5fd84e521bcc04dd1a1bcd486937b84d56161b6ec/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 09:28:35 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e103a9c85ba1f18b9876ac5fd84e521bcc04dd1a1bcd486937b84d56161b6ec/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 09:28:35 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e103a9c85ba1f18b9876ac5fd84e521bcc04dd1a1bcd486937b84d56161b6ec/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.0.0.compute-1.vvoanr-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 09:28:35 compute-1 podman[85540]: 2025-11-24 09:28:35.037742286 +0000 UTC m=+0.079744604 container init 9e9cdce3dd47e25abe641cb56f8d264ce6aac97a311540e0d883cb63fbd43f66 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr, CEPH_REF=squid, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 24 09:28:35 compute-1 podman[85540]: 2025-11-24 09:28:35.043861239 +0000 UTC m=+0.085863547 container start 9e9cdce3dd47e25abe641cb56f8d264ce6aac97a311540e0d883cb63fbd43f66 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS)
Nov 24 09:28:35 compute-1 bash[85540]: 9e9cdce3dd47e25abe641cb56f8d264ce6aac97a311540e0d883cb63fbd43f66
Nov 24 09:28:35 compute-1 podman[85540]: 2025-11-24 09:28:34.979896442 +0000 UTC m=+0.021898770 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 09:28:35 compute-1 systemd[1]: Started Ceph nfs.cephfs.0.0.compute-1.vvoanr for 84a084c3-61a7-5de7-8207-1f88efa59a64.
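cephadm wraps every daemon in a templated systemd unit named ceph-<fsid>@<daemon-id>.service, which in turn runs the podman container created above. To inspect this instance later (unit name assembled from the fsid and daemon id in this log):

    systemctl status 'ceph-84a084c3-61a7-5de7-8207-1f88efa59a64@nfs.cephfs.0.0.compute-1.vvoanr.service'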
Nov 24 09:28:35 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[85556]: 24/11/2025 09:28:35 : epoch 69242543 : compute-1 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Nov 24 09:28:35 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[85556]: 24/11/2025 09:28:35 : epoch 69242543 : compute-1 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Nov 24 09:28:35 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[85556]: 24/11/2025 09:28:35 : epoch 69242543 : compute-1 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Nov 24 09:28:35 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[85556]: 24/11/2025 09:28:35 : epoch 69242543 : compute-1 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Nov 24 09:28:35 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[85556]: 24/11/2025 09:28:35 : epoch 69242543 : compute-1 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Nov 24 09:28:35 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[85556]: 24/11/2025 09:28:35 : epoch 69242543 : compute-1 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Nov 24 09:28:35 compute-1 sudo[85321]: pam_unix(sudo:session): session closed for user root
Nov 24 09:28:35 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[85556]: 24/11/2025 09:28:35 : epoch 69242543 : compute-1 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Nov 24 09:28:35 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[85556]: 24/11/2025 09:28:35 : epoch 69242543 : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 09:28:35 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[85556]: 24/11/2025 09:28:35 : epoch 69242543 : compute-1 : ganesha.nfsd-2[main] rados_kv_traverse :CLIENT ID :EVENT :Failed to lst kv ret=-2
Nov 24 09:28:35 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[85556]: 24/11/2025 09:28:35 : epoch 69242543 : compute-1 : ganesha.nfsd-2[main] rados_cluster_read_clids :CLIENT ID :EVENT :Failed to traverse recovery db: -2
Nov 24 09:28:35 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[85556]: 24/11/2025 09:28:35 : epoch 69242543 : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 09:28:35 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[85556]: 24/11/2025 09:28:35 : epoch 69242543 : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
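The two -2 (ENOENT) failures above are expected on first boot: the rados_cluster recovery backend keeps its client-recovery records as omap keys on objects in the .nfs pool, and nothing has been written yet, so the initial traversal of the recovery db finds no object. Once populated, the shared grace database can be inspected with the ganesha-rados-grace tool (a sketch; pool and namespace are taken from the caps in this log):

    ganesha-rados-grace --pool .nfs --ns cephfs dump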
Nov 24 09:28:35 compute-1 ceph-mon[80009]: mds.? [v2:192.168.122.101:6804/2884660857,v1:192.168.122.101:6805/2884660857] up:boot
Nov 24 09:28:35 compute-1 ceph-mon[80009]: mds.? [v2:192.168.122.102:6804/3576340281,v1:192.168.122.102:6805/3576340281] up:active
Nov 24 09:28:35 compute-1 ceph-mon[80009]: fsmap cephfs:1 {0=cephfs.compute-2.bbilht=up:active} 2 up:standby
Nov 24 09:28:35 compute-1 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-1.vpamdk"}]: dispatch
Nov 24 09:28:35 compute-1 ceph-mon[80009]: 6.a scrub starts
Nov 24 09:28:35 compute-1 ceph-mon[80009]: 6.a scrub ok
Nov 24 09:28:35 compute-1 ceph-mon[80009]: pgmap v19: 198 pgs: 198 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s wr, 3 op/s
Nov 24 09:28:35 compute-1 ceph-mon[80009]: 7.10 scrub starts
Nov 24 09:28:35 compute-1 ceph-mon[80009]: 7.10 scrub ok
Nov 24 09:28:35 compute-1 ceph-mon[80009]: from='client.? 192.168.122.100:0/3847201331' entity='client.admin' cmd=[{"prefix": "osd get-require-min-compat-client"}]: dispatch
Nov 24 09:28:35 compute-1 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:28:35 compute-1 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:28:35 compute-1 ceph-osd[77497]: log_channel(cluster) log [DBG] : 6.8 scrub starts
Nov 24 09:28:35 compute-1 ceph-osd[77497]: log_channel(cluster) log [DBG] : 6.8 scrub ok
Nov 24 09:28:36 compute-1 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:28:36 compute-1 ceph-mon[80009]: Creating key for client.nfs.cephfs.1.0.compute-2.gkqxhl
Nov 24 09:28:36 compute-1 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.gkqxhl", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]: dispatch
Nov 24 09:28:36 compute-1 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.gkqxhl", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]': finished
Nov 24 09:28:36 compute-1 ceph-mon[80009]: Ensuring nfs.cephfs.1 is in the ganesha grace table
Nov 24 09:28:36 compute-1 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]: dispatch
Nov 24 09:28:36 compute-1 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]': finished
Nov 24 09:28:36 compute-1 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 09:28:36 compute-1 ceph-mon[80009]: 6.8 scrub starts
Nov 24 09:28:36 compute-1 ceph-mon[80009]: 6.8 scrub ok
Nov 24 09:28:36 compute-1 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:28:36 compute-1 ceph-osd[77497]: log_channel(cluster) log [DBG] : 6.7 scrub starts
Nov 24 09:28:36 compute-1 ceph-osd[77497]: log_channel(cluster) log [DBG] : 6.7 scrub ok
Nov 24 09:28:36 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).mds e9 new map
Nov 24 09:28:36 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).mds e9 print_map
                                           e9
                                           btime 2025-11-24T09:28:36.500458+0000
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        8
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2025-11-24T09:28:11.441245+0000
                                           modified        2025-11-24T09:28:34.078187+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           max_mds        1
                                           in        0
                                           up        {0=24181}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        1
                                           qdb_cluster        leader: 24181 members: 24181
                                           [mds.cephfs.compute-2.bbilht{0:24181} state up:active seq 3 join_fscid=1 addr [v2:192.168.122.102:6804/3576340281,v1:192.168.122.102:6805/3576340281] compat {c=[1],r=[1],i=[1fff]}]
                                            
                                            
                                           Standby daemons:
                                            
                                           [mds.cephfs.compute-0.cibmfe{-1:14586} state up:standby seq 2 join_fscid=1 addr [v2:192.168.122.100:6806/3605740467,v1:192.168.122.100:6807/3605740467] compat {c=[1],r=[1],i=[1fff]}]
                                           [mds.cephfs.compute-1.vpamdk{-1:24302} state up:standby seq 1 addr [v2:192.168.122.101:6804/2884660857,v1:192.168.122.101:6805/2884660857] compat {c=[1],r=[1],i=[1fff]}]
Nov 24 09:28:37 compute-1 ceph-osd[77497]: log_channel(cluster) log [DBG] : 6.5 scrub starts
Nov 24 09:28:37 compute-1 ceph-osd[77497]: log_channel(cluster) log [DBG] : 6.5 scrub ok
Nov 24 09:28:37 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).mds e10 new map
Nov 24 09:28:37 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).mds e10 print_map
                                           e10
                                           btime 2025-11-24T09:28:37.509059+0000
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        8
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2025-11-24T09:28:11.441245+0000
                                           modified        2025-11-24T09:28:34.078187+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           max_mds        1
                                           in        0
                                           up        {0=24181}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        1
                                           qdb_cluster        leader: 24181 members: 24181
                                           [mds.cephfs.compute-2.bbilht{0:24181} state up:active seq 3 join_fscid=1 addr [v2:192.168.122.102:6804/3576340281,v1:192.168.122.102:6805/3576340281] compat {c=[1],r=[1],i=[1fff]}]
                                            
                                            
                                           Standby daemons:
                                            
                                           [mds.cephfs.compute-0.cibmfe{-1:14586} state up:standby seq 2 join_fscid=1 addr [v2:192.168.122.100:6806/3605740467,v1:192.168.122.100:6807/3605740467] compat {c=[1],r=[1],i=[1fff]}]
                                           [mds.cephfs.compute-1.vpamdk{-1:24302} state up:standby seq 2 join_fscid=1 addr [v2:192.168.122.101:6804/2884660857,v1:192.168.122.101:6805/2884660857] compat {c=[1],r=[1],i=[1fff]}]
Nov 24 09:28:37 compute-1 ceph-mds[85277]: mds.cephfs.compute-1.vpamdk Updating MDS map to version 10 from mon.2
Nov 24 09:28:37 compute-1 ceph-mon[80009]: 6.7 scrub starts
Nov 24 09:28:37 compute-1 ceph-mon[80009]: 6.7 scrub ok
Nov 24 09:28:37 compute-1 ceph-mon[80009]: pgmap v20: 198 pgs: 198 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s wr, 3 op/s
Nov 24 09:28:37 compute-1 ceph-mon[80009]: mds.? [v2:192.168.122.100:6806/3605740467,v1:192.168.122.100:6807/3605740467] up:standby
Nov 24 09:28:37 compute-1 ceph-mon[80009]: fsmap cephfs:1 {0=cephfs.compute-2.bbilht=up:active} 2 up:standby
Nov 24 09:28:37 compute-1 ceph-mon[80009]: from='client.? 192.168.122.100:0/2906138256' entity='client.admin' cmd=[{"prefix": "versions", "format": "json"}]: dispatch
Nov 24 09:28:37 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e51 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 09:28:38 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[85556]: 24/11/2025 09:28:38 : epoch 69242543 : compute-1 : ganesha.nfsd-2[main] rados_cluster_end_grace :CLIENT ID :EVENT :Failed to remove rec-0000000000000001:nfs.cephfs.0: -2
Nov 24 09:28:38 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[85556]: 24/11/2025 09:28:38 : epoch 69242543 : compute-1 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Nov 24 09:28:38 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[85556]: 24/11/2025 09:28:38 : epoch 69242543 : compute-1 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Nov 24 09:28:38 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[85556]: 24/11/2025 09:28:38 : epoch 69242543 : compute-1 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Nov 24 09:28:38 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[85556]: 24/11/2025 09:28:38 : epoch 69242543 : compute-1 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
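The two WARNs above refer to blocks in the cephadm-generated /etc/ganesha/ganesha.conf that this particular build does not recognize; startup continues regardless. For orientation, a trimmed sketch of the usual shape of that file under cephadm (the UserId, nodeid, and rados URL are taken from this log; the rest is an assumption about the template, not a verbatim copy):

    RADOS_KV {
        UserId = "nfs.cephfs.0.0.compute-1.vvoanr";
        pool = ".nfs";
        namespace = "cephfs";
        nodeid = "nfs.cephfs.0";
    }
    RADOS_URLS {
        UserId = "nfs.cephfs.0.0.compute-1.vvoanr";
        watch_url = "rados://.nfs/cephfs/conf-nfs.cephfs";
    }
    %url rados://.nfs/cephfs/conf-nfs.cephfs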
Nov 24 09:28:38 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[85556]: 24/11/2025 09:28:38 : epoch 69242543 : compute-1 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Nov 24 09:28:38 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[85556]: 24/11/2025 09:28:38 : epoch 69242543 : compute-1 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Nov 24 09:28:38 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[85556]: 24/11/2025 09:28:38 : epoch 69242543 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Nov 24 09:28:38 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[85556]: 24/11/2025 09:28:38 : epoch 69242543 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 24 09:28:38 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[85556]: 24/11/2025 09:28:38 : epoch 69242543 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 24 09:28:38 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[85556]: 24/11/2025 09:28:38 : epoch 69242543 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 24 09:28:38 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[85556]: 24/11/2025 09:28:38 : epoch 69242543 : compute-1 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Nov 24 09:28:38 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[85556]: 24/11/2025 09:28:38 : epoch 69242543 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 24 09:28:38 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[85556]: 24/11/2025 09:28:38 : epoch 69242543 : compute-1 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Nov 24 09:28:38 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[85556]: 24/11/2025 09:28:38 : epoch 69242543 : compute-1 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Nov 24 09:28:38 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[85556]: 24/11/2025 09:28:38 : epoch 69242543 : compute-1 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Nov 24 09:28:38 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[85556]: 24/11/2025 09:28:38 : epoch 69242543 : compute-1 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Nov 24 09:28:38 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[85556]: 24/11/2025 09:28:38 : epoch 69242543 : compute-1 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Nov 24 09:28:38 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[85556]: 24/11/2025 09:28:38 : epoch 69242543 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Nov 24 09:28:38 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[85556]: 24/11/2025 09:28:38 : epoch 69242543 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Nov 24 09:28:38 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[85556]: 24/11/2025 09:28:38 : epoch 69242543 : compute-1 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Nov 24 09:28:38 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[85556]: 24/11/2025 09:28:38 : epoch 69242543 : compute-1 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Nov 24 09:28:38 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[85556]: 24/11/2025 09:28:38 : epoch 69242543 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Nov 24 09:28:38 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[85556]: 24/11/2025 09:28:38 : epoch 69242543 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Nov 24 09:28:38 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[85556]: 24/11/2025 09:28:38 : epoch 69242543 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Nov 24 09:28:38 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[85556]: 24/11/2025 09:28:38 : epoch 69242543 : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Nov 24 09:28:38 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[85556]: 24/11/2025 09:28:38 : epoch 69242543 : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Nov 24 09:28:38 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[85556]: 24/11/2025 09:28:38 : epoch 69242543 : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
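The earlier "No export entries found" WARN is also normal at this stage: the daemon starts empty and picks up exports from the watched rados config object once they are created through the mgr, e.g. (flag spelling varies slightly across releases):

    ceph nfs export create cephfs --cluster-id cephfs --pseudo-path /cephfs --fsname cephfs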
Nov 24 09:28:38 compute-1 ceph-osd[77497]: log_channel(cluster) log [DBG] : 6.2 scrub starts
Nov 24 09:28:38 compute-1 ceph-osd[77497]: log_channel(cluster) log [DBG] : 6.2 scrub ok
Nov 24 09:28:38 compute-1 ceph-mon[80009]: 6.5 scrub starts
Nov 24 09:28:38 compute-1 ceph-mon[80009]: 6.5 scrub ok
Nov 24 09:28:38 compute-1 ceph-mon[80009]: mds.? [v2:192.168.122.101:6804/2884660857,v1:192.168.122.101:6805/2884660857] up:standby
Nov 24 09:28:38 compute-1 ceph-mon[80009]: fsmap cephfs:1 {0=cephfs.compute-2.bbilht=up:active} 2 up:standby
Nov 24 09:28:38 compute-1 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]: dispatch
Nov 24 09:28:38 compute-1 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]': finished
Nov 24 09:28:38 compute-1 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.gkqxhl-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Nov 24 09:28:38 compute-1 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.gkqxhl-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]': finished
Nov 24 09:28:38 compute-1 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 09:28:39 compute-1 ceph-osd[77497]: log_channel(cluster) log [DBG] : 6.3 scrub starts
Nov 24 09:28:39 compute-1 ceph-osd[77497]: log_channel(cluster) log [DBG] : 6.3 scrub ok
Nov 24 09:28:39 compute-1 ceph-mon[80009]: 6.2 scrub starts
Nov 24 09:28:39 compute-1 ceph-mon[80009]: Rados config object exists: conf-nfs.cephfs
Nov 24 09:28:39 compute-1 ceph-mon[80009]: Creating key for client.nfs.cephfs.1.0.compute-2.gkqxhl-rgw
Nov 24 09:28:39 compute-1 ceph-mon[80009]: 6.2 scrub ok
Nov 24 09:28:39 compute-1 ceph-mon[80009]: Bind address in nfs.cephfs.1.0.compute-2.gkqxhl's ganesha conf is defaulting to empty
Nov 24 09:28:39 compute-1 ceph-mon[80009]: Deploying daemon nfs.cephfs.1.0.compute-2.gkqxhl on compute-2
Nov 24 09:28:39 compute-1 ceph-mon[80009]: pgmap v21: 198 pgs: 198 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 1.9 KiB/s wr, 5 op/s
Nov 24 09:28:40 compute-1 ceph-osd[77497]: log_channel(cluster) log [DBG] : 6.d scrub starts
Nov 24 09:28:40 compute-1 ceph-osd[77497]: log_channel(cluster) log [DBG] : 6.d scrub ok
Nov 24 09:28:40 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[85556]: 24/11/2025 09:28:40 : epoch 69242543 : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 09:28:40 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[85556]: 24/11/2025 09:28:40 : epoch 69242543 : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 09:28:40 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[85556]: 24/11/2025 09:28:40 : epoch 69242543 : compute-1 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 09:28:40 compute-1 ceph-mon[80009]: 6.3 scrub starts
Nov 24 09:28:40 compute-1 ceph-mon[80009]: 6.3 scrub ok
Nov 24 09:28:40 compute-1 ceph-mon[80009]: 6.d scrub starts
Nov 24 09:28:40 compute-1 ceph-mon[80009]: 6.d scrub ok
Nov 24 09:28:40 compute-1 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:28:40 compute-1 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:28:40 compute-1 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:28:40 compute-1 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.ssprex", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]: dispatch
Nov 24 09:28:40 compute-1 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.ssprex", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]': finished
Nov 24 09:28:40 compute-1 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]: dispatch
Nov 24 09:28:40 compute-1 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]': finished
Nov 24 09:28:40 compute-1 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 09:28:41 compute-1 ceph-osd[77497]: log_channel(cluster) log [DBG] : 6.e scrub starts
Nov 24 09:28:41 compute-1 ceph-osd[77497]: log_channel(cluster) log [DBG] : 6.e scrub ok
Nov 24 09:28:41 compute-1 ceph-mon[80009]: pgmap v22: 198 pgs: 198 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 1.9 KiB/s wr, 5 op/s
Nov 24 09:28:41 compute-1 ceph-mon[80009]: Creating key for client.nfs.cephfs.2.0.compute-0.ssprex
Nov 24 09:28:41 compute-1 ceph-mon[80009]: Ensuring nfs.cephfs.2 is in the ganesha grace table
Nov 24 09:28:41 compute-1 ceph-mon[80009]: 6.e scrub starts
Nov 24 09:28:41 compute-1 ceph-mon[80009]: 6.e scrub ok
Nov 24 09:28:42 compute-1 ceph-osd[77497]: log_channel(cluster) log [DBG] : 6.19 scrub starts
Nov 24 09:28:42 compute-1 ceph-osd[77497]: log_channel(cluster) log [DBG] : 6.19 scrub ok
Nov 24 09:28:42 compute-1 ceph-mon[80009]: 6.19 scrub starts
Nov 24 09:28:42 compute-1 ceph-mon[80009]: 6.19 scrub ok
Nov 24 09:28:42 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e51 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 09:28:43 compute-1 ceph-osd[77497]: log_channel(cluster) log [DBG] : 6.1a scrub starts
Nov 24 09:28:43 compute-1 ceph-osd[77497]: log_channel(cluster) log [DBG] : 6.1a scrub ok
Nov 24 09:28:43 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[85556]: 24/11/2025 09:28:43 : epoch 69242543 : compute-1 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Nov 24 09:28:43 compute-1 ceph-mon[80009]: pgmap v23: 198 pgs: 198 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 2.6 KiB/s wr, 7 op/s
Nov 24 09:28:43 compute-1 ceph-mon[80009]: 6.1a scrub starts
Nov 24 09:28:43 compute-1 ceph-mon[80009]: 6.1a scrub ok
Nov 24 09:28:43 compute-1 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]: dispatch
Nov 24 09:28:43 compute-1 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]': finished
Nov 24 09:28:44 compute-1 ceph-mon[80009]: Rados config object exists: conf-nfs.cephfs
Nov 24 09:28:44 compute-1 ceph-mon[80009]: Creating key for client.nfs.cephfs.2.0.compute-0.ssprex-rgw
Nov 24 09:28:44 compute-1 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.ssprex-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Nov 24 09:28:44 compute-1 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.ssprex-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]': finished
Nov 24 09:28:44 compute-1 ceph-mon[80009]: Bind address in nfs.cephfs.2.0.compute-0.ssprex's ganesha conf is defaulting to empty
Nov 24 09:28:44 compute-1 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 09:28:44 compute-1 ceph-mon[80009]: Deploying daemon nfs.cephfs.2.0.compute-0.ssprex on compute-0
Nov 24 09:28:45 compute-1 sudo[85610]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 09:28:45 compute-1 sudo[85610]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:28:45 compute-1 sudo[85610]: pam_unix(sudo:session): session closed for user root
Nov 24 09:28:45 compute-1 sudo[85635]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/haproxy:2.3 --timeout 895 _orch deploy --fsid 84a084c3-61a7-5de7-8207-1f88efa59a64
Nov 24 09:28:45 compute-1 sudo[85635]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
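As the sudo lines show, the mgr drives every deployment by copying a versioned cephadm binary onto the host and executing it as root; --image selects the container image for the daemon being deployed, here haproxy for the NFS ingress. The resulting daemon inventory can be checked from any node:

    ceph orch ps --daemon-type nfs
    ceph orch ps --daemon-type haproxy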
Nov 24 09:28:45 compute-1 ceph-mon[80009]: pgmap v24: 198 pgs: 198 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 1.4 KiB/s wr, 3 op/s
Nov 24 09:28:45 compute-1 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:28:45 compute-1 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:28:45 compute-1 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:28:45 compute-1 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:28:45 compute-1 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:28:45 compute-1 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:28:46 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[85556]: 24/11/2025 09:28:46 : epoch 69242543 : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 09:28:46 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[85556]: 24/11/2025 09:28:46 : epoch 69242543 : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 09:28:46 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[85556]: 24/11/2025 09:28:46 : epoch 69242543 : compute-1 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 09:28:46 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[85556]: 24/11/2025 09:28:46 : epoch 69242543 : compute-1 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 09:28:46 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[85556]: 24/11/2025 09:28:46 : epoch 69242543 : compute-1 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 09:28:46 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[85556]: 24/11/2025 09:28:46 : epoch 69242543 : compute-1 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Nov 24 09:28:46 compute-1 ceph-mon[80009]: Deploying daemon haproxy.nfs.cephfs.compute-1.rsdpvy on compute-1
Nov 24 09:28:47 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e51 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 09:28:47 compute-1 ceph-mon[80009]: pgmap v25: 198 pgs: 198 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 1.4 KiB/s wr, 3 op/s
Nov 24 09:28:48 compute-1 podman[85698]: 2025-11-24 09:28:48.098313539 +0000 UTC m=+2.366985295 container create 59beef65fd529a919f4605f2f0d2748e741e5cf4f4a4420fd7498c9ebb56efba (image=quay.io/ceph/haproxy:2.3, name=pensive_darwin)
Nov 24 09:28:48 compute-1 systemd[1]: Started libpod-conmon-59beef65fd529a919f4605f2f0d2748e741e5cf4f4a4420fd7498c9ebb56efba.scope.
Nov 24 09:28:48 compute-1 systemd[1]: Started libcrun container.
Nov 24 09:28:48 compute-1 podman[85698]: 2025-11-24 09:28:48.085263343 +0000 UTC m=+2.353935119 image pull e85424b0d443f37ddd2dd8a3bb2ef6f18dd352b987723a921b64289023af2914 quay.io/ceph/haproxy:2.3
Nov 24 09:28:48 compute-1 podman[85698]: 2025-11-24 09:28:48.20737026 +0000 UTC m=+2.476042036 container init 59beef65fd529a919f4605f2f0d2748e741e5cf4f4a4420fd7498c9ebb56efba (image=quay.io/ceph/haproxy:2.3, name=pensive_darwin)
Nov 24 09:28:48 compute-1 podman[85698]: 2025-11-24 09:28:48.214608735 +0000 UTC m=+2.483280491 container start 59beef65fd529a919f4605f2f0d2748e741e5cf4f4a4420fd7498c9ebb56efba (image=quay.io/ceph/haproxy:2.3, name=pensive_darwin)
Nov 24 09:28:48 compute-1 podman[85698]: 2025-11-24 09:28:48.217655723 +0000 UTC m=+2.486327479 container attach 59beef65fd529a919f4605f2f0d2748e741e5cf4f4a4420fd7498c9ebb56efba (image=quay.io/ceph/haproxy:2.3, name=pensive_darwin)
Nov 24 09:28:48 compute-1 pensive_darwin[85816]: 0 0
Nov 24 09:28:48 compute-1 systemd[1]: libpod-59beef65fd529a919f4605f2f0d2748e741e5cf4f4a4420fd7498c9ebb56efba.scope: Deactivated successfully.
Nov 24 09:28:48 compute-1 podman[85698]: 2025-11-24 09:28:48.220013294 +0000 UTC m=+2.488685050 container died 59beef65fd529a919f4605f2f0d2748e741e5cf4f4a4420fd7498c9ebb56efba (image=quay.io/ceph/haproxy:2.3, name=pensive_darwin)
Nov 24 09:28:48 compute-1 systemd[1]: var-lib-containers-storage-overlay-a279e187ea2889cf7c395da58f98689d2a82206cd27a240d2862838f73f6d04e-merged.mount: Deactivated successfully.
Nov 24 09:28:48 compute-1 podman[85698]: 2025-11-24 09:28:48.254417147 +0000 UTC m=+2.523088903 container remove 59beef65fd529a919f4605f2f0d2748e741e5cf4f4a4420fd7498c9ebb56efba (image=quay.io/ceph/haproxy:2.3, name=pensive_darwin)
Nov 24 09:28:48 compute-1 systemd[1]: libpod-conmon-59beef65fd529a919f4605f2f0d2748e741e5cf4f4a4420fd7498c9ebb56efba.scope: Deactivated successfully.
Nov 24 09:28:48 compute-1 systemd[1]: Reloading.
Nov 24 09:28:48 compute-1 systemd-rc-local-generator[85863]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 09:28:48 compute-1 systemd-sysv-generator[85866]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 09:28:48 compute-1 systemd[1]: Reloading.
Nov 24 09:28:48 compute-1 systemd-rc-local-generator[85903]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 09:28:48 compute-1 systemd-sysv-generator[85907]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 09:28:48 compute-1 systemd[1]: Starting Ceph haproxy.nfs.cephfs.compute-1.rsdpvy for 84a084c3-61a7-5de7-8207-1f88efa59a64...
Nov 24 09:28:49 compute-1 podman[85960]: 2025-11-24 09:28:49.045901083 +0000 UTC m=+0.041185379 container create 5e659f329edd66b319b97f09144add025da99dc20b0b6d44046c2f8d632eb914 (image=quay.io/ceph/haproxy:2.3, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-haproxy-nfs-cephfs-compute-1-rsdpvy)
Nov 24 09:28:49 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1bdf1367ea48cd2363374826f2dcdfb82f0ff9b3deac3686e1503c26387afee3/merged/var/lib/haproxy supports timestamps until 2038 (0x7fffffff)
Nov 24 09:28:49 compute-1 podman[85960]: 2025-11-24 09:28:49.103897762 +0000 UTC m=+0.099182068 container init 5e659f329edd66b319b97f09144add025da99dc20b0b6d44046c2f8d632eb914 (image=quay.io/ceph/haproxy:2.3, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-haproxy-nfs-cephfs-compute-1-rsdpvy)
Nov 24 09:28:49 compute-1 podman[85960]: 2025-11-24 09:28:49.108819268 +0000 UTC m=+0.104103544 container start 5e659f329edd66b319b97f09144add025da99dc20b0b6d44046c2f8d632eb914 (image=quay.io/ceph/haproxy:2.3, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-haproxy-nfs-cephfs-compute-1-rsdpvy)
Nov 24 09:28:49 compute-1 bash[85960]: 5e659f329edd66b319b97f09144add025da99dc20b0b6d44046c2f8d632eb914
Nov 24 09:28:49 compute-1 podman[85960]: 2025-11-24 09:28:49.024850531 +0000 UTC m=+0.020134837 image pull e85424b0d443f37ddd2dd8a3bb2ef6f18dd352b987723a921b64289023af2914 quay.io/ceph/haproxy:2.3
Nov 24 09:28:49 compute-1 systemd[1]: Started Ceph haproxy.nfs.cephfs.compute-1.rsdpvy for 84a084c3-61a7-5de7-8207-1f88efa59a64.
Nov 24 09:28:49 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-haproxy-nfs-cephfs-compute-1-rsdpvy[85975]: [NOTICE] 327/092849 (2) : New worker #1 (4) forked
Nov 24 09:28:49 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[85556]: 24/11/2025 09:28:49 : epoch 69242543 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f943c000df0 fd 37 proxy header rest len failed header rlen = % (will set dead)
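These recurring TIRPC events line up with the haproxy ingress that just started (the bare "%" is a formatting quirk in the library's message, not a value from this log). A plausible reading, not provable from this log alone: ganesha has been configured to expect PROXY-protocol headers from the ingress, so any plain TCP connection on its port, such as a health check that never sends the header, fails the header parse and is marked dead. On the haproxy side, a backend that sends the header on the data path would look roughly like this (a sketch; backend name, port, and the check option are assumptions):

    backend nfs-ganesha
        mode tcp
        balance source
        server nfs.cephfs.0.0.compute-1.vvoanr 192.168.122.101:12049 check send-proxy-v2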
Nov 24 09:28:49 compute-1 sudo[85635]: pam_unix(sudo:session): session closed for user root
Nov 24 09:28:49 compute-1 ceph-mon[80009]: pgmap v26: 198 pgs: 198 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 2.8 KiB/s rd, 1.6 KiB/s wr, 4 op/s
Nov 24 09:28:49 compute-1 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:28:49 compute-1 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:28:49 compute-1 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:28:50 compute-1 ceph-mon[80009]: Deploying daemon haproxy.nfs.cephfs.compute-0.jzeayf on compute-0
Nov 24 09:28:51 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[85556]: 24/11/2025 09:28:51 : epoch 69242543 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9430001c00 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:28:51 compute-1 ceph-mon[80009]: pgmap v27: 198 pgs: 198 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 853 B/s wr, 2 op/s
Nov 24 09:28:52 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e51 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 09:28:52 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[85556]: 24/11/2025 09:28:52 : epoch 69242543 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9418000b60 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:28:53 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[85556]: 24/11/2025 09:28:53 : epoch 69242543 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9414000b60 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:28:53 compute-1 ceph-mon[80009]: pgmap v28: 198 pgs: 198 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 4.7 KiB/s rd, 1.7 KiB/s wr, 6 op/s
Nov 24 09:28:53 compute-1 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:28:53 compute-1 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:28:53 compute-1 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:28:53 compute-1 ceph-mon[80009]: Deploying daemon haproxy.nfs.cephfs.compute-2.jwgmiu on compute-2
Nov 24 09:28:54 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[85556]: 24/11/2025 09:28:54 : epoch 69242543 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9438001ac0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:28:55 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[85556]: 24/11/2025 09:28:55 : epoch 69242543 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9430001c00 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:28:55 compute-1 ceph-mon[80009]: pgmap v29: 198 pgs: 198 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 3.3 KiB/s rd, 1.1 KiB/s wr, 4 op/s
Nov 24 09:28:56 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[85556]: 24/11/2025 09:28:56 : epoch 69242543 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f94180016a0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:28:57 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[85556]: 24/11/2025 09:28:57 : epoch 69242543 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f94140016a0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:28:57 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[85556]: 24/11/2025 09:28:57 : epoch 69242543 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f94380025c0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:28:57 compute-1 ceph-mon[80009]: pgmap v30: 198 pgs: 198 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 3.3 KiB/s rd, 1.1 KiB/s wr, 4 op/s
Nov 24 09:28:57 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e51 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 09:28:58 compute-1 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:28:58 compute-1 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:28:58 compute-1 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:28:58 compute-1 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:28:58 compute-1 ceph-mon[80009]: 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Nov 24 09:28:58 compute-1 ceph-mon[80009]: 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Nov 24 09:28:58 compute-1 ceph-mon[80009]: 192.168.122.2 is in 192.168.122.0/24 on compute-1 interface br-ex
Nov 24 09:28:58 compute-1 ceph-mon[80009]: Deploying daemon keepalived.nfs.cephfs.compute-2.gcugek on compute-2
Nov 24 09:28:58 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[85556]: 24/11/2025 09:28:58 : epoch 69242543 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9430001c00 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:28:59 compute-1 kernel: ganesha.nfsd[85601]: segfault at 50 ip 00007f94ea48332e sp 00007f94b57f9210 error 4 in libntirpc.so.5.8[7f94ea468000+2c000] likely on CPU 0 (core 0, socket 0)
Nov 24 09:28:59 compute-1 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Nov 24 09:28:59 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[85556]: 24/11/2025 09:28:59 : epoch 69242543 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9430001c00 fd 37 proxy ignored for local
Nov 24 09:28:59 compute-1 systemd[1]: Created slice Slice /system/systemd-coredump.
Nov 24 09:28:59 compute-1 systemd[1]: Started Process Core Dump (PID 85992/UID 0).
Nov 24 09:28:59 compute-1 ceph-mon[80009]: pgmap v31: 198 pgs: 198 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 3.3 KiB/s rd, 1.1 KiB/s wr, 4 op/s
Nov 24 09:29:00 compute-1 systemd-coredump[85993]: Process 85560 (ganesha.nfsd) of user 0 dumped core.
                                                   
                                                   Stack trace of thread 43:
                                                   #0  0x00007f94ea48332e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)
                                                   ELF object binary architecture: AMD x86-64
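[annotation] The kernel segfault line and the coredump frame describe the same crash site. The kernel reports ip 0x7f94ea48332e inside the mapping libntirpc.so.5.8[7f94ea468000+2c000], i.e. 0x1b32e into the executable segment, while systemd-coredump reports a file-relative offset of + 0x2232e; the 0x7000 difference would be the file offset at which the library's text segment begins (an inference from these two lines, not something the log states). With debuginfo for libntirpc installed, `coredumpctl debug 85560` would resolve the offset to a function. The arithmetic, with values copied from the log:

    ip   = 0x7f94ea48332e          # faulting instruction pointer (kernel line)
    base = 0x7f94ea468000          # start of libntirpc's exec mapping (kernel line)
    assert hex(ip - base) == "0x1b32e"           # offset within the mapped segment
    assert hex(ip - base + 0x7000) == "0x2232e"  # matches coredump's "+ 0x2232e";
                                                 # 0x7000 = presumed .text file offset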
Nov 24 09:29:00 compute-1 systemd[1]: systemd-coredump@0-85992-0.service: Deactivated successfully.
Nov 24 09:29:00 compute-1 systemd[1]: systemd-coredump@0-85992-0.service: Consumed 1.162s CPU time.
Nov 24 09:29:00 compute-1 podman[85998]: 2025-11-24 09:29:00.432230178 +0000 UTC m=+0.026200124 container died 9e9cdce3dd47e25abe641cb56f8d264ce6aac97a311540e0d883cb63fbd43f66 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 09:29:00 compute-1 systemd[1]: var-lib-containers-storage-overlay-7e103a9c85ba1f18b9876ac5fd84e521bcc04dd1a1bcd486937b84d56161b6ec-merged.mount: Deactivated successfully.
Nov 24 09:29:00 compute-1 podman[85998]: 2025-11-24 09:29:00.468952741 +0000 UTC m=+0.062922677 container remove 9e9cdce3dd47e25abe641cb56f8d264ce6aac97a311540e0d883cb63fbd43f66 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 09:29:00 compute-1 systemd[1]: ceph-84a084c3-61a7-5de7-8207-1f88efa59a64@nfs.cephfs.0.0.compute-1.vvoanr.service: Main process exited, code=exited, status=139/n/a
Nov 24 09:29:00 compute-1 systemd[1]: ceph-84a084c3-61a7-5de7-8207-1f88efa59a64@nfs.cephfs.0.0.compute-1.vvoanr.service: Failed with result 'exit-code'.
Nov 24 09:29:00 compute-1 systemd[1]: ceph-84a084c3-61a7-5de7-8207-1f88efa59a64@nfs.cephfs.0.0.compute-1.vvoanr.service: Consumed 1.363s CPU time.
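[annotation] Exit status 139 is the shell-style encoding 128 + signal number: 139 - 128 = 11 = SIGSEGV, consistent with the segfault and core dump above (the container runtime propagates the container's signal death as an exit code, hence code=exited rather than code=killed). Per the unit's restart policy, systemd schedules the restart seen below at 09:29:10 with the counter at 1.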
Nov 24 09:29:01 compute-1 ceph-mon[80009]: pgmap v32: 198 pgs: 198 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 2.8 KiB/s rd, 938 B/s wr, 3 op/s
Nov 24 09:29:02 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e51 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 09:29:03 compute-1 ceph-mon[80009]: pgmap v33: 198 pgs: 198 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 3.1 KiB/s rd, 938 B/s wr, 4 op/s
Nov 24 09:29:03 compute-1 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:29:03 compute-1 sudo[86042]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 09:29:03 compute-1 sudo[86042]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:29:03 compute-1 sudo[86042]: pam_unix(sudo:session): session closed for user root
Nov 24 09:29:03 compute-1 sudo[86067]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/keepalived:2.2.4 --timeout 895 _orch deploy --fsid 84a084c3-61a7-5de7-8207-1f88efa59a64
Nov 24 09:29:03 compute-1 sudo[86067]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:29:04 compute-1 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:29:04 compute-1 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:29:04 compute-1 ceph-mon[80009]: 192.168.122.2 is in 192.168.122.0/24 on compute-1 interface br-ex
Nov 24 09:29:04 compute-1 ceph-mon[80009]: 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Nov 24 09:29:04 compute-1 ceph-mon[80009]: 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Nov 24 09:29:04 compute-1 ceph-mon[80009]: Deploying daemon keepalived.nfs.cephfs.compute-1.vrgskq on compute-1
Nov 24 09:29:05 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-haproxy-nfs-cephfs-compute-1-rsdpvy[85975]: [WARNING] 327/092905 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
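[annotation] haproxy's Layer4 check is simply a TCP connect, so "Connection refused" means nothing was listening on this gateway's ganesha backend port at that moment, which is exactly the window between the crash at 09:29:00 and the restarted daemon binding its sockets at 09:29:11; the "2 active ... servers left" shows the other two gateways still serving. An illustration-only sketch (the backend address and port below are assumptions; the log does not print the endpoint):

    # Minimal analogue of an L4 health check: a bare TCP connect.
    import socket

    def l4_check(host: str, port: int, timeout: float = 1.0) -> str:
        try:
            socket.create_connection((host, port), timeout=timeout).close()
            return "UP"
        except ConnectionRefusedError:
            return "DOWN (Connection refused)"   # nothing listening, as here
        except OSError as exc:
            return f"DOWN ({exc})"

    print(l4_check("192.168.122.101", 12049))    # hypothetical ganesha endpoint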
Nov 24 09:29:05 compute-1 ceph-mon[80009]: pgmap v34: 198 pgs: 198 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:29:06 compute-1 podman[86133]: 2025-11-24 09:29:06.808874377 +0000 UTC m=+2.575281473 container create 5eb0ba04212e70e1a44153e1ce1ce62d75f89d0038752e93195211af5f44406f (image=quay.io/ceph/keepalived:2.2.4, name=blissful_mclean, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, architecture=x86_64, com.redhat.component=keepalived-container, description=keepalived for Ceph, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1793, io.k8s.display-name=Keepalived on RHEL 9, vendor=Red Hat, Inc., io.openshift.tags=Ceph keepalived, name=keepalived, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, summary=Provides keepalived on RHEL 9 for Ceph., version=2.2.4, vcs-type=git, io.openshift.expose-services=, build-date=2023-02-22T09:23:20, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.28.2)
Nov 24 09:29:06 compute-1 systemd[1]: Started libpod-conmon-5eb0ba04212e70e1a44153e1ce1ce62d75f89d0038752e93195211af5f44406f.scope.
Nov 24 09:29:06 compute-1 podman[86133]: 2025-11-24 09:29:06.793158784 +0000 UTC m=+2.559565910 image pull 4a3a1ff181d97c6dcfa9138ad76eb99fa2c1e840298461d5a7a56133bc05b9a2 quay.io/ceph/keepalived:2.2.4
Nov 24 09:29:06 compute-1 systemd[1]: Started libcrun container.
Nov 24 09:29:06 compute-1 podman[86133]: 2025-11-24 09:29:06.876601306 +0000 UTC m=+2.643008422 container init 5eb0ba04212e70e1a44153e1ce1ce62d75f89d0038752e93195211af5f44406f (image=quay.io/ceph/keepalived:2.2.4, name=blissful_mclean, description=keepalived for Ceph, build-date=2023-02-22T09:23:20, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.openshift.tags=Ceph keepalived, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, summary=Provides keepalived on RHEL 9 for Ceph., name=keepalived, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Keepalived on RHEL 9, architecture=x86_64, com.redhat.component=keepalived-container, release=1793, version=2.2.4, distribution-scope=public, io.buildah.version=1.28.2, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, vendor=Red Hat, Inc., vcs-type=git, io.openshift.expose-services=)
Nov 24 09:29:06 compute-1 podman[86133]: 2025-11-24 09:29:06.88882016 +0000 UTC m=+2.655227246 container start 5eb0ba04212e70e1a44153e1ce1ce62d75f89d0038752e93195211af5f44406f (image=quay.io/ceph/keepalived:2.2.4, name=blissful_mclean, io.openshift.tags=Ceph keepalived, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, summary=Provides keepalived on RHEL 9 for Ceph., vcs-type=git, io.openshift.expose-services=, io.k8s.display-name=Keepalived on RHEL 9, vendor=Red Hat, Inc., version=2.2.4, com.redhat.component=keepalived-container, name=keepalived, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.28.2, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1793, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, architecture=x86_64, description=keepalived for Ceph, build-date=2023-02-22T09:23:20, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793)
Nov 24 09:29:06 compute-1 podman[86133]: 2025-11-24 09:29:06.89195526 +0000 UTC m=+2.658362386 container attach 5eb0ba04212e70e1a44153e1ce1ce62d75f89d0038752e93195211af5f44406f (image=quay.io/ceph/keepalived:2.2.4, name=blissful_mclean, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, architecture=x86_64, com.redhat.component=keepalived-container, description=keepalived for Ceph, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Keepalived on RHEL 9, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, name=keepalived, build-date=2023-02-22T09:23:20, io.buildah.version=1.28.2, release=1793, summary=Provides keepalived on RHEL 9 for Ceph., version=2.2.4, vcs-type=git, io.openshift.expose-services=, io.openshift.tags=Ceph keepalived, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Nov 24 09:29:06 compute-1 blissful_mclean[86228]: 0 0
Nov 24 09:29:06 compute-1 systemd[1]: libpod-5eb0ba04212e70e1a44153e1ce1ce62d75f89d0038752e93195211af5f44406f.scope: Deactivated successfully.
Nov 24 09:29:06 compute-1 conmon[86228]: conmon 5eb0ba04212e70e1a441 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-5eb0ba04212e70e1a44153e1ce1ce62d75f89d0038752e93195211af5f44406f.scope/container/memory.events
Nov 24 09:29:06 compute-1 podman[86133]: 2025-11-24 09:29:06.895593104 +0000 UTC m=+2.662000200 container died 5eb0ba04212e70e1a44153e1ce1ce62d75f89d0038752e93195211af5f44406f (image=quay.io/ceph/keepalived:2.2.4, name=blissful_mclean, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-type=git, description=keepalived for Ceph, distribution-scope=public, io.buildah.version=1.28.2, io.openshift.tags=Ceph keepalived, version=2.2.4, io.openshift.expose-services=, name=keepalived, build-date=2023-02-22T09:23:20, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, summary=Provides keepalived on RHEL 9 for Ceph., architecture=x86_64, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.k8s.display-name=Keepalived on RHEL 9, vendor=Red Hat, Inc., com.redhat.component=keepalived-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1793)
Nov 24 09:29:06 compute-1 systemd[1]: var-lib-containers-storage-overlay-98179c9c59caec17d6eae17d5473404a690072617931b347a9b2c927e84b6e18-merged.mount: Deactivated successfully.
Nov 24 09:29:06 compute-1 podman[86133]: 2025-11-24 09:29:06.927684308 +0000 UTC m=+2.694091404 container remove 5eb0ba04212e70e1a44153e1ce1ce62d75f89d0038752e93195211af5f44406f (image=quay.io/ceph/keepalived:2.2.4, name=blissful_mclean, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, version=2.2.4, io.k8s.display-name=Keepalived on RHEL 9, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vendor=Red Hat, Inc., distribution-scope=public, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=Ceph keepalived, description=keepalived for Ceph, summary=Provides keepalived on RHEL 9 for Ceph., name=keepalived, com.redhat.component=keepalived-container, vcs-type=git, release=1793, build-date=2023-02-22T09:23:20, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.28.2, io.openshift.expose-services=)
Nov 24 09:29:06 compute-1 systemd[1]: libpod-conmon-5eb0ba04212e70e1a44153e1ce1ce62d75f89d0038752e93195211af5f44406f.scope: Deactivated successfully.
Nov 24 09:29:06 compute-1 systemd[1]: Reloading.
Nov 24 09:29:07 compute-1 systemd-rc-local-generator[86272]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 09:29:07 compute-1 systemd-sysv-generator[86277]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 09:29:07 compute-1 systemd[1]: Reloading.
Nov 24 09:29:07 compute-1 systemd-rc-local-generator[86314]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 09:29:07 compute-1 systemd-sysv-generator[86318]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 09:29:07 compute-1 systemd[1]: Starting Ceph keepalived.nfs.cephfs.compute-1.vrgskq for 84a084c3-61a7-5de7-8207-1f88efa59a64...
Nov 24 09:29:07 compute-1 podman[86371]: 2025-11-24 09:29:07.737192466 +0000 UTC m=+0.044474983 container create b150f4574d15a215dc003733c271f0cef75e4de7b269181ad25614a88f483866 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-keepalived-nfs-cephfs-compute-1-vrgskq, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.28.2, io.k8s.display-name=Keepalived on RHEL 9, release=1793, vcs-type=git, io.openshift.tags=Ceph keepalived, name=keepalived, build-date=2023-02-22T09:23:20, architecture=x86_64, com.redhat.component=keepalived-container, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, summary=Provides keepalived on RHEL 9 for Ceph., distribution-scope=public, io.openshift.expose-services=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, description=keepalived for Ceph, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, version=2.2.4)
Nov 24 09:29:07 compute-1 ceph-mon[80009]: pgmap v35: 198 pgs: 198 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:29:07 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/42a6ef1b45ae9962e33169771080c20b4cc6b2ef58b42546b8e18fc48898d55a/merged/etc/keepalived/keepalived.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 09:29:07 compute-1 podman[86371]: 2025-11-24 09:29:07.785769424 +0000 UTC m=+0.093051961 container init b150f4574d15a215dc003733c271f0cef75e4de7b269181ad25614a88f483866 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-keepalived-nfs-cephfs-compute-1-vrgskq, io.buildah.version=1.28.2, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, vcs-type=git, io.openshift.tags=Ceph keepalived, vendor=Red Hat, Inc., com.redhat.component=keepalived-container, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, version=2.2.4, description=keepalived for Ceph, summary=Provides keepalived on RHEL 9 for Ceph., name=keepalived, io.k8s.display-name=Keepalived on RHEL 9, io.openshift.expose-services=, release=1793, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2023-02-22T09:23:20)
Nov 24 09:29:07 compute-1 podman[86371]: 2025-11-24 09:29:07.792501757 +0000 UTC m=+0.099784274 container start b150f4574d15a215dc003733c271f0cef75e4de7b269181ad25614a88f483866 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-keepalived-nfs-cephfs-compute-1-vrgskq, io.buildah.version=1.28.2, io.openshift.expose-services=, vendor=Red Hat, Inc., name=keepalived, build-date=2023-02-22T09:23:20, distribution-scope=public, io.k8s.display-name=Keepalived on RHEL 9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, version=2.2.4, description=keepalived for Ceph, vcs-type=git, io.openshift.tags=Ceph keepalived, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, com.redhat.component=keepalived-container, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, summary=Provides keepalived on RHEL 9 for Ceph., release=1793)
Nov 24 09:29:07 compute-1 bash[86371]: b150f4574d15a215dc003733c271f0cef75e4de7b269181ad25614a88f483866
Nov 24 09:29:07 compute-1 podman[86371]: 2025-11-24 09:29:07.718187659 +0000 UTC m=+0.025470216 image pull 4a3a1ff181d97c6dcfa9138ad76eb99fa2c1e840298461d5a7a56133bc05b9a2 quay.io/ceph/keepalived:2.2.4
Nov 24 09:29:07 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e51 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 09:29:07 compute-1 systemd[1]: Started Ceph keepalived.nfs.cephfs.compute-1.vrgskq for 84a084c3-61a7-5de7-8207-1f88efa59a64.
Nov 24 09:29:07 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-keepalived-nfs-cephfs-compute-1-vrgskq[86386]: Mon Nov 24 09:29:07 2025: Starting Keepalived v2.2.4 (08/21,2021)
Nov 24 09:29:07 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-keepalived-nfs-cephfs-compute-1-vrgskq[86386]: Mon Nov 24 09:29:07 2025: Running on Linux 5.14.0-639.el9.x86_64 #1 SMP PREEMPT_DYNAMIC Sat Nov 15 10:30:41 UTC 2025 (built for Linux 5.14.0)
Nov 24 09:29:07 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-keepalived-nfs-cephfs-compute-1-vrgskq[86386]: Mon Nov 24 09:29:07 2025: Command line: '/usr/sbin/keepalived' '-n' '-l' '-f' '/etc/keepalived/keepalived.conf'
Nov 24 09:29:07 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-keepalived-nfs-cephfs-compute-1-vrgskq[86386]: Mon Nov 24 09:29:07 2025: Configuration file /etc/keepalived/keepalived.conf
Nov 24 09:29:07 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-keepalived-nfs-cephfs-compute-1-vrgskq[86386]: Mon Nov 24 09:29:07 2025: NOTICE: setting config option max_auto_priority should result in better keepalived performance
Nov 24 09:29:07 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-keepalived-nfs-cephfs-compute-1-vrgskq[86386]: Mon Nov 24 09:29:07 2025: Starting VRRP child process, pid=4
Nov 24 09:29:07 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-keepalived-nfs-cephfs-compute-1-vrgskq[86386]: Mon Nov 24 09:29:07 2025: Startup complete
Nov 24 09:29:07 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-keepalived-nfs-cephfs-compute-1-vrgskq[86386]: Mon Nov 24 09:29:07 2025: (VI_0) Entering BACKUP STATE (init)
Nov 24 09:29:07 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-keepalived-nfs-cephfs-compute-1-vrgskq[86386]: Mon Nov 24 09:29:07 2025: VRRP_Script(check_backend) succeeded
Nov 24 09:29:07 compute-1 sudo[86067]: pam_unix(sudo:session): session closed for user root
Nov 24 09:29:08 compute-1 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:29:08 compute-1 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:29:08 compute-1 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:29:08 compute-1 ceph-mon[80009]: 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Nov 24 09:29:08 compute-1 ceph-mon[80009]: 192.168.122.2 is in 192.168.122.0/24 on compute-1 interface br-ex
Nov 24 09:29:08 compute-1 ceph-mon[80009]: 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Nov 24 09:29:08 compute-1 ceph-mon[80009]: Deploying daemon keepalived.nfs.cephfs.compute-0.mglptr on compute-0
Nov 24 09:29:09 compute-1 ceph-mon[80009]: pgmap v36: 198 pgs: 198 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:29:10 compute-1 systemd[1]: ceph-84a084c3-61a7-5de7-8207-1f88efa59a64@nfs.cephfs.0.0.compute-1.vvoanr.service: Scheduled restart job, restart counter is at 1.
Nov 24 09:29:10 compute-1 systemd[1]: Stopped Ceph nfs.cephfs.0.0.compute-1.vvoanr for 84a084c3-61a7-5de7-8207-1f88efa59a64.
Nov 24 09:29:10 compute-1 systemd[1]: ceph-84a084c3-61a7-5de7-8207-1f88efa59a64@nfs.cephfs.0.0.compute-1.vvoanr.service: Consumed 1.363s CPU time.
Nov 24 09:29:10 compute-1 systemd[1]: Starting Ceph nfs.cephfs.0.0.compute-1.vvoanr for 84a084c3-61a7-5de7-8207-1f88efa59a64...
Nov 24 09:29:10 compute-1 podman[86442]: 2025-11-24 09:29:10.939858439 +0000 UTC m=+0.042883331 container create f7b0c338b36b8bdf518e2bc42241679b81bdb5d1de06a8d4b736922ad905c10c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 09:29:10 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a5a0ea98e4966eef6e9359d2b74c6f9b539ca052e7e8e3709d750180e14075b4/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Nov 24 09:29:10 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a5a0ea98e4966eef6e9359d2b74c6f9b539ca052e7e8e3709d750180e14075b4/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 09:29:10 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a5a0ea98e4966eef6e9359d2b74c6f9b539ca052e7e8e3709d750180e14075b4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 09:29:10 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a5a0ea98e4966eef6e9359d2b74c6f9b539ca052e7e8e3709d750180e14075b4/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.0.0.compute-1.vvoanr-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 09:29:10 compute-1 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]: dispatch
Nov 24 09:29:10 compute-1 podman[86442]: 2025-11-24 09:29:10.996142425 +0000 UTC m=+0.099167317 container init f7b0c338b36b8bdf518e2bc42241679b81bdb5d1de06a8d4b736922ad905c10c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr, CEPH_REF=squid, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 24 09:29:11 compute-1 podman[86442]: 2025-11-24 09:29:11.000512857 +0000 UTC m=+0.103537749 container start f7b0c338b36b8bdf518e2bc42241679b81bdb5d1de06a8d4b736922ad905c10c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 24 09:29:11 compute-1 bash[86442]: f7b0c338b36b8bdf518e2bc42241679b81bdb5d1de06a8d4b736922ad905c10c
Nov 24 09:29:11 compute-1 podman[86442]: 2025-11-24 09:29:10.917892506 +0000 UTC m=+0.020917418 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 09:29:11 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:29:11 : epoch 69242567 : compute-1 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Nov 24 09:29:11 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:29:11 : epoch 69242567 : compute-1 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Nov 24 09:29:11 compute-1 systemd[1]: Started Ceph nfs.cephfs.0.0.compute-1.vvoanr for 84a084c3-61a7-5de7-8207-1f88efa59a64.
Nov 24 09:29:11 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e52 e52: 3 total, 3 up, 3 in
Nov 24 09:29:11 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:29:11 : epoch 69242567 : compute-1 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Nov 24 09:29:11 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:29:11 : epoch 69242567 : compute-1 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Nov 24 09:29:11 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:29:11 : epoch 69242567 : compute-1 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Nov 24 09:29:11 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:29:11 : epoch 69242567 : compute-1 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Nov 24 09:29:11 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:29:11 : epoch 69242567 : compute-1 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Nov 24 09:29:11 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:29:11 : epoch 69242567 : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
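[annotation] The restarted ganesha enters a 90-second grace window, during which only state-reclaim operations (NFSv4 OPEN/LOCK reclaims) are honored; clients of this gateway can expect new opens to stall until roughly 09:30:41. The duration corresponds to ganesha's Grace_Period setting, for which 90 seconds is the default.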
Nov 24 09:29:11 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-keepalived-nfs-cephfs-compute-1-vrgskq[86386]: Mon Nov 24 09:29:11 2025: (VI_0) Entering MASTER STATE
Nov 24 09:29:11 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-keepalived-nfs-cephfs-compute-1-vrgskq[86386]: Mon Nov 24 09:29:11 2025: (VI_0) Master received advert from 192.168.122.102 with same priority 90 but higher IP address than ours
Nov 24 09:29:11 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-keepalived-nfs-cephfs-compute-1-vrgskq[86386]: Mon Nov 24 09:29:11 2025: (VI_0) Entering BACKUP STATE
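[annotation] These three keepalived transitions show the standard VRRP (RFC 5798) election: VI_0 briefly claims MASTER, then hears an advert from 192.168.122.102 with the same priority (90) and, because the tie-break goes to the higher primary IP address, yields back to BACKUP; the VIP 192.168.122.2 therefore lands on the peer, and this host will promote only if the peer's adverts stop or its check_backend script fails. A sketch of the tie-break, with the local address assumed (the log does not print it):

    # RFC 5798 master election tie-break, with values from the advert above.
    from ipaddress import ip_address

    def preferred(a: dict, b: dict) -> dict:
        # higher priority wins; on a tie, the higher primary IP wins
        return max((a, b), key=lambda r: (r["priority"], ip_address(r["ip"])))

    local = {"ip": "192.168.122.101", "priority": 90}   # assumed local address
    peer  = {"ip": "192.168.122.102", "priority": 90}   # from the log line above
    assert preferred(local, peer) is peer               # hence BACKUP here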
Nov 24 09:29:12 compute-1 ceph-mon[80009]: pgmap v37: 198 pgs: 198 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:29:12 compute-1 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]': finished
Nov 24 09:29:12 compute-1 ceph-mon[80009]: osdmap e52: 3 total, 3 up, 3 in
Nov 24 09:29:12 compute-1 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]: dispatch
Nov 24 09:29:12 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e53 e53: 3 total, 3 up, 3 in
Nov 24 09:29:12 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e53 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 09:29:13 compute-1 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]': finished
Nov 24 09:29:13 compute-1 ceph-mon[80009]: osdmap e53: 3 total, 3 up, 3 in
Nov 24 09:29:13 compute-1 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]: dispatch
Nov 24 09:29:13 compute-1 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:29:13 compute-1 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:29:13 compute-1 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:29:13 compute-1 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:29:13 compute-1 ceph-mon[80009]: Deploying daemon alertmanager.compute-0 on compute-0
Nov 24 09:29:13 compute-1 ceph-mon[80009]: pgmap v40: 198 pgs: 198 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 895 B/s rd, 127 B/s wr, 1 op/s
Nov 24 09:29:13 compute-1 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 24 09:29:13 compute-1 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 24 09:29:13 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e54 e54: 3 total, 3 up, 3 in
Nov 24 09:29:14 compute-1 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]': finished
Nov 24 09:29:14 compute-1 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]': finished
Nov 24 09:29:14 compute-1 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]': finished
Nov 24 09:29:14 compute-1 ceph-mon[80009]: osdmap e54: 3 total, 3 up, 3 in
Nov 24 09:29:14 compute-1 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]: dispatch
Nov 24 09:29:14 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e55 e55: 3 total, 3 up, 3 in
Nov 24 09:29:15 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e56 e56: 3 total, 3 up, 3 in
Nov 24 09:29:15 compute-1 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]': finished
Nov 24 09:29:15 compute-1 ceph-mon[80009]: osdmap e55: 3 total, 3 up, 3 in
Nov 24 09:29:15 compute-1 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num", "val": "32"}]: dispatch
Nov 24 09:29:15 compute-1 ceph-mon[80009]: pgmap v43: 260 pgs: 62 unknown, 198 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 255 B/s wr, 2 op/s
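[annotation] pgmap v43 jumps from 198 to 260 placement groups (260 - 198 = 62), matching the "62 unknown": these are PGs newly created by the pg_num increases dispatched above, reported as unknown until peering completes. The osd.1 lines that follow ("start_peering_interval", "transitioning to Primary", then "Activating complete") are that peering running to completion.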
Nov 24 09:29:15 compute-1 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 24 09:29:15 compute-1 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 24 09:29:15 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 56 pg[10.0( v 42'48 (0'0,42'48] local-lis/les=41/42 n=8 ec=41/41 lis/c=41/41 les/c/f=42/42/0 sis=56 pruub=9.097949028s) [1] r=0 lpr=56 pi=[41,56)/1 crt=42'48 lcod 42'47 mlcod 42'47 active pruub 170.755828857s@ mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:29:15 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 56 pg[10.0( v 42'48 lc 0'0 (0'0,42'48] local-lis/les=41/42 n=0 ec=41/41 lis/c=41/41 les/c/f=42/42/0 sis=56 pruub=9.097949028s) [1] r=0 lpr=56 pi=[41,56)/1 crt=42'48 lcod 42'47 mlcod 0'0 unknown pruub 170.755828857s@ mbc={}] state<Start>: transitioning to Primary
Nov 24 09:29:16 compute-1 ceph-mon[80009]: 9.14 scrub starts
Nov 24 09:29:16 compute-1 ceph-mon[80009]: 9.14 scrub ok
Nov 24 09:29:16 compute-1 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num", "val": "32"}]': finished
Nov 24 09:29:16 compute-1 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]': finished
Nov 24 09:29:16 compute-1 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]': finished
Nov 24 09:29:16 compute-1 ceph-mon[80009]: osdmap e56: 3 total, 3 up, 3 in
Nov 24 09:29:16 compute-1 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:29:16 compute-1 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:29:16 compute-1 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:29:16 compute-1 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:29:16 compute-1 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:29:16 compute-1 ceph-mon[80009]: Regenerating cephadm self-signed grafana TLS certificates
Nov 24 09:29:16 compute-1 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:29:16 compute-1 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:29:16 compute-1 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch
Nov 24 09:29:16 compute-1 ceph-mon[80009]: from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch
Nov 24 09:29:16 compute-1 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:29:16 compute-1 ceph-mon[80009]: Deploying daemon grafana.compute-0 on compute-0
Nov 24 09:29:16 compute-1 ceph-mon[80009]: 9.17 scrub starts
Nov 24 09:29:16 compute-1 ceph-mon[80009]: 9.17 scrub ok
Nov 24 09:29:16 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e57 e57: 3 total, 3 up, 3 in
Nov 24 09:29:16 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 57 pg[10.7( v 42'48 lc 0'0 (0'0,42'48] local-lis/les=41/42 n=1 ec=56/41 lis/c=41/41 les/c/f=42/42/0 sis=56) [1] r=0 lpr=56 pi=[41,56)/1 crt=42'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:29:16 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 57 pg[10.1b( v 42'48 lc 0'0 (0'0,42'48] local-lis/les=41/42 n=0 ec=56/41 lis/c=41/41 les/c/f=42/42/0 sis=56) [1] r=0 lpr=56 pi=[41,56)/1 crt=42'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:29:16 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 57 pg[10.10( v 42'48 lc 0'0 (0'0,42'48] local-lis/les=41/42 n=0 ec=56/41 lis/c=41/41 les/c/f=42/42/0 sis=56) [1] r=0 lpr=56 pi=[41,56)/1 crt=42'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:29:16 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 57 pg[10.12( v 42'48 lc 0'0 (0'0,42'48] local-lis/les=41/42 n=0 ec=56/41 lis/c=41/41 les/c/f=42/42/0 sis=56) [1] r=0 lpr=56 pi=[41,56)/1 crt=42'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:29:16 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 57 pg[10.1f( v 42'48 lc 0'0 (0'0,42'48] local-lis/les=41/42 n=0 ec=56/41 lis/c=41/41 les/c/f=42/42/0 sis=56) [1] r=0 lpr=56 pi=[41,56)/1 crt=42'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:29:16 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 57 pg[10.11( v 42'48 lc 0'0 (0'0,42'48] local-lis/les=41/42 n=0 ec=56/41 lis/c=41/41 les/c/f=42/42/0 sis=56) [1] r=0 lpr=56 pi=[41,56)/1 crt=42'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:29:16 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 57 pg[10.1e( v 42'48 lc 0'0 (0'0,42'48] local-lis/les=41/42 n=0 ec=56/41 lis/c=41/41 les/c/f=42/42/0 sis=56) [1] r=0 lpr=56 pi=[41,56)/1 crt=42'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:29:16 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 57 pg[10.1d( v 42'48 lc 0'0 (0'0,42'48] local-lis/les=41/42 n=0 ec=56/41 lis/c=41/41 les/c/f=42/42/0 sis=56) [1] r=0 lpr=56 pi=[41,56)/1 crt=42'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:29:16 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 57 pg[10.1c( v 42'48 lc 0'0 (0'0,42'48] local-lis/les=41/42 n=0 ec=56/41 lis/c=41/41 les/c/f=42/42/0 sis=56) [1] r=0 lpr=56 pi=[41,56)/1 crt=42'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:29:16 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 57 pg[10.1a( v 42'48 lc 0'0 (0'0,42'48] local-lis/les=41/42 n=0 ec=56/41 lis/c=41/41 les/c/f=42/42/0 sis=56) [1] r=0 lpr=56 pi=[41,56)/1 crt=42'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:29:16 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 57 pg[10.19( v 42'48 lc 0'0 (0'0,42'48] local-lis/les=41/42 n=0 ec=56/41 lis/c=41/41 les/c/f=42/42/0 sis=56) [1] r=0 lpr=56 pi=[41,56)/1 crt=42'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:29:16 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 57 pg[10.18( v 42'48 lc 0'0 (0'0,42'48] local-lis/les=41/42 n=0 ec=56/41 lis/c=41/41 les/c/f=42/42/0 sis=56) [1] r=0 lpr=56 pi=[41,56)/1 crt=42'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:29:16 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 57 pg[10.6( v 42'48 lc 0'0 (0'0,42'48] local-lis/les=41/42 n=1 ec=56/41 lis/c=41/41 les/c/f=42/42/0 sis=56) [1] r=0 lpr=56 pi=[41,56)/1 crt=42'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:29:16 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 57 pg[10.5( v 42'48 lc 0'0 (0'0,42'48] local-lis/les=41/42 n=1 ec=56/41 lis/c=41/41 les/c/f=42/42/0 sis=56) [1] r=0 lpr=56 pi=[41,56)/1 crt=42'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:29:16 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 57 pg[10.4( v 42'48 lc 0'0 (0'0,42'48] local-lis/les=41/42 n=1 ec=56/41 lis/c=41/41 les/c/f=42/42/0 sis=56) [1] r=0 lpr=56 pi=[41,56)/1 crt=42'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:29:16 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 57 pg[10.8( v 42'48 lc 0'0 (0'0,42'48] local-lis/les=41/42 n=1 ec=56/41 lis/c=41/41 les/c/f=42/42/0 sis=56) [1] r=0 lpr=56 pi=[41,56)/1 crt=42'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:29:16 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 57 pg[10.b( v 42'48 lc 0'0 (0'0,42'48] local-lis/les=41/42 n=0 ec=56/41 lis/c=41/41 les/c/f=42/42/0 sis=56) [1] r=0 lpr=56 pi=[41,56)/1 crt=42'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:29:16 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 57 pg[10.d( v 42'48 lc 0'0 (0'0,42'48] local-lis/les=41/42 n=0 ec=56/41 lis/c=41/41 les/c/f=42/42/0 sis=56) [1] r=0 lpr=56 pi=[41,56)/1 crt=42'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:29:16 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 57 pg[10.3( v 42'48 lc 0'0 (0'0,42'48] local-lis/les=41/42 n=1 ec=56/41 lis/c=41/41 les/c/f=42/42/0 sis=56) [1] r=0 lpr=56 pi=[41,56)/1 crt=42'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:29:16 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 57 pg[10.9( v 42'48 lc 0'0 (0'0,42'48] local-lis/les=41/42 n=0 ec=56/41 lis/c=41/41 les/c/f=42/42/0 sis=56) [1] r=0 lpr=56 pi=[41,56)/1 crt=42'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:29:16 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 57 pg[10.a( v 42'48 lc 0'0 (0'0,42'48] local-lis/les=41/42 n=0 ec=56/41 lis/c=41/41 les/c/f=42/42/0 sis=56) [1] r=0 lpr=56 pi=[41,56)/1 crt=42'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:29:16 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 57 pg[10.c( v 42'48 lc 0'0 (0'0,42'48] local-lis/les=41/42 n=0 ec=56/41 lis/c=41/41 les/c/f=42/42/0 sis=56) [1] r=0 lpr=56 pi=[41,56)/1 crt=42'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:29:16 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 57 pg[10.f( v 42'48 lc 0'0 (0'0,42'48] local-lis/les=41/42 n=0 ec=56/41 lis/c=41/41 les/c/f=42/42/0 sis=56) [1] r=0 lpr=56 pi=[41,56)/1 crt=42'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:29:16 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 57 pg[10.e( v 42'48 lc 0'0 (0'0,42'48] local-lis/les=41/42 n=0 ec=56/41 lis/c=41/41 les/c/f=42/42/0 sis=56) [1] r=0 lpr=56 pi=[41,56)/1 crt=42'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:29:16 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 57 pg[10.1( v 42'48 (0'0,42'48] local-lis/les=41/42 n=1 ec=56/41 lis/c=41/41 les/c/f=42/42/0 sis=56) [1] r=0 lpr=56 pi=[41,56)/1 crt=42'48 lcod 0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:29:16 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 57 pg[10.2( v 42'48 lc 0'0 (0'0,42'48] local-lis/les=41/42 n=1 ec=56/41 lis/c=41/41 les/c/f=42/42/0 sis=56) [1] r=0 lpr=56 pi=[41,56)/1 crt=42'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:29:16 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 57 pg[10.13( v 42'48 lc 0'0 (0'0,42'48] local-lis/les=41/42 n=0 ec=56/41 lis/c=41/41 les/c/f=42/42/0 sis=56) [1] r=0 lpr=56 pi=[41,56)/1 crt=42'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:29:16 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 57 pg[10.14( v 42'48 lc 0'0 (0'0,42'48] local-lis/les=41/42 n=0 ec=56/41 lis/c=41/41 les/c/f=42/42/0 sis=56) [1] r=0 lpr=56 pi=[41,56)/1 crt=42'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:29:16 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 57 pg[10.15( v 42'48 lc 0'0 (0'0,42'48] local-lis/les=41/42 n=0 ec=56/41 lis/c=41/41 les/c/f=42/42/0 sis=56) [1] r=0 lpr=56 pi=[41,56)/1 crt=42'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:29:16 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 57 pg[10.16( v 42'48 lc 0'0 (0'0,42'48] local-lis/les=41/42 n=0 ec=56/41 lis/c=41/41 les/c/f=42/42/0 sis=56) [1] r=0 lpr=56 pi=[41,56)/1 crt=42'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:29:16 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 57 pg[10.17( v 42'48 lc 0'0 (0'0,42'48] local-lis/les=41/42 n=0 ec=56/41 lis/c=41/41 les/c/f=42/42/0 sis=56) [1] r=0 lpr=56 pi=[41,56)/1 crt=42'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:29:16 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 57 pg[10.1b( v 42'48 (0'0,42'48] local-lis/les=56/57 n=0 ec=56/41 lis/c=41/41 les/c/f=42/42/0 sis=56) [1] r=0 lpr=56 pi=[41,56)/1 crt=42'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:29:16 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 57 pg[10.1f( v 42'48 (0'0,42'48] local-lis/les=56/57 n=0 ec=56/41 lis/c=41/41 les/c/f=42/42/0 sis=56) [1] r=0 lpr=56 pi=[41,56)/1 crt=42'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:29:16 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 57 pg[10.1e( v 42'48 (0'0,42'48] local-lis/les=56/57 n=0 ec=56/41 lis/c=41/41 les/c/f=42/42/0 sis=56) [1] r=0 lpr=56 pi=[41,56)/1 crt=42'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:29:16 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 57 pg[10.10( v 42'48 (0'0,42'48] local-lis/les=56/57 n=0 ec=56/41 lis/c=41/41 les/c/f=42/42/0 sis=56) [1] r=0 lpr=56 pi=[41,56)/1 crt=42'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:29:16 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 57 pg[10.1c( v 42'48 (0'0,42'48] local-lis/les=56/57 n=0 ec=56/41 lis/c=41/41 les/c/f=42/42/0 sis=56) [1] r=0 lpr=56 pi=[41,56)/1 crt=42'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:29:16 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 57 pg[10.1a( v 42'48 (0'0,42'48] local-lis/les=56/57 n=0 ec=56/41 lis/c=41/41 les/c/f=42/42/0 sis=56) [1] r=0 lpr=56 pi=[41,56)/1 crt=42'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:29:16 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 57 pg[10.19( v 42'48 (0'0,42'48] local-lis/les=56/57 n=0 ec=56/41 lis/c=41/41 les/c/f=42/42/0 sis=56) [1] r=0 lpr=56 pi=[41,56)/1 crt=42'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:29:16 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 57 pg[10.12( v 42'48 (0'0,42'48] local-lis/les=56/57 n=0 ec=56/41 lis/c=41/41 les/c/f=42/42/0 sis=56) [1] r=0 lpr=56 pi=[41,56)/1 crt=42'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:29:16 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 57 pg[10.1d( v 42'48 (0'0,42'48] local-lis/les=56/57 n=0 ec=56/41 lis/c=41/41 les/c/f=42/42/0 sis=56) [1] r=0 lpr=56 pi=[41,56)/1 crt=42'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:29:16 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 57 pg[10.11( v 42'48 (0'0,42'48] local-lis/les=56/57 n=0 ec=56/41 lis/c=41/41 les/c/f=42/42/0 sis=56) [1] r=0 lpr=56 pi=[41,56)/1 crt=42'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:29:16 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 57 pg[10.6( v 42'48 (0'0,42'48] local-lis/les=56/57 n=1 ec=56/41 lis/c=41/41 les/c/f=42/42/0 sis=56) [1] r=0 lpr=56 pi=[41,56)/1 crt=42'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:29:16 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 57 pg[10.5( v 42'48 (0'0,42'48] local-lis/les=56/57 n=1 ec=56/41 lis/c=41/41 les/c/f=42/42/0 sis=56) [1] r=0 lpr=56 pi=[41,56)/1 crt=42'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:29:16 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 57 pg[10.4( v 42'48 (0'0,42'48] local-lis/les=56/57 n=1 ec=56/41 lis/c=41/41 les/c/f=42/42/0 sis=56) [1] r=0 lpr=56 pi=[41,56)/1 crt=42'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:29:16 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 57 pg[10.18( v 42'48 (0'0,42'48] local-lis/les=56/57 n=0 ec=56/41 lis/c=41/41 les/c/f=42/42/0 sis=56) [1] r=0 lpr=56 pi=[41,56)/1 crt=42'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:29:16 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 57 pg[10.8( v 42'48 (0'0,42'48] local-lis/les=56/57 n=1 ec=56/41 lis/c=41/41 les/c/f=42/42/0 sis=56) [1] r=0 lpr=56 pi=[41,56)/1 crt=42'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:29:16 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 57 pg[10.3( v 42'48 (0'0,42'48] local-lis/les=56/57 n=1 ec=56/41 lis/c=41/41 les/c/f=42/42/0 sis=56) [1] r=0 lpr=56 pi=[41,56)/1 crt=42'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:29:16 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 57 pg[10.7( v 42'48 (0'0,42'48] local-lis/les=56/57 n=1 ec=56/41 lis/c=41/41 les/c/f=42/42/0 sis=56) [1] r=0 lpr=56 pi=[41,56)/1 crt=42'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:29:16 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 57 pg[10.a( v 42'48 (0'0,42'48] local-lis/les=56/57 n=0 ec=56/41 lis/c=41/41 les/c/f=42/42/0 sis=56) [1] r=0 lpr=56 pi=[41,56)/1 crt=42'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:29:16 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 57 pg[10.d( v 42'48 (0'0,42'48] local-lis/les=56/57 n=0 ec=56/41 lis/c=41/41 les/c/f=42/42/0 sis=56) [1] r=0 lpr=56 pi=[41,56)/1 crt=42'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:29:16 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 57 pg[10.0( v 42'48 (0'0,42'48] local-lis/les=56/57 n=0 ec=41/41 lis/c=41/41 les/c/f=42/42/0 sis=56) [1] r=0 lpr=56 pi=[41,56)/1 crt=42'48 lcod 42'47 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:29:16 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 57 pg[10.e( v 42'48 (0'0,42'48] local-lis/les=56/57 n=0 ec=56/41 lis/c=41/41 les/c/f=42/42/0 sis=56) [1] r=0 lpr=56 pi=[41,56)/1 crt=42'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:29:16 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 57 pg[10.1( v 42'48 (0'0,42'48] local-lis/les=56/57 n=1 ec=56/41 lis/c=41/41 les/c/f=42/42/0 sis=56) [1] r=0 lpr=56 pi=[41,56)/1 crt=42'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:29:16 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 57 pg[10.c( v 42'48 (0'0,42'48] local-lis/les=56/57 n=0 ec=56/41 lis/c=41/41 les/c/f=42/42/0 sis=56) [1] r=0 lpr=56 pi=[41,56)/1 crt=42'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:29:16 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 57 pg[10.9( v 42'48 (0'0,42'48] local-lis/les=56/57 n=0 ec=56/41 lis/c=41/41 les/c/f=42/42/0 sis=56) [1] r=0 lpr=56 pi=[41,56)/1 crt=42'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:29:16 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 57 pg[10.b( v 42'48 (0'0,42'48] local-lis/les=56/57 n=0 ec=56/41 lis/c=41/41 les/c/f=42/42/0 sis=56) [1] r=0 lpr=56 pi=[41,56)/1 crt=42'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:29:16 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 57 pg[10.f( v 42'48 (0'0,42'48] local-lis/les=56/57 n=0 ec=56/41 lis/c=41/41 les/c/f=42/42/0 sis=56) [1] r=0 lpr=56 pi=[41,56)/1 crt=42'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:29:16 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 57 pg[10.16( v 42'48 (0'0,42'48] local-lis/les=56/57 n=0 ec=56/41 lis/c=41/41 les/c/f=42/42/0 sis=56) [1] r=0 lpr=56 pi=[41,56)/1 crt=42'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:29:16 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 57 pg[10.13( v 42'48 (0'0,42'48] local-lis/les=56/57 n=0 ec=56/41 lis/c=41/41 les/c/f=42/42/0 sis=56) [1] r=0 lpr=56 pi=[41,56)/1 crt=42'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:29:16 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 57 pg[10.14( v 42'48 (0'0,42'48] local-lis/les=56/57 n=0 ec=56/41 lis/c=41/41 les/c/f=42/42/0 sis=56) [1] r=0 lpr=56 pi=[41,56)/1 crt=42'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:29:16 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 57 pg[10.15( v 42'48 (0'0,42'48] local-lis/les=56/57 n=0 ec=56/41 lis/c=41/41 les/c/f=42/42/0 sis=56) [1] r=0 lpr=56 pi=[41,56)/1 crt=42'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:29:16 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 57 pg[10.17( v 42'48 (0'0,42'48] local-lis/les=56/57 n=0 ec=56/41 lis/c=41/41 les/c/f=42/42/0 sis=56) [1] r=0 lpr=56 pi=[41,56)/1 crt=42'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:29:16 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 57 pg[10.2( v 42'48 (0'0,42'48] local-lis/les=56/57 n=1 ec=56/41 lis/c=41/41 les/c/f=42/42/0 sis=56) [1] r=0 lpr=56 pi=[41,56)/1 crt=42'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:29:16 compute-1 ceph-osd[77497]: log_channel(cluster) log [DBG] : 10.1b scrub starts
Nov 24 09:29:16 compute-1 ceph-osd[77497]: log_channel(cluster) log [DBG] : 10.1b scrub ok
Nov 24 09:29:17 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:29:17 : epoch 69242567 : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 09:29:17 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:29:17 : epoch 69242567 : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 09:29:17 compute-1 ceph-mon[80009]: osdmap e57: 3 total, 3 up, 3 in
Nov 24 09:29:17 compute-1 ceph-mon[80009]: pgmap v46: 322 pgs: 124 unknown, 198 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Nov 24 09:29:17 compute-1 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 24 09:29:17 compute-1 ceph-mon[80009]: 10.1b scrub starts
Nov 24 09:29:17 compute-1 ceph-mon[80009]: 10.1b scrub ok
Nov 24 09:29:17 compute-1 ceph-mon[80009]: 8.16 scrub starts
Nov 24 09:29:17 compute-1 ceph-mon[80009]: 8.16 scrub ok
Nov 24 09:29:17 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e58 e58: 3 total, 3 up, 3 in
Nov 24 09:29:17 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 58 pg[12.0( v 57'56 (0'0,57'56] local-lis/les=49/50 n=8 ec=49/49 lis/c=49/49 les/c/f=50/50/0 sis=58 pruub=10.413920403s) [1] r=0 lpr=58 pi=[49,58)/1 crt=57'56 lcod 57'55 mlcod 57'55 active pruub 174.163131714s@ mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:29:17 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 58 pg[12.0( v 57'56 lc 0'0 (0'0,57'56] local-lis/les=49/50 n=0 ec=49/49 lis/c=49/49 les/c/f=50/50/0 sis=58 pruub=10.413920403s) [1] r=0 lpr=58 pi=[49,58)/1 crt=57'56 lcod 57'55 mlcod 0'0 unknown pruub 174.163131714s@ mbc={}] state<Start>: transitioning to Primary
Nov 24 09:29:17 compute-1 ceph-osd[77497]: log_channel(cluster) log [DBG] : 10.1e scrub starts
Nov 24 09:29:17 compute-1 ceph-osd[77497]: log_channel(cluster) log [DBG] : 10.1e scrub ok
Nov 24 09:29:17 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e58 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 09:29:18 compute-1 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num_actual", "val": "32"}]': finished
Nov 24 09:29:18 compute-1 ceph-mon[80009]: osdmap e58: 3 total, 3 up, 3 in
Nov 24 09:29:18 compute-1 ceph-mon[80009]: 10.1e scrub starts
Nov 24 09:29:18 compute-1 ceph-mon[80009]: 10.1e scrub ok
Nov 24 09:29:18 compute-1 ceph-mon[80009]: 8.14 scrub starts
Nov 24 09:29:18 compute-1 ceph-mon[80009]: 8.14 scrub ok
Nov 24 09:29:18 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e59 e59: 3 total, 3 up, 3 in
Nov 24 09:29:18 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 59 pg[12.11( v 57'56 lc 0'0 (0'0,57'56] local-lis/les=49/50 n=0 ec=58/49 lis/c=49/49 les/c/f=50/50/0 sis=58) [1] r=0 lpr=58 pi=[49,58)/1 crt=57'56 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:29:18 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 59 pg[12.10( v 57'56 lc 0'0 (0'0,57'56] local-lis/les=49/50 n=0 ec=58/49 lis/c=49/49 les/c/f=50/50/0 sis=58) [1] r=0 lpr=58 pi=[49,58)/1 crt=57'56 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:29:18 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 59 pg[12.13( v 57'56 lc 0'0 (0'0,57'56] local-lis/les=49/50 n=0 ec=58/49 lis/c=49/49 les/c/f=50/50/0 sis=58) [1] r=0 lpr=58 pi=[49,58)/1 crt=57'56 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:29:18 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 59 pg[12.15( v 57'56 lc 0'0 (0'0,57'56] local-lis/les=49/50 n=0 ec=58/49 lis/c=49/49 les/c/f=50/50/0 sis=58) [1] r=0 lpr=58 pi=[49,58)/1 crt=57'56 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:29:18 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 59 pg[12.4( v 57'56 lc 0'0 (0'0,57'56] local-lis/les=49/50 n=1 ec=58/49 lis/c=49/49 les/c/f=50/50/0 sis=58) [1] r=0 lpr=58 pi=[49,58)/1 crt=57'56 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:29:18 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 59 pg[12.7( v 57'56 lc 0'0 (0'0,57'56] local-lis/les=49/50 n=1 ec=58/49 lis/c=49/49 les/c/f=50/50/0 sis=58) [1] r=0 lpr=58 pi=[49,58)/1 crt=57'56 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:29:18 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 59 pg[12.6( v 57'56 lc 0'0 (0'0,57'56] local-lis/les=49/50 n=1 ec=58/49 lis/c=49/49 les/c/f=50/50/0 sis=58) [1] r=0 lpr=58 pi=[49,58)/1 crt=57'56 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:29:18 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 59 pg[12.9( v 57'56 lc 0'0 (0'0,57'56] local-lis/les=49/50 n=0 ec=58/49 lis/c=49/49 les/c/f=50/50/0 sis=58) [1] r=0 lpr=58 pi=[49,58)/1 crt=57'56 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:29:18 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 59 pg[12.8( v 57'56 lc 0'0 (0'0,57'56] local-lis/les=49/50 n=1 ec=58/49 lis/c=49/49 les/c/f=50/50/0 sis=58) [1] r=0 lpr=58 pi=[49,58)/1 crt=57'56 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:29:18 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 59 pg[12.12( v 57'56 lc 0'0 (0'0,57'56] local-lis/les=49/50 n=0 ec=58/49 lis/c=49/49 les/c/f=50/50/0 sis=58) [1] r=0 lpr=58 pi=[49,58)/1 crt=57'56 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:29:18 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 59 pg[12.a( v 57'56 lc 0'0 (0'0,57'56] local-lis/les=49/50 n=0 ec=58/49 lis/c=49/49 les/c/f=50/50/0 sis=58) [1] r=0 lpr=58 pi=[49,58)/1 crt=57'56 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:29:18 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 59 pg[12.c( v 57'56 lc 0'0 (0'0,57'56] local-lis/les=49/50 n=0 ec=58/49 lis/c=49/49 les/c/f=50/50/0 sis=58) [1] r=0 lpr=58 pi=[49,58)/1 crt=57'56 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:29:18 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 59 pg[12.f( v 57'56 lc 0'0 (0'0,57'56] local-lis/les=49/50 n=0 ec=58/49 lis/c=49/49 les/c/f=50/50/0 sis=58) [1] r=0 lpr=58 pi=[49,58)/1 crt=57'56 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:29:18 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 59 pg[12.e( v 57'56 lc 0'0 (0'0,57'56] local-lis/les=49/50 n=0 ec=58/49 lis/c=49/49 les/c/f=50/50/0 sis=58) [1] r=0 lpr=58 pi=[49,58)/1 crt=57'56 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:29:18 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 59 pg[12.b( v 57'56 lc 0'0 (0'0,57'56] local-lis/les=49/50 n=0 ec=58/49 lis/c=49/49 les/c/f=50/50/0 sis=58) [1] r=0 lpr=58 pi=[49,58)/1 crt=57'56 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:29:18 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 59 pg[12.d( v 57'56 lc 0'0 (0'0,57'56] local-lis/les=49/50 n=0 ec=58/49 lis/c=49/49 les/c/f=50/50/0 sis=58) [1] r=0 lpr=58 pi=[49,58)/1 crt=57'56 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:29:18 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 59 pg[12.5( v 57'56 lc 0'0 (0'0,57'56] local-lis/les=49/50 n=1 ec=58/49 lis/c=49/49 les/c/f=50/50/0 sis=58) [1] r=0 lpr=58 pi=[49,58)/1 crt=57'56 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:29:18 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 59 pg[12.2( v 57'56 lc 0'0 (0'0,57'56] local-lis/les=49/50 n=1 ec=58/49 lis/c=49/49 les/c/f=50/50/0 sis=58) [1] r=0 lpr=58 pi=[49,58)/1 crt=57'56 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:29:18 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 59 pg[12.3( v 57'56 lc 0'0 (0'0,57'56] local-lis/les=49/50 n=1 ec=58/49 lis/c=49/49 les/c/f=50/50/0 sis=58) [1] r=0 lpr=58 pi=[49,58)/1 crt=57'56 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:29:18 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 59 pg[12.1f( v 57'56 lc 0'0 (0'0,57'56] local-lis/les=49/50 n=0 ec=58/49 lis/c=49/49 les/c/f=50/50/0 sis=58) [1] r=0 lpr=58 pi=[49,58)/1 crt=57'56 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:29:18 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 59 pg[12.1e( v 57'56 lc 0'0 (0'0,57'56] local-lis/les=49/50 n=0 ec=58/49 lis/c=49/49 les/c/f=50/50/0 sis=58) [1] r=0 lpr=58 pi=[49,58)/1 crt=57'56 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:29:18 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 59 pg[12.1c( v 57'56 lc 0'0 (0'0,57'56] local-lis/les=49/50 n=0 ec=58/49 lis/c=49/49 les/c/f=50/50/0 sis=58) [1] r=0 lpr=58 pi=[49,58)/1 crt=57'56 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:29:18 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 59 pg[12.1a( v 57'56 lc 0'0 (0'0,57'56] local-lis/les=49/50 n=0 ec=58/49 lis/c=49/49 les/c/f=50/50/0 sis=58) [1] r=0 lpr=58 pi=[49,58)/1 crt=57'56 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:29:18 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 59 pg[12.1b( v 57'56 lc 0'0 (0'0,57'56] local-lis/les=49/50 n=0 ec=58/49 lis/c=49/49 les/c/f=50/50/0 sis=58) [1] r=0 lpr=58 pi=[49,58)/1 crt=57'56 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:29:18 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 59 pg[12.19( v 57'56 lc 0'0 (0'0,57'56] local-lis/les=49/50 n=0 ec=58/49 lis/c=49/49 les/c/f=50/50/0 sis=58) [1] r=0 lpr=58 pi=[49,58)/1 crt=57'56 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:29:18 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 59 pg[12.18( v 57'56 lc 0'0 (0'0,57'56] local-lis/les=49/50 n=0 ec=58/49 lis/c=49/49 les/c/f=50/50/0 sis=58) [1] r=0 lpr=58 pi=[49,58)/1 crt=57'56 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:29:18 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 59 pg[12.16( v 57'56 lc 0'0 (0'0,57'56] local-lis/les=49/50 n=0 ec=58/49 lis/c=49/49 les/c/f=50/50/0 sis=58) [1] r=0 lpr=58 pi=[49,58)/1 crt=57'56 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:29:18 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 59 pg[12.17( v 57'56 lc 0'0 (0'0,57'56] local-lis/les=49/50 n=0 ec=58/49 lis/c=49/49 les/c/f=50/50/0 sis=58) [1] r=0 lpr=58 pi=[49,58)/1 crt=57'56 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:29:18 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 59 pg[12.14( v 57'56 lc 0'0 (0'0,57'56] local-lis/les=49/50 n=0 ec=58/49 lis/c=49/49 les/c/f=50/50/0 sis=58) [1] r=0 lpr=58 pi=[49,58)/1 crt=57'56 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:29:18 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 59 pg[12.1d( v 57'56 lc 0'0 (0'0,57'56] local-lis/les=49/50 n=0 ec=58/49 lis/c=49/49 les/c/f=50/50/0 sis=58) [1] r=0 lpr=58 pi=[49,58)/1 crt=57'56 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:29:18 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 59 pg[12.1( v 57'56 (0'0,57'56] local-lis/les=49/50 n=1 ec=58/49 lis/c=49/49 les/c/f=50/50/0 sis=58) [1] r=0 lpr=58 pi=[49,58)/1 crt=57'56 lcod 0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:29:18 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 59 pg[12.11( v 57'56 (0'0,57'56] local-lis/les=58/59 n=0 ec=58/49 lis/c=49/49 les/c/f=50/50/0 sis=58) [1] r=0 lpr=58 pi=[49,58)/1 crt=57'56 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:29:18 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 59 pg[12.10( v 57'56 (0'0,57'56] local-lis/les=58/59 n=0 ec=58/49 lis/c=49/49 les/c/f=50/50/0 sis=58) [1] r=0 lpr=58 pi=[49,58)/1 crt=57'56 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:29:18 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 59 pg[12.15( v 57'56 (0'0,57'56] local-lis/les=58/59 n=0 ec=58/49 lis/c=49/49 les/c/f=50/50/0 sis=58) [1] r=0 lpr=58 pi=[49,58)/1 crt=57'56 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:29:18 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 59 pg[12.13( v 57'56 (0'0,57'56] local-lis/les=58/59 n=0 ec=58/49 lis/c=49/49 les/c/f=50/50/0 sis=58) [1] r=0 lpr=58 pi=[49,58)/1 crt=57'56 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:29:18 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 59 pg[12.4( v 57'56 (0'0,57'56] local-lis/les=58/59 n=1 ec=58/49 lis/c=49/49 les/c/f=50/50/0 sis=58) [1] r=0 lpr=58 pi=[49,58)/1 crt=57'56 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:29:18 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 59 pg[12.6( v 57'56 (0'0,57'56] local-lis/les=58/59 n=1 ec=58/49 lis/c=49/49 les/c/f=50/50/0 sis=58) [1] r=0 lpr=58 pi=[49,58)/1 crt=57'56 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:29:18 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 59 pg[12.7( v 57'56 (0'0,57'56] local-lis/les=58/59 n=1 ec=58/49 lis/c=49/49 les/c/f=50/50/0 sis=58) [1] r=0 lpr=58 pi=[49,58)/1 crt=57'56 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:29:18 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 59 pg[12.9( v 57'56 (0'0,57'56] local-lis/les=58/59 n=0 ec=58/49 lis/c=49/49 les/c/f=50/50/0 sis=58) [1] r=0 lpr=58 pi=[49,58)/1 crt=57'56 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:29:18 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 59 pg[12.8( v 57'56 (0'0,57'56] local-lis/les=58/59 n=1 ec=58/49 lis/c=49/49 les/c/f=50/50/0 sis=58) [1] r=0 lpr=58 pi=[49,58)/1 crt=57'56 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:29:18 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 59 pg[12.a( v 57'56 (0'0,57'56] local-lis/les=58/59 n=0 ec=58/49 lis/c=49/49 les/c/f=50/50/0 sis=58) [1] r=0 lpr=58 pi=[49,58)/1 crt=57'56 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:29:18 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 59 pg[12.f( v 57'56 (0'0,57'56] local-lis/les=58/59 n=0 ec=58/49 lis/c=49/49 les/c/f=50/50/0 sis=58) [1] r=0 lpr=58 pi=[49,58)/1 crt=57'56 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:29:18 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 59 pg[12.c( v 57'56 (0'0,57'56] local-lis/les=58/59 n=0 ec=58/49 lis/c=49/49 les/c/f=50/50/0 sis=58) [1] r=0 lpr=58 pi=[49,58)/1 crt=57'56 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:29:18 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 59 pg[12.e( v 57'56 (0'0,57'56] local-lis/les=58/59 n=0 ec=58/49 lis/c=49/49 les/c/f=50/50/0 sis=58) [1] r=0 lpr=58 pi=[49,58)/1 crt=57'56 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:29:18 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 59 pg[12.d( v 57'56 (0'0,57'56] local-lis/les=58/59 n=0 ec=58/49 lis/c=49/49 les/c/f=50/50/0 sis=58) [1] r=0 lpr=58 pi=[49,58)/1 crt=57'56 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:29:18 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 59 pg[12.b( v 57'56 (0'0,57'56] local-lis/les=58/59 n=0 ec=58/49 lis/c=49/49 les/c/f=50/50/0 sis=58) [1] r=0 lpr=58 pi=[49,58)/1 crt=57'56 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:29:18 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 59 pg[12.5( v 57'56 (0'0,57'56] local-lis/les=58/59 n=1 ec=58/49 lis/c=49/49 les/c/f=50/50/0 sis=58) [1] r=0 lpr=58 pi=[49,58)/1 crt=57'56 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:29:18 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 59 pg[12.2( v 57'56 (0'0,57'56] local-lis/les=58/59 n=1 ec=58/49 lis/c=49/49 les/c/f=50/50/0 sis=58) [1] r=0 lpr=58 pi=[49,58)/1 crt=57'56 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:29:18 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 59 pg[12.3( v 57'56 (0'0,57'56] local-lis/les=58/59 n=1 ec=58/49 lis/c=49/49 les/c/f=50/50/0 sis=58) [1] r=0 lpr=58 pi=[49,58)/1 crt=57'56 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:29:18 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 59 pg[12.0( v 57'56 (0'0,57'56] local-lis/les=58/59 n=0 ec=49/49 lis/c=49/49 les/c/f=50/50/0 sis=58) [1] r=0 lpr=58 pi=[49,58)/1 crt=57'56 lcod 57'55 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:29:18 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 59 pg[12.1f( v 57'56 (0'0,57'56] local-lis/les=58/59 n=0 ec=58/49 lis/c=49/49 les/c/f=50/50/0 sis=58) [1] r=0 lpr=58 pi=[49,58)/1 crt=57'56 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:29:18 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 59 pg[12.12( v 57'56 (0'0,57'56] local-lis/les=58/59 n=0 ec=58/49 lis/c=49/49 les/c/f=50/50/0 sis=58) [1] r=0 lpr=58 pi=[49,58)/1 crt=57'56 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:29:18 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 59 pg[12.1a( v 57'56 (0'0,57'56] local-lis/les=58/59 n=0 ec=58/49 lis/c=49/49 les/c/f=50/50/0 sis=58) [1] r=0 lpr=58 pi=[49,58)/1 crt=57'56 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:29:18 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 59 pg[12.1b( v 57'56 (0'0,57'56] local-lis/les=58/59 n=0 ec=58/49 lis/c=49/49 les/c/f=50/50/0 sis=58) [1] r=0 lpr=58 pi=[49,58)/1 crt=57'56 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:29:18 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 59 pg[12.1c( v 57'56 (0'0,57'56] local-lis/les=58/59 n=0 ec=58/49 lis/c=49/49 les/c/f=50/50/0 sis=58) [1] r=0 lpr=58 pi=[49,58)/1 crt=57'56 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:29:18 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 59 pg[12.1e( v 57'56 (0'0,57'56] local-lis/les=58/59 n=0 ec=58/49 lis/c=49/49 les/c/f=50/50/0 sis=58) [1] r=0 lpr=58 pi=[49,58)/1 crt=57'56 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:29:18 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 59 pg[12.18( v 57'56 (0'0,57'56] local-lis/les=58/59 n=0 ec=58/49 lis/c=49/49 les/c/f=50/50/0 sis=58) [1] r=0 lpr=58 pi=[49,58)/1 crt=57'56 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:29:18 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 59 pg[12.19( v 57'56 (0'0,57'56] local-lis/les=58/59 n=0 ec=58/49 lis/c=49/49 les/c/f=50/50/0 sis=58) [1] r=0 lpr=58 pi=[49,58)/1 crt=57'56 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:29:18 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 59 pg[12.16( v 57'56 (0'0,57'56] local-lis/les=58/59 n=0 ec=58/49 lis/c=49/49 les/c/f=50/50/0 sis=58) [1] r=0 lpr=58 pi=[49,58)/1 crt=57'56 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:29:18 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 59 pg[12.17( v 57'56 (0'0,57'56] local-lis/les=58/59 n=0 ec=58/49 lis/c=49/49 les/c/f=50/50/0 sis=58) [1] r=0 lpr=58 pi=[49,58)/1 crt=57'56 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:29:18 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 59 pg[12.1d( v 57'56 (0'0,57'56] local-lis/les=58/59 n=0 ec=58/49 lis/c=49/49 les/c/f=50/50/0 sis=58) [1] r=0 lpr=58 pi=[49,58)/1 crt=57'56 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:29:18 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 59 pg[12.14( v 57'56 (0'0,57'56] local-lis/les=58/59 n=0 ec=58/49 lis/c=49/49 les/c/f=50/50/0 sis=58) [1] r=0 lpr=58 pi=[49,58)/1 crt=57'56 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:29:18 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 59 pg[12.1( v 57'56 (0'0,57'56] local-lis/les=58/59 n=1 ec=58/49 lis/c=49/49 les/c/f=50/50/0 sis=58) [1] r=0 lpr=58 pi=[49,58)/1 crt=57'56 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:29:18 compute-1 ceph-osd[77497]: log_channel(cluster) log [DBG] : 10.1c scrub starts
Nov 24 09:29:18 compute-1 ceph-osd[77497]: log_channel(cluster) log [DBG] : 10.1c scrub ok
Nov 24 09:29:19 compute-1 ceph-osd[77497]: log_channel(cluster) log [DBG] : 10.10 deep-scrub starts
Nov 24 09:29:19 compute-1 ceph-osd[77497]: log_channel(cluster) log [DBG] : 10.10 deep-scrub ok
Nov 24 09:29:19 compute-1 ceph-mon[80009]: osdmap e59: 3 total, 3 up, 3 in
Nov 24 09:29:19 compute-1 ceph-mon[80009]: pgmap v49: 353 pgs: 31 unknown, 322 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Nov 24 09:29:19 compute-1 ceph-mon[80009]: 10.1c scrub starts
Nov 24 09:29:19 compute-1 ceph-mon[80009]: 10.1c scrub ok
Nov 24 09:29:19 compute-1 ceph-mon[80009]: 9.11 scrub starts
Nov 24 09:29:19 compute-1 ceph-mon[80009]: 9.11 scrub ok
Nov 24 09:29:20 compute-1 ceph-osd[77497]: log_channel(cluster) log [DBG] : 10.12 scrub starts
Nov 24 09:29:20 compute-1 ceph-osd[77497]: log_channel(cluster) log [DBG] : 10.12 scrub ok
Nov 24 09:29:20 compute-1 ceph-mon[80009]: 10.10 deep-scrub starts
Nov 24 09:29:20 compute-1 ceph-mon[80009]: 10.10 deep-scrub ok
Nov 24 09:29:20 compute-1 ceph-mon[80009]: 8.15 scrub starts
Nov 24 09:29:20 compute-1 ceph-mon[80009]: 8.15 scrub ok
Nov 24 09:29:20 compute-1 ceph-mon[80009]: 10.12 scrub starts
Nov 24 09:29:20 compute-1 ceph-mon[80009]: 10.12 scrub ok
Nov 24 09:29:20 compute-1 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:29:21 compute-1 ceph-osd[77497]: log_channel(cluster) log [DBG] : 10.1d scrub starts
Nov 24 09:29:21 compute-1 ceph-osd[77497]: log_channel(cluster) log [DBG] : 10.1d scrub ok
Nov 24 09:29:21 compute-1 ceph-mon[80009]: pgmap v50: 353 pgs: 31 unknown, 322 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Nov 24 09:29:21 compute-1 ceph-mon[80009]: 8.10 scrub starts
Nov 24 09:29:21 compute-1 ceph-mon[80009]: 8.10 scrub ok
Nov 24 09:29:21 compute-1 ceph-mon[80009]: 10.1d scrub starts
Nov 24 09:29:21 compute-1 ceph-mon[80009]: 10.1d scrub ok
Nov 24 09:29:22 compute-1 ceph-osd[77497]: log_channel(cluster) log [DBG] : 10.5 scrub starts
Nov 24 09:29:22 compute-1 ceph-osd[77497]: log_channel(cluster) log [DBG] : 10.5 scrub ok
Nov 24 09:29:22 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e59 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 09:29:22 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e60 e60: 3 total, 3 up, 3 in
Nov 24 09:29:22 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 60 pg[12.11( v 57'56 (0'0,57'56] local-lis/les=58/59 n=0 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=60 pruub=11.422721863s) [2] r=-1 lpr=60 pi=[58,60)/1 crt=57'56 lcod 0'0 mlcod 0'0 active pruub 180.777389526s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:29:22 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 60 pg[12.10( v 59'59 (0'0,59'59] local-lis/les=58/59 n=0 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=60 pruub=11.425310135s) [0] r=-1 lpr=60 pi=[58,60)/1 crt=59'57 lcod 59'58 mlcod 59'58 active pruub 180.779983521s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:29:22 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 60 pg[12.11( v 57'56 (0'0,57'56] local-lis/les=58/59 n=0 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=60 pruub=11.422682762s) [2] r=-1 lpr=60 pi=[58,60)/1 crt=57'56 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 180.777389526s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 09:29:22 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 60 pg[12.10( v 59'59 (0'0,59'59] local-lis/les=58/59 n=0 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=60 pruub=11.425265312s) [0] r=-1 lpr=60 pi=[58,60)/1 crt=59'57 lcod 59'58 mlcod 0'0 unknown NOTIFY pruub 180.779983521s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 09:29:22 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 60 pg[10.15( v 59'51 (0'0,59'51] local-lis/les=56/57 n=0 ec=56/41 lis/c=56/56 les/c/f=57/57/0 sis=60 pruub=9.341501236s) [0] r=-1 lpr=60 pi=[56,60)/1 crt=42'48 lcod 59'50 mlcod 59'50 active pruub 178.696243286s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:29:22 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 60 pg[10.15( v 59'51 (0'0,59'51] local-lis/les=56/57 n=0 ec=56/41 lis/c=56/56 les/c/f=57/57/0 sis=60 pruub=9.341470718s) [0] r=-1 lpr=60 pi=[56,60)/1 crt=42'48 lcod 59'50 mlcod 0'0 unknown NOTIFY pruub 178.696243286s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 09:29:22 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 60 pg[12.13( v 57'56 (0'0,57'56] local-lis/les=58/59 n=0 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=60 pruub=11.425135612s) [2] r=-1 lpr=60 pi=[58,60)/1 crt=57'56 lcod 0'0 mlcod 0'0 active pruub 180.780044556s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:29:22 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 60 pg[12.13( v 57'56 (0'0,57'56] local-lis/les=58/59 n=0 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=60 pruub=11.425113678s) [2] r=-1 lpr=60 pi=[58,60)/1 crt=57'56 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 180.780044556s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 09:29:22 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 60 pg[10.14( v 59'51 (0'0,59'51] local-lis/les=56/57 n=0 ec=56/41 lis/c=56/56 les/c/f=57/57/0 sis=60 pruub=9.341238976s) [0] r=-1 lpr=60 pi=[56,60)/1 crt=42'48 lcod 59'50 mlcod 59'50 active pruub 178.696228027s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:29:22 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 60 pg[12.12( v 57'56 (0'0,57'56] local-lis/les=58/59 n=0 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=60 pruub=11.425786018s) [0] r=-1 lpr=60 pi=[58,60)/1 crt=57'56 lcod 0'0 mlcod 0'0 active pruub 180.780792236s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:29:22 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 60 pg[10.14( v 59'51 (0'0,59'51] local-lis/les=56/57 n=0 ec=56/41 lis/c=56/56 les/c/f=57/57/0 sis=60 pruub=9.341207504s) [0] r=-1 lpr=60 pi=[56,60)/1 crt=42'48 lcod 59'50 mlcod 0'0 unknown NOTIFY pruub 178.696228027s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 09:29:22 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 60 pg[12.12( v 57'56 (0'0,57'56] local-lis/les=58/59 n=0 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=60 pruub=11.425769806s) [0] r=-1 lpr=60 pi=[58,60)/1 crt=57'56 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 180.780792236s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 09:29:22 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 60 pg[10.13( v 42'48 (0'0,42'48] local-lis/les=56/57 n=0 ec=56/41 lis/c=56/56 les/c/f=57/57/0 sis=60 pruub=9.341078758s) [0] r=-1 lpr=60 pi=[56,60)/1 crt=42'48 lcod 0'0 mlcod 0'0 active pruub 178.696151733s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:29:22 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 60 pg[12.4( v 57'56 (0'0,57'56] local-lis/les=58/59 n=1 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=60 pruub=11.424939156s) [2] r=-1 lpr=60 pi=[58,60)/1 crt=57'56 lcod 0'0 mlcod 0'0 active pruub 180.780136108s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:29:22 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 60 pg[10.13( v 42'48 (0'0,42'48] local-lis/les=56/57 n=0 ec=56/41 lis/c=56/56 les/c/f=57/57/0 sis=60 pruub=9.341030121s) [0] r=-1 lpr=60 pi=[56,60)/1 crt=42'48 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 178.696151733s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 09:29:22 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 60 pg[12.4( v 57'56 (0'0,57'56] local-lis/les=58/59 n=1 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=60 pruub=11.424919128s) [2] r=-1 lpr=60 pi=[58,60)/1 crt=57'56 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 180.780136108s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 09:29:22 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 60 pg[10.2( v 42'48 (0'0,42'48] local-lis/les=56/57 n=1 ec=56/41 lis/c=56/56 les/c/f=57/57/0 sis=60 pruub=9.340937614s) [0] r=-1 lpr=60 pi=[56,60)/1 crt=42'48 lcod 0'0 mlcod 0'0 active pruub 178.696228027s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:29:22 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 60 pg[10.2( v 42'48 (0'0,42'48] local-lis/les=56/57 n=1 ec=56/41 lis/c=56/56 les/c/f=57/57/0 sis=60 pruub=9.340916634s) [0] r=-1 lpr=60 pi=[56,60)/1 crt=42'48 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 178.696228027s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 09:29:22 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 60 pg[12.7( v 57'56 (0'0,57'56] local-lis/les=58/59 n=1 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=60 pruub=11.424738884s) [2] r=-1 lpr=60 pi=[58,60)/1 crt=57'56 lcod 0'0 mlcod 0'0 active pruub 180.780151367s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:29:22 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 60 pg[12.7( v 57'56 (0'0,57'56] local-lis/les=58/59 n=1 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=60 pruub=11.424723625s) [2] r=-1 lpr=60 pi=[58,60)/1 crt=57'56 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 180.780151367s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 09:29:22 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 60 pg[12.6( v 57'56 (0'0,57'56] local-lis/les=58/59 n=1 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=60 pruub=11.424711227s) [0] r=-1 lpr=60 pi=[58,60)/1 crt=57'56 lcod 0'0 mlcod 0'0 active pruub 180.780151367s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:29:22 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 60 pg[12.6( v 57'56 (0'0,57'56] local-lis/les=58/59 n=1 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=60 pruub=11.424695969s) [0] r=-1 lpr=60 pi=[58,60)/1 crt=57'56 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 180.780151367s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 09:29:22 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 60 pg[10.f( v 42'48 (0'0,42'48] local-lis/les=56/57 n=0 ec=56/41 lis/c=56/56 les/c/f=57/57/0 sis=60 pruub=9.340555191s) [2] r=-1 lpr=60 pi=[56,60)/1 crt=42'48 lcod 0'0 mlcod 0'0 active pruub 178.696151733s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:29:22 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 60 pg[10.f( v 42'48 (0'0,42'48] local-lis/les=56/57 n=0 ec=56/41 lis/c=56/56 les/c/f=57/57/0 sis=60 pruub=9.340542793s) [2] r=-1 lpr=60 pi=[56,60)/1 crt=42'48 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 178.696151733s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 09:29:22 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 60 pg[12.9( v 57'56 (0'0,57'56] local-lis/les=58/59 n=0 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=60 pruub=11.424826622s) [2] r=-1 lpr=60 pi=[58,60)/1 crt=57'56 lcod 0'0 mlcod 0'0 active pruub 180.780471802s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:29:22 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 60 pg[12.9( v 57'56 (0'0,57'56] local-lis/les=58/59 n=0 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=60 pruub=11.424814224s) [2] r=-1 lpr=60 pi=[58,60)/1 crt=57'56 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 180.780471802s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 09:29:22 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 60 pg[12.8( v 57'56 (0'0,57'56] local-lis/les=58/59 n=1 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=60 pruub=11.424716949s) [0] r=-1 lpr=60 pi=[58,60)/1 crt=57'56 lcod 0'0 mlcod 0'0 active pruub 180.780517578s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:29:22 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 60 pg[12.8( v 57'56 (0'0,57'56] local-lis/les=58/59 n=1 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=60 pruub=11.424701691s) [0] r=-1 lpr=60 pi=[58,60)/1 crt=57'56 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 180.780517578s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 09:29:22 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 60 pg[12.a( v 57'56 (0'0,57'56] local-lis/les=58/59 n=0 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=60 pruub=11.424571037s) [0] r=-1 lpr=60 pi=[58,60)/1 crt=57'56 lcod 0'0 mlcod 0'0 active pruub 180.780532837s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:29:22 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 60 pg[12.c( v 57'56 (0'0,57'56] local-lis/les=58/59 n=0 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=60 pruub=11.424633980s) [0] r=-1 lpr=60 pi=[58,60)/1 crt=57'56 lcod 0'0 mlcod 0'0 active pruub 180.780593872s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:29:22 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 60 pg[12.a( v 57'56 (0'0,57'56] local-lis/les=58/59 n=0 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=60 pruub=11.424552917s) [0] r=-1 lpr=60 pi=[58,60)/1 crt=57'56 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 180.780532837s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 09:29:22 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 60 pg[10.1( v 42'48 (0'0,42'48] local-lis/les=56/57 n=1 ec=56/41 lis/c=56/56 les/c/f=57/57/0 sis=60 pruub=9.340095520s) [2] r=-1 lpr=60 pi=[56,60)/1 crt=42'48 lcod 0'0 mlcod 0'0 active pruub 178.696090698s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:29:22 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 60 pg[12.c( v 57'56 (0'0,57'56] local-lis/les=58/59 n=0 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=60 pruub=11.424615860s) [0] r=-1 lpr=60 pi=[58,60)/1 crt=57'56 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 180.780593872s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 09:29:22 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 60 pg[10.1( v 42'48 (0'0,42'48] local-lis/les=56/57 n=1 ec=56/41 lis/c=56/56 les/c/f=57/57/0 sis=60 pruub=9.340083122s) [2] r=-1 lpr=60 pi=[56,60)/1 crt=42'48 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 178.696090698s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 09:29:22 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 60 pg[12.b( v 57'56 (0'0,57'56] local-lis/les=58/59 n=0 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=60 pruub=11.424512863s) [0] r=-1 lpr=60 pi=[58,60)/1 crt=57'56 lcod 0'0 mlcod 0'0 active pruub 180.780670166s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:29:22 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 60 pg[10.8( v 42'48 (0'0,42'48] local-lis/les=56/57 n=1 ec=56/41 lis/c=56/56 les/c/f=57/57/0 sis=60 pruub=9.339606285s) [0] r=-1 lpr=60 pi=[56,60)/1 crt=42'48 lcod 0'0 mlcod 0'0 active pruub 178.695755005s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:29:22 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 60 pg[12.e( v 57'56 (0'0,57'56] local-lis/les=58/59 n=0 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=60 pruub=11.424450874s) [0] r=-1 lpr=60 pi=[58,60)/1 crt=57'56 lcod 0'0 mlcod 0'0 active pruub 180.780624390s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:29:22 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 60 pg[12.e( v 57'56 (0'0,57'56] local-lis/les=58/59 n=0 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=60 pruub=11.424436569s) [0] r=-1 lpr=60 pi=[58,60)/1 crt=57'56 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 180.780624390s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 09:29:22 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 60 pg[10.8( v 42'48 (0'0,42'48] local-lis/les=56/57 n=1 ec=56/41 lis/c=56/56 les/c/f=57/57/0 sis=60 pruub=9.339557648s) [0] r=-1 lpr=60 pi=[56,60)/1 crt=42'48 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 178.695755005s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 09:29:22 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 60 pg[12.b( v 57'56 (0'0,57'56] local-lis/les=58/59 n=0 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=60 pruub=11.424476624s) [0] r=-1 lpr=60 pi=[58,60)/1 crt=57'56 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 180.780670166s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 09:29:22 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 60 pg[10.3( v 59'51 (0'0,59'51] local-lis/les=56/57 n=1 ec=56/41 lis/c=56/56 les/c/f=57/57/0 sis=60 pruub=9.339503288s) [2] r=-1 lpr=60 pi=[56,60)/1 crt=42'48 lcod 59'50 mlcod 59'50 active pruub 178.695785522s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:29:22 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 60 pg[10.3( v 59'51 (0'0,59'51] local-lis/les=56/57 n=1 ec=56/41 lis/c=56/56 les/c/f=57/57/0 sis=60 pruub=9.339451790s) [2] r=-1 lpr=60 pi=[56,60)/1 crt=42'48 lcod 59'50 mlcod 0'0 unknown NOTIFY pruub 178.695785522s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 09:29:22 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 60 pg[12.2( v 57'56 (0'0,57'56] local-lis/les=58/59 n=1 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=60 pruub=11.424294472s) [2] r=-1 lpr=60 pi=[58,60)/1 crt=57'56 lcod 0'0 mlcod 0'0 active pruub 180.780670166s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:29:22 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 60 pg[10.4( v 42'48 (0'0,42'48] local-lis/les=56/57 n=1 ec=56/41 lis/c=56/56 les/c/f=57/57/0 sis=60 pruub=9.339303017s) [2] r=-1 lpr=60 pi=[56,60)/1 crt=42'48 lcod 0'0 mlcod 0'0 active pruub 178.695693970s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:29:22 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 60 pg[12.3( v 57'56 (0'0,57'56] local-lis/les=58/59 n=1 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=60 pruub=11.424245834s) [2] r=-1 lpr=60 pi=[58,60)/1 crt=57'56 lcod 0'0 mlcod 0'0 active pruub 180.780700684s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:29:22 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 60 pg[10.4( v 42'48 (0'0,42'48] local-lis/les=56/57 n=1 ec=56/41 lis/c=56/56 les/c/f=57/57/0 sis=60 pruub=9.339271545s) [2] r=-1 lpr=60 pi=[56,60)/1 crt=42'48 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 178.695693970s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 09:29:22 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 60 pg[12.3( v 57'56 (0'0,57'56] local-lis/les=58/59 n=1 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=60 pruub=11.424232483s) [2] r=-1 lpr=60 pi=[58,60)/1 crt=57'56 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 180.780700684s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 09:29:22 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 60 pg[10.5( v 42'48 (0'0,42'48] local-lis/les=56/57 n=0 ec=56/41 lis/c=56/56 les/c/f=57/57/0 sis=60 pruub=9.338710785s) [0] r=-1 lpr=60 pi=[56,60)/1 crt=42'48 lcod 0'0 mlcod 0'0 active pruub 178.695205688s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:29:22 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 60 pg[12.1e( v 57'56 (0'0,57'56] local-lis/les=58/59 n=0 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=60 pruub=11.424853325s) [2] r=-1 lpr=60 pi=[58,60)/1 crt=57'56 lcod 0'0 mlcod 0'0 active pruub 180.781356812s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:29:22 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 60 pg[12.2( v 57'56 (0'0,57'56] local-lis/les=58/59 n=1 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=60 pruub=11.424114227s) [2] r=-1 lpr=60 pi=[58,60)/1 crt=57'56 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 180.780670166s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 09:29:22 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 60 pg[12.1e( v 57'56 (0'0,57'56] local-lis/les=58/59 n=0 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=60 pruub=11.424824715s) [2] r=-1 lpr=60 pi=[58,60)/1 crt=57'56 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 180.781356812s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 09:29:22 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 60 pg[10.5( v 42'48 (0'0,42'48] local-lis/les=56/57 n=0 ec=56/41 lis/c=56/56 les/c/f=57/57/0 sis=60 pruub=9.338671684s) [0] r=-1 lpr=60 pi=[56,60)/1 crt=42'48 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 178.695205688s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 09:29:22 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 60 pg[10.18( v 42'48 (0'0,42'48] local-lis/les=56/57 n=0 ec=56/41 lis/c=56/56 les/c/f=57/57/0 sis=60 pruub=9.339028358s) [0] r=-1 lpr=60 pi=[56,60)/1 crt=42'48 lcod 0'0 mlcod 0'0 active pruub 178.695739746s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:29:22 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 60 pg[10.19( v 42'48 (0'0,42'48] local-lis/les=56/57 n=0 ec=56/41 lis/c=56/56 les/c/f=57/57/0 sis=60 pruub=9.338397026s) [0] r=-1 lpr=60 pi=[56,60)/1 crt=42'48 lcod 0'0 mlcod 0'0 active pruub 178.695144653s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:29:22 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 60 pg[10.18( v 42'48 (0'0,42'48] local-lis/les=56/57 n=0 ec=56/41 lis/c=56/56 les/c/f=57/57/0 sis=60 pruub=9.339013100s) [0] r=-1 lpr=60 pi=[56,60)/1 crt=42'48 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 178.695739746s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 09:29:22 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 60 pg[10.19( v 42'48 (0'0,42'48] local-lis/les=56/57 n=0 ec=56/41 lis/c=56/56 les/c/f=57/57/0 sis=60 pruub=9.338379860s) [0] r=-1 lpr=60 pi=[56,60)/1 crt=42'48 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 178.695144653s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 09:29:22 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 60 pg[12.1c( v 57'56 (0'0,57'56] local-lis/les=58/59 n=0 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=60 pruub=11.424505234s) [0] r=-1 lpr=60 pi=[58,60)/1 crt=57'56 lcod 0'0 mlcod 0'0 active pruub 180.781356812s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:29:22 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 60 pg[12.1c( v 57'56 (0'0,57'56] local-lis/les=58/59 n=0 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=60 pruub=11.424476624s) [0] r=-1 lpr=60 pi=[58,60)/1 crt=57'56 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 180.781356812s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 09:29:22 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 60 pg[12.1a( v 59'59 (0'0,59'59] local-lis/les=58/59 n=0 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=60 pruub=11.423853874s) [2] r=-1 lpr=60 pi=[58,60)/1 crt=59'57 lcod 59'58 mlcod 59'58 active pruub 180.780807495s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:29:22 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 60 pg[10.1e( v 42'48 (0'0,42'48] local-lis/les=56/57 n=0 ec=56/41 lis/c=56/56 les/c/f=57/57/0 sis=60 pruub=9.338031769s) [2] r=-1 lpr=60 pi=[56,60)/1 crt=42'48 lcod 0'0 mlcod 0'0 active pruub 178.695007324s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:29:22 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 60 pg[12.19( v 57'56 (0'0,57'56] local-lis/les=58/59 n=0 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=60 pruub=11.424389839s) [0] r=-1 lpr=60 pi=[58,60)/1 crt=57'56 lcod 0'0 mlcod 0'0 active pruub 180.781433105s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:29:22 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 60 pg[10.1e( v 42'48 (0'0,42'48] local-lis/les=56/57 n=0 ec=56/41 lis/c=56/56 les/c/f=57/57/0 sis=60 pruub=9.337989807s) [2] r=-1 lpr=60 pi=[56,60)/1 crt=42'48 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 178.695007324s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 09:29:22 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 60 pg[12.19( v 57'56 (0'0,57'56] local-lis/les=58/59 n=0 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=60 pruub=11.424373627s) [0] r=-1 lpr=60 pi=[58,60)/1 crt=57'56 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 180.781433105s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 09:29:22 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 60 pg[12.1a( v 59'59 (0'0,59'59] local-lis/les=58/59 n=0 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=60 pruub=11.423757553s) [2] r=-1 lpr=60 pi=[58,60)/1 crt=59'57 lcod 59'58 mlcod 0'0 unknown NOTIFY pruub 180.780807495s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 09:29:22 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 60 pg[10.10( v 42'48 (0'0,42'48] local-lis/les=56/57 n=0 ec=56/41 lis/c=56/56 les/c/f=57/57/0 sis=60 pruub=9.337868690s) [2] r=-1 lpr=60 pi=[56,60)/1 crt=42'48 lcod 0'0 mlcod 0'0 active pruub 178.695007324s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:29:22 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 60 pg[10.10( v 42'48 (0'0,42'48] local-lis/les=56/57 n=0 ec=56/41 lis/c=56/56 les/c/f=57/57/0 sis=60 pruub=9.337856293s) [2] r=-1 lpr=60 pi=[56,60)/1 crt=42'48 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 178.695007324s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 09:29:22 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 60 pg[10.11( v 42'48 (0'0,42'48] local-lis/les=56/57 n=0 ec=56/41 lis/c=56/56 les/c/f=57/57/0 sis=60 pruub=9.337847710s) [2] r=-1 lpr=60 pi=[56,60)/1 crt=42'48 lcod 0'0 mlcod 0'0 active pruub 178.695022583s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:29:22 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 60 pg[10.11( v 42'48 (0'0,42'48] local-lis/les=56/57 n=0 ec=56/41 lis/c=56/56 les/c/f=57/57/0 sis=60 pruub=9.337830544s) [2] r=-1 lpr=60 pi=[56,60)/1 crt=42'48 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 178.695022583s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 09:29:22 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 60 pg[12.17( v 57'56 (0'0,57'56] local-lis/les=58/59 n=0 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=60 pruub=11.424319267s) [2] r=-1 lpr=60 pi=[58,60)/1 crt=57'56 lcod 0'0 mlcod 0'0 active pruub 180.781524658s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:29:22 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 60 pg[10.12( v 42'48 (0'0,42'48] local-lis/les=56/57 n=0 ec=56/41 lis/c=56/56 les/c/f=57/57/0 sis=60 pruub=9.337912560s) [2] r=-1 lpr=60 pi=[56,60)/1 crt=42'48 lcod 0'0 mlcod 0'0 active pruub 178.695159912s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:29:22 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 60 pg[10.12( v 42'48 (0'0,42'48] local-lis/les=56/57 n=0 ec=56/41 lis/c=56/56 les/c/f=57/57/0 sis=60 pruub=9.337898254s) [2] r=-1 lpr=60 pi=[56,60)/1 crt=42'48 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 178.695159912s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 09:29:22 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 60 pg[12.17( v 57'56 (0'0,57'56] local-lis/les=58/59 n=0 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=60 pruub=11.424275398s) [2] r=-1 lpr=60 pi=[58,60)/1 crt=57'56 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 180.781524658s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 09:29:22 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 60 pg[12.18( v 57'56 (0'0,57'56] local-lis/les=58/59 n=0 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=60 pruub=11.424061775s) [2] r=-1 lpr=60 pi=[58,60)/1 crt=57'56 lcod 0'0 mlcod 0'0 active pruub 180.781417847s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:29:22 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 60 pg[12.1d( v 57'56 (0'0,57'56] local-lis/les=58/59 n=0 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=60 pruub=11.424158096s) [2] r=-1 lpr=60 pi=[58,60)/1 crt=57'56 lcod 0'0 mlcod 0'0 active pruub 180.781539917s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:29:22 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 60 pg[10.1b( v 42'48 (0'0,42'48] local-lis/les=56/57 n=0 ec=56/41 lis/c=56/56 les/c/f=57/57/0 sis=60 pruub=9.332690239s) [0] r=-1 lpr=60 pi=[56,60)/1 crt=42'48 lcod 0'0 mlcod 0'0 active pruub 178.690093994s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:29:22 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 60 pg[12.1d( v 57'56 (0'0,57'56] local-lis/les=58/59 n=0 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=60 pruub=11.424140930s) [2] r=-1 lpr=60 pi=[58,60)/1 crt=57'56 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 180.781539917s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 09:29:22 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 60 pg[12.18( v 57'56 (0'0,57'56] local-lis/les=58/59 n=0 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=60 pruub=11.424025536s) [2] r=-1 lpr=60 pi=[58,60)/1 crt=57'56 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 180.781417847s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 09:29:22 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 60 pg[10.1b( v 42'48 (0'0,42'48] local-lis/les=56/57 n=0 ec=56/41 lis/c=56/56 les/c/f=57/57/0 sis=60 pruub=9.332671165s) [0] r=-1 lpr=60 pi=[56,60)/1 crt=42'48 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 178.690093994s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 09:29:22 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 60 pg[8.14( empty local-lis/les=0/0 n=0 ec=54/37 lis/c=54/54 les/c/f=55/55/0 sis=60) [1] r=0 lpr=60 pi=[54,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:29:22 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 60 pg[11.14( empty local-lis/les=0/0 n=0 ec=56/43 lis/c=56/56 les/c/f=57/57/0 sis=60) [1] r=0 lpr=60 pi=[56,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:29:22 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 60 pg[8.17( empty local-lis/les=0/0 n=0 ec=54/37 lis/c=54/54 les/c/f=55/55/0 sis=60) [1] r=0 lpr=60 pi=[54,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:29:22 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 60 pg[8.10( empty local-lis/les=0/0 n=0 ec=54/37 lis/c=54/54 les/c/f=55/55/0 sis=60) [1] r=0 lpr=60 pi=[54,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:29:22 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 60 pg[11.12( empty local-lis/les=0/0 n=0 ec=56/43 lis/c=56/56 les/c/f=57/57/0 sis=60) [1] r=0 lpr=60 pi=[56,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:29:22 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 60 pg[11.1( empty local-lis/les=0/0 n=0 ec=56/43 lis/c=56/56 les/c/f=57/57/0 sis=60) [1] r=0 lpr=60 pi=[56,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:29:22 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 60 pg[8.8( empty local-lis/les=0/0 n=0 ec=54/37 lis/c=54/54 les/c/f=55/55/0 sis=60) [1] r=0 lpr=60 pi=[54,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:29:22 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 60 pg[11.f( empty local-lis/les=0/0 n=0 ec=56/43 lis/c=56/56 les/c/f=57/57/0 sis=60) [1] r=0 lpr=60 pi=[56,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:29:22 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 60 pg[11.4( empty local-lis/les=0/0 n=0 ec=56/43 lis/c=56/56 les/c/f=57/57/0 sis=60) [1] r=0 lpr=60 pi=[56,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
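The osd.1 burst above is the normal reaction to osdmap epoch 60: PeeringState::start_peering_interval notes that the up/acting set for each PG moved off osd.1 (role 0 -> -1), so those PGs transition to Stray, while PGs still mapped to [1] restart peering as Primary. A single PG's view of this can be pulled with the standard query command; the PG id is taken from the log above, everything else is a generic sketch:

    # dump peering state, past intervals and recovery info for one PG
    ceph pg 10.19 query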
Nov 24 09:29:22 compute-1 ceph-mon[80009]: 9.10 scrub starts
Nov 24 09:29:22 compute-1 ceph-mon[80009]: 9.10 scrub ok
Nov 24 09:29:22 compute-1 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": ".nfs", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 24 09:29:22 compute-1 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 24 09:29:22 compute-1 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 24 09:29:22 compute-1 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]: dispatch
Nov 24 09:29:22 compute-1 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]: dispatch
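The five mgr commands above are the mgr walking pgp_num_actual toward each pool's pg_num one step at a time, so the data movement caused by PG splits stays throttled. Progress can be checked per pool with the regular pool commands (pool names from the log):

    ceph osd pool get default.rgw.log pg_num
    ceph osd pool get default.rgw.log pgp_num
    # or all pools at once
    ceph osd pool ls detail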
Nov 24 09:29:22 compute-1 ceph-mon[80009]: 10.5 scrub starts
Nov 24 09:29:22 compute-1 ceph-mon[80009]: 10.5 scrub ok
Nov 24 09:29:22 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 60 pg[11.5( empty local-lis/les=0/0 n=0 ec=56/43 lis/c=56/56 les/c/f=57/57/0 sis=60) [1] r=0 lpr=60 pi=[56,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:29:22 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 60 pg[11.7( empty local-lis/les=0/0 n=0 ec=56/43 lis/c=56/56 les/c/f=57/57/0 sis=60) [1] r=0 lpr=60 pi=[56,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:29:22 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 60 pg[8.1b( empty local-lis/les=0/0 n=0 ec=54/37 lis/c=54/54 les/c/f=55/55/0 sis=60) [1] r=0 lpr=60 pi=[54,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:29:22 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 60 pg[11.1a( empty local-lis/les=0/0 n=0 ec=56/43 lis/c=56/56 les/c/f=57/57/0 sis=60) [1] r=0 lpr=60 pi=[56,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:29:22 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 60 pg[8.19( empty local-lis/les=0/0 n=0 ec=54/37 lis/c=54/54 les/c/f=55/55/0 sis=60) [1] r=0 lpr=60 pi=[54,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:29:22 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 60 pg[8.4( empty local-lis/les=0/0 n=0 ec=54/37 lis/c=54/54 les/c/f=55/55/0 sis=60) [1] r=0 lpr=60 pi=[54,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:29:22 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 60 pg[11.1b( empty local-lis/les=0/0 n=0 ec=56/43 lis/c=56/56 les/c/f=57/57/0 sis=60) [1] r=0 lpr=60 pi=[56,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:29:22 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 60 pg[8.18( empty local-lis/les=0/0 n=0 ec=54/37 lis/c=54/54 les/c/f=55/55/0 sis=60) [1] r=0 lpr=60 pi=[54,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:29:22 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 60 pg[11.1c( empty local-lis/les=0/0 n=0 ec=56/43 lis/c=56/56 les/c/f=57/57/0 sis=60) [1] r=0 lpr=60 pi=[56,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:29:22 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 60 pg[11.1d( empty local-lis/les=0/0 n=0 ec=56/43 lis/c=56/56 les/c/f=57/57/0 sis=60) [1] r=0 lpr=60 pi=[56,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:29:22 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 60 pg[11.1e( empty local-lis/les=0/0 n=0 ec=56/43 lis/c=56/56 les/c/f=57/57/0 sis=60) [1] r=0 lpr=60 pi=[56,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:29:22 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 60 pg[8.12( empty local-lis/les=0/0 n=0 ec=54/37 lis/c=54/54 les/c/f=55/55/0 sis=60) [1] r=0 lpr=60 pi=[54,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:29:23 compute-1 ceph-osd[77497]: log_channel(cluster) log [DBG] : 10.17 scrub starts
Nov 24 09:29:23 compute-1 ceph-osd[77497]: log_channel(cluster) log [DBG] : 10.17 scrub ok
Nov 24 09:29:23 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e61 e61: 3 total, 3 up, 3 in
Nov 24 09:29:23 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 61 pg[11.14( empty local-lis/les=60/61 n=0 ec=56/43 lis/c=56/56 les/c/f=57/57/0 sis=60) [1] r=0 lpr=60 pi=[56,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:29:23 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 61 pg[8.17( v 38'12 (0'0,38'12] local-lis/les=60/61 n=0 ec=54/37 lis/c=54/54 les/c/f=55/55/0 sis=60) [1] r=0 lpr=60 pi=[54,60)/1 crt=38'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:29:23 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 61 pg[8.14( v 38'12 lc 38'2 (0'0,38'12] local-lis/les=60/61 n=0 ec=54/37 lis/c=54/54 les/c/f=55/55/0 sis=60) [1] r=0 lpr=60 pi=[54,60)/1 crt=38'12 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:29:23 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 61 pg[11.12( empty local-lis/les=60/61 n=0 ec=56/43 lis/c=56/56 les/c/f=57/57/0 sis=60) [1] r=0 lpr=60 pi=[56,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:29:23 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 61 pg[11.1( empty local-lis/les=60/61 n=0 ec=56/43 lis/c=56/56 les/c/f=57/57/0 sis=60) [1] r=0 lpr=60 pi=[56,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:29:23 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 61 pg[11.f( empty local-lis/les=60/61 n=0 ec=56/43 lis/c=56/56 les/c/f=57/57/0 sis=60) [1] r=0 lpr=60 pi=[56,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:29:23 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 61 pg[11.5( empty local-lis/les=60/61 n=0 ec=56/43 lis/c=56/56 les/c/f=57/57/0 sis=60) [1] r=0 lpr=60 pi=[56,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:29:23 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 61 pg[11.4( empty local-lis/les=60/61 n=0 ec=56/43 lis/c=56/56 les/c/f=57/57/0 sis=60) [1] r=0 lpr=60 pi=[56,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:29:23 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 61 pg[8.8( v 38'12 (0'0,38'12] local-lis/les=60/61 n=0 ec=54/37 lis/c=54/54 les/c/f=55/55/0 sis=60) [1] r=0 lpr=60 pi=[54,60)/1 crt=38'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:29:23 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 61 pg[11.7( empty local-lis/les=60/61 n=0 ec=56/43 lis/c=56/56 les/c/f=57/57/0 sis=60) [1] r=0 lpr=60 pi=[56,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:29:23 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 61 pg[8.18( v 38'12 (0'0,38'12] local-lis/les=60/61 n=0 ec=54/37 lis/c=54/54 les/c/f=55/55/0 sis=60) [1] r=0 lpr=60 pi=[54,60)/1 crt=38'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:29:23 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 61 pg[11.1b( empty local-lis/les=60/61 n=0 ec=56/43 lis/c=56/56 les/c/f=57/57/0 sis=60) [1] r=0 lpr=60 pi=[56,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:29:23 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 61 pg[11.1d( empty local-lis/les=60/61 n=0 ec=56/43 lis/c=56/56 les/c/f=57/57/0 sis=60) [1] r=0 lpr=60 pi=[56,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:29:23 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 61 pg[11.1c( empty local-lis/les=60/61 n=0 ec=56/43 lis/c=56/56 les/c/f=57/57/0 sis=60) [1] r=0 lpr=60 pi=[56,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:29:23 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 61 pg[11.1e( empty local-lis/les=60/61 n=0 ec=56/43 lis/c=56/56 les/c/f=57/57/0 sis=60) [1] r=0 lpr=60 pi=[56,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:29:23 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 61 pg[8.1b( v 38'12 (0'0,38'12] local-lis/les=60/61 n=0 ec=54/37 lis/c=54/54 les/c/f=55/55/0 sis=60) [1] r=0 lpr=60 pi=[54,60)/1 crt=38'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:29:23 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 61 pg[8.4( v 38'12 (0'0,38'12] local-lis/les=60/61 n=1 ec=54/37 lis/c=54/54 les/c/f=55/55/0 sis=60) [1] r=0 lpr=60 pi=[54,60)/1 crt=38'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:29:23 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 61 pg[8.12( v 38'12 (0'0,38'12] local-lis/les=60/61 n=0 ec=54/37 lis/c=54/54 les/c/f=55/55/0 sis=60) [1] r=0 lpr=60 pi=[54,60)/1 crt=38'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:29:23 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 61 pg[8.10( v 38'12 (0'0,38'12] local-lis/les=60/61 n=0 ec=54/37 lis/c=54/54 les/c/f=55/55/0 sis=60) [1] r=0 lpr=60 pi=[54,60)/1 crt=38'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:29:23 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 61 pg[11.1a( empty local-lis/les=60/61 n=0 ec=56/43 lis/c=56/56 les/c/f=57/57/0 sis=60) [1] r=0 lpr=60 pi=[56,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:29:23 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 61 pg[8.19( v 38'12 lc 0'0 (0'0,38'12] local-lis/les=60/61 n=0 ec=54/37 lis/c=54/54 les/c/f=55/55/0 sis=60) [1] r=0 lpr=60 pi=[54,60)/1 crt=38'12 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:29:23 compute-1 ceph-mon[80009]: pgmap v51: 353 pgs: 353 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Nov 24 09:29:23 compute-1 ceph-mon[80009]: 8.17 scrub starts
Nov 24 09:29:23 compute-1 ceph-mon[80009]: 8.17 scrub ok
Nov 24 09:29:23 compute-1 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "osd pool set", "pool": ".nfs", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 24 09:29:23 compute-1 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 24 09:29:23 compute-1 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 24 09:29:23 compute-1 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]': finished
Nov 24 09:29:23 compute-1 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 24 09:29:23 compute-1 ceph-mon[80009]: osdmap e60: 3 total, 3 up, 3 in
Nov 24 09:29:23 compute-1 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:29:23 compute-1 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:29:23 compute-1 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:29:23 compute-1 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:29:23 compute-1 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:29:23 compute-1 ceph-mon[80009]: osdmap e61: 3 total, 3 up, 3 in
Nov 24 09:29:24 compute-1 ceph-osd[77497]: log_channel(cluster) log [DBG] : 10.16 scrub starts
Nov 24 09:29:24 compute-1 ceph-osd[77497]: log_channel(cluster) log [DBG] : 10.16 scrub ok
Nov 24 09:29:24 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e62 e62: 3 total, 3 up, 3 in
Nov 24 09:29:24 compute-1 ceph-mon[80009]: Deploying daemon haproxy.rgw.default.compute-0.fxvlbj on compute-0
Nov 24 09:29:24 compute-1 ceph-mon[80009]: 10.17 scrub starts
Nov 24 09:29:24 compute-1 ceph-mon[80009]: 10.17 scrub ok
Nov 24 09:29:24 compute-1 ceph-mon[80009]: 9.15 scrub starts
Nov 24 09:29:24 compute-1 ceph-mon[80009]: 9.15 scrub ok
Nov 24 09:29:24 compute-1 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]: dispatch
Nov 24 09:29:25 compute-1 ceph-osd[77497]: log_channel(cluster) log [DBG] : 12.15 scrub starts
Nov 24 09:29:25 compute-1 ceph-osd[77497]: log_channel(cluster) log [DBG] : 12.15 scrub ok
Nov 24 09:29:25 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:29:25 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.002000050s ======
Nov 24 09:29:25 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:29:25.802 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000050s
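The beast access-log lines record anonymous "HEAD / HTTP/1.0" probes from 192.168.122.100 every couple of seconds: load-balancer health checks against RGW, not client traffic. The same probe can be issued by hand; the port here is an assumption, and the actual endpoint is visible via 'ceph orch ps --daemon-type rgw':

    curl -sI http://localhost:8080/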
Nov 24 09:29:25 compute-1 ceph-mon[80009]: pgmap v54: 353 pgs: 353 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Nov 24 09:29:25 compute-1 ceph-mon[80009]: 10.16 scrub starts
Nov 24 09:29:25 compute-1 ceph-mon[80009]: 10.16 scrub ok
Nov 24 09:29:25 compute-1 ceph-mon[80009]: 11.15 scrub starts
Nov 24 09:29:25 compute-1 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]': finished
Nov 24 09:29:25 compute-1 ceph-mon[80009]: 11.15 scrub ok
Nov 24 09:29:25 compute-1 ceph-mon[80009]: osdmap e62: 3 total, 3 up, 3 in
Nov 24 09:29:25 compute-1 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:29:25 compute-1 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:29:25 compute-1 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:29:25 compute-1 ceph-mon[80009]: 12.1a scrub starts
Nov 24 09:29:25 compute-1 ceph-mon[80009]: 12.1a scrub ok
Nov 24 09:29:25 compute-1 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:29:26 compute-1 ceph-osd[77497]: log_channel(cluster) log [DBG] : 10.0 deep-scrub starts
Nov 24 09:29:26 compute-1 ceph-osd[77497]: log_channel(cluster) log [DBG] : 10.0 deep-scrub ok
Nov 24 09:29:26 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e63 e63: 3 total, 3 up, 3 in
Nov 24 09:29:27 compute-1 ceph-mon[80009]: Deploying daemon haproxy.rgw.default.compute-2.tariiq on compute-2
Nov 24 09:29:27 compute-1 ceph-mon[80009]: 12.15 scrub starts
Nov 24 09:29:27 compute-1 ceph-mon[80009]: 12.15 scrub ok
Nov 24 09:29:27 compute-1 ceph-mon[80009]: 11.0 deep-scrub starts
Nov 24 09:29:27 compute-1 ceph-mon[80009]: 11.0 deep-scrub ok
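Each scrub appears twice in this log: once as the OSD's log_channel(cluster) [DBG] line and once relayed through the mon's cluster log, which is why "starts/ok" pairs repeat with a short delay. The same scrubs can also be requested manually per PG (ids from the log):

    ceph pg scrub 12.15
    ceph pg deep-scrub 11.0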
Nov 24 09:29:27 compute-1 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]: dispatch
Nov 24 09:29:27 compute-1 ceph-osd[77497]: log_channel(cluster) log [DBG] : 10.e deep-scrub starts
Nov 24 09:29:27 compute-1 ceph-osd[77497]: log_channel(cluster) log [DBG] : 10.e deep-scrub ok
Nov 24 09:29:27 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e64 e64: 3 total, 3 up, 3 in
Nov 24 09:29:27 compute-1 ceph-mon[80009]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #13. Immutable memtables: 0.
Nov 24 09:29:27 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:29:27.798350) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 24 09:29:27 compute-1 ceph-mon[80009]: rocksdb: [db/flush_job.cc:856] [default] [JOB 3] Flushing memtable with next log file: 13
Nov 24 09:29:27 compute-1 ceph-mon[80009]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763976567798489, "job": 3, "event": "flush_started", "num_memtables": 1, "num_entries": 6941, "num_deletes": 258, "total_data_size": 18480495, "memory_usage": 19237136, "flush_reason": "Manual Compaction"}
Nov 24 09:29:27 compute-1 ceph-mon[80009]: rocksdb: [db/flush_job.cc:885] [default] [JOB 3] Level-0 flush table #14: started
Nov 24 09:29:27 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e64 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 09:29:27 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:29:27 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 24 09:29:27 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:29:27.808 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 24 09:29:27 compute-1 ceph-mon[80009]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763976567854157, "cf_name": "default", "job": 3, "event": "table_file_creation", "file_number": 14, "file_size": 11751717, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 6, "largest_seqno": 6946, "table_properties": {"data_size": 11725316, "index_size": 16686, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 8645, "raw_key_size": 83905, "raw_average_key_size": 24, "raw_value_size": 11659115, "raw_average_value_size": 3392, "num_data_blocks": 733, "num_entries": 3437, "num_filter_entries": 3437, "num_deletions": 258, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763976422, "oldest_key_time": 1763976422, "file_creation_time": 1763976567, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "299c38d0-06ca-4074-b462-97cee3c14bc3", "db_session_id": "IKBI0BILOO7CZC90TSBP", "orig_file_number": 14, "seqno_to_time_mapping": "N/A"}}
Nov 24 09:29:27 compute-1 ceph-mon[80009]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 3] Flush lasted 55861 microseconds, and 22776 cpu microseconds.
Nov 24 09:29:27 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:29:27.854217) [db/flush_job.cc:967] [default] [JOB 3] Level-0 flush table #14: 11751717 bytes OK
Nov 24 09:29:27 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:29:27.854241) [db/memtable_list.cc:519] [default] Level-0 commit table #14 started
Nov 24 09:29:27 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:29:27.861434) [db/memtable_list.cc:722] [default] Level-0 commit table #14: memtable #1 done
Nov 24 09:29:27 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:29:27.861453) EVENT_LOG_v1 {"time_micros": 1763976567861449, "job": 3, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [2, 0, 0, 0, 0, 0, 0], "immutable_memtables": 0}
Nov 24 09:29:27 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:29:27.861471) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: files[2 0 0 0 0 0 0] max score 0.50
Nov 24 09:29:27 compute-1 ceph-mon[80009]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 3] Try to delete WAL files size 18443350, prev total WAL file size 18443350, number of live WAL files 2.
Nov 24 09:29:27 compute-1 ceph-mon[80009]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000009.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 09:29:27 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:29:27.865008) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6B760030' seq:72057594037927935, type:22 .. '6B7600323534' seq:0, type:0; will stop at (end)
Nov 24 09:29:27 compute-1 ceph-mon[80009]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 4] Compacting 2@0 files to L6, score -1.00
Nov 24 09:29:27 compute-1 ceph-mon[80009]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 3 Base level 0, inputs: [14(11MB) 8(1648B)]
Nov 24 09:29:27 compute-1 ceph-mon[80009]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763976567865099, "job": 4, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [14, 8], "score": -1, "input_data_size": 11753365, "oldest_snapshot_seqno": -1}
Nov 24 09:29:27 compute-1 ceph-mon[80009]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 4] Generated table #15: 3183 keys, 11748211 bytes, temperature: kUnknown
Nov 24 09:29:27 compute-1 ceph-mon[80009]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763976567917121, "cf_name": "default", "job": 4, "event": "table_file_creation", "file_number": 15, "file_size": 11748211, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11722439, "index_size": 16702, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 8005, "raw_key_size": 80386, "raw_average_key_size": 25, "raw_value_size": 11659395, "raw_average_value_size": 3663, "num_data_blocks": 732, "num_entries": 3183, "num_filter_entries": 3183, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763976422, "oldest_key_time": 0, "file_creation_time": 1763976567, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "299c38d0-06ca-4074-b462-97cee3c14bc3", "db_session_id": "IKBI0BILOO7CZC90TSBP", "orig_file_number": 15, "seqno_to_time_mapping": "N/A"}}
Nov 24 09:29:27 compute-1 ceph-mon[80009]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 24 09:29:27 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:29:27.917397) [db/compaction/compaction_job.cc:1663] [default] [JOB 4] Compacted 2@0 files to L6 => 11748211 bytes
Nov 24 09:29:27 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:29:27.922558) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 225.6 rd, 225.5 wr, level 6, files in(2, 0) out(1 +0 blob) MB in(11.2, 0.0 +0.0 blob) out(11.2 +0.0 blob), read-write-amplify(2.0) write-amplify(1.0) OK, records in: 3442, records dropped: 259 output_compression: NoCompression
Nov 24 09:29:27 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:29:27.922612) EVENT_LOG_v1 {"time_micros": 1763976567922591, "job": 4, "event": "compaction_finished", "compaction_time_micros": 52106, "compaction_time_cpu_micros": 21904, "output_level": 6, "num_output_files": 1, "total_output_size": 11748211, "num_input_records": 3442, "num_output_records": 3183, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 24 09:29:27 compute-1 ceph-mon[80009]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000014.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 09:29:27 compute-1 ceph-mon[80009]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763976567925188, "job": 4, "event": "table_file_deletion", "file_number": 14}
Nov 24 09:29:27 compute-1 ceph-mon[80009]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000008.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 09:29:27 compute-1 ceph-mon[80009]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763976567925246, "job": 4, "event": "table_file_deletion", "file_number": 8}
Nov 24 09:29:27 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:29:27.864909) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
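The rocksdb block above is a routine monitor store maintenance cycle: JOB 3 flushes a ~18 MB memtable to L0 table #14, JOB 4 manually compacts the two L0 files into a single L6 table (dropping 259 deleted records), and the old WAL and SST files are deleted. The same compaction can be requested on demand, and the store size checked at the path shown in the log (an in-container path under cephadm):

    ceph tell mon.compute-1 compact
    du -sh /var/lib/ceph/mon/ceph-compute-1/store.db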
Nov 24 09:29:28 compute-1 ceph-mon[80009]: pgmap v56: 353 pgs: 353 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 0 B/s, 0 objects/s recovering
Nov 24 09:29:28 compute-1 ceph-mon[80009]: 10.0 deep-scrub starts
Nov 24 09:29:28 compute-1 ceph-mon[80009]: 10.0 deep-scrub ok
Nov 24 09:29:28 compute-1 ceph-mon[80009]: 11.c scrub starts
Nov 24 09:29:28 compute-1 ceph-mon[80009]: 11.c scrub ok
Nov 24 09:29:28 compute-1 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]': finished
Nov 24 09:29:28 compute-1 ceph-mon[80009]: osdmap e63: 3 total, 3 up, 3 in
Nov 24 09:29:28 compute-1 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:29:28 compute-1 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:29:28 compute-1 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:29:28 compute-1 ceph-mon[80009]: 10.e deep-scrub starts
Nov 24 09:29:28 compute-1 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:29:28 compute-1 ceph-mon[80009]: 10.e deep-scrub ok
Nov 24 09:29:28 compute-1 ceph-mon[80009]: osdmap e64: 3 total, 3 up, 3 in
Nov 24 09:29:28 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:29:28 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.002000050s ======
Nov 24 09:29:28 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:29:28.082 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000050s
Nov 24 09:29:28 compute-1 ceph-osd[77497]: log_channel(cluster) log [DBG] : 10.a scrub starts
Nov 24 09:29:28 compute-1 ceph-osd[77497]: log_channel(cluster) log [DBG] : 10.a scrub ok
Nov 24 09:29:28 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e65 e65: 3 total, 3 up, 3 in
Nov 24 09:29:29 compute-1 ceph-mon[80009]: 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Nov 24 09:29:29 compute-1 ceph-mon[80009]: 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Nov 24 09:29:29 compute-1 ceph-mon[80009]: Deploying daemon keepalived.rgw.default.compute-2.atxclo on compute-2
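The two "192.168.122.2 is in 192.168.122.0/24" checks followed by keepalived/haproxy deployments are cephadm rolling out an RGW ingress: it verifies which hosts can carry the virtual IP on br-ex, then places the daemons. A spec consistent with these events would look roughly like the sketch below; the field names follow the cephadm ingress schema, the VIP and hosts come from the log, and the two port values are assumptions:

    # ingress.yaml
    service_type: ingress
    service_id: rgw.default
    placement:
      hosts:
        - compute-0
        - compute-2
    spec:
      backend_service: rgw.default
      virtual_ip: 192.168.122.2/24
      frontend_port: 8080
      monitor_port: 1967

    # apply with: ceph orch apply -i ingress.yaml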
Nov 24 09:29:29 compute-1 ceph-mon[80009]: 11.b scrub starts
Nov 24 09:29:29 compute-1 ceph-mon[80009]: 11.b scrub ok
Nov 24 09:29:29 compute-1 ceph-mon[80009]: pgmap v59: 353 pgs: 353 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 255 B/s, 3 objects/s recovering
Nov 24 09:29:29 compute-1 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]: dispatch
Nov 24 09:29:29 compute-1 ceph-mon[80009]: 10.a scrub starts
Nov 24 09:29:29 compute-1 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]': finished
Nov 24 09:29:29 compute-1 ceph-mon[80009]: osdmap e65: 3 total, 3 up, 3 in
Nov 24 09:29:29 compute-1 ceph-osd[77497]: log_channel(cluster) log [DBG] : 10.c deep-scrub starts
Nov 24 09:29:29 compute-1 ceph-osd[77497]: log_channel(cluster) log [DBG] : 10.c deep-scrub ok
Nov 24 09:29:29 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e66 e66: 3 total, 3 up, 3 in
Nov 24 09:29:29 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:29:29 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 24 09:29:29 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:29:29.811 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 24 09:29:30 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:29:30 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:29:30 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:29:30.086 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:29:30 compute-1 ceph-mon[80009]: 10.a scrub ok
Nov 24 09:29:30 compute-1 ceph-mon[80009]: 8.c scrub starts
Nov 24 09:29:30 compute-1 ceph-mon[80009]: 8.c scrub ok
Nov 24 09:29:30 compute-1 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:29:30 compute-1 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:29:30 compute-1 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:29:30 compute-1 ceph-mon[80009]: 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Nov 24 09:29:30 compute-1 ceph-mon[80009]: 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Nov 24 09:29:30 compute-1 ceph-mon[80009]: Deploying daemon keepalived.rgw.default.compute-0.zrpppr on compute-0
Nov 24 09:29:30 compute-1 ceph-mon[80009]: 10.c deep-scrub starts
Nov 24 09:29:30 compute-1 ceph-mon[80009]: 10.c deep-scrub ok
Nov 24 09:29:30 compute-1 ceph-mon[80009]: osdmap e66: 3 total, 3 up, 3 in
Nov 24 09:29:30 compute-1 ceph-osd[77497]: log_channel(cluster) log [DBG] : 10.9 scrub starts
Nov 24 09:29:30 compute-1 ceph-osd[77497]: log_channel(cluster) log [DBG] : 10.9 scrub ok
Nov 24 09:29:30 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e67 e67: 3 total, 3 up, 3 in
Nov 24 09:29:31 compute-1 ceph-mon[80009]: 11.17 scrub starts
Nov 24 09:29:31 compute-1 ceph-mon[80009]: 11.17 scrub ok
Nov 24 09:29:31 compute-1 ceph-mon[80009]: pgmap v62: 353 pgs: 353 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 1 op/s; 294 B/s, 2 objects/s recovering
Nov 24 09:29:31 compute-1 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]: dispatch
Nov 24 09:29:31 compute-1 ceph-mon[80009]: 10.9 scrub starts
Nov 24 09:29:31 compute-1 ceph-mon[80009]: 10.9 scrub ok
Nov 24 09:29:31 compute-1 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:29:31 compute-1 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]': finished
Nov 24 09:29:31 compute-1 ceph-mon[80009]: osdmap e67: 3 total, 3 up, 3 in
Nov 24 09:29:31 compute-1 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:29:31 compute-1 ceph-osd[77497]: log_channel(cluster) log [DBG] : 12.f scrub starts
Nov 24 09:29:31 compute-1 ceph-osd[77497]: log_channel(cluster) log [DBG] : 12.f scrub ok
Nov 24 09:29:31 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e68 e68: 3 total, 3 up, 3 in
Nov 24 09:29:31 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:29:31 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 24 09:29:31 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:29:31.814 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 24 09:29:32 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:29:32 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:29:32 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:29:32.091 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:29:32 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:29:32 : epoch 69242567 : compute-1 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Nov 24 09:29:32 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:29:32 : epoch 69242567 : compute-1 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Nov 24 09:29:32 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:29:32 : epoch 69242567 : compute-1 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Nov 24 09:29:32 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:29:32 : epoch 69242567 : compute-1 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Nov 24 09:29:32 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:29:32 : epoch 69242567 : compute-1 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Nov 24 09:29:32 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:29:32 : epoch 69242567 : compute-1 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Nov 24 09:29:32 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:29:32 : epoch 69242567 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Nov 24 09:29:32 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:29:32 : epoch 69242567 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 24 09:29:32 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:29:32 : epoch 69242567 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 24 09:29:32 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:29:32 : epoch 69242567 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 24 09:29:32 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:29:32 : epoch 69242567 : compute-1 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Nov 24 09:29:32 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:29:32 : epoch 69242567 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 24 09:29:32 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:29:32 : epoch 69242567 : compute-1 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Nov 24 09:29:32 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:29:32 : epoch 69242567 : compute-1 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Nov 24 09:29:32 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:29:32 : epoch 69242567 : compute-1 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Nov 24 09:29:32 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:29:32 : epoch 69242567 : compute-1 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Nov 24 09:29:32 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:29:32 : epoch 69242567 : compute-1 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Nov 24 09:29:32 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:29:32 : epoch 69242567 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Nov 24 09:29:32 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:29:32 : epoch 69242567 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Nov 24 09:29:32 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:29:32 : epoch 69242567 : compute-1 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Nov 24 09:29:32 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:29:32 : epoch 69242567 : compute-1 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Nov 24 09:29:32 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:29:32 : epoch 69242567 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Nov 24 09:29:32 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:29:32 : epoch 69242567 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Nov 24 09:29:32 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:29:32 : epoch 69242567 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Nov 24 09:29:32 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:29:32 : epoch 69242567 : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Nov 24 09:29:32 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:29:32 : epoch 69242567 : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Nov 24 09:29:32 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:29:32 : epoch 69242567 : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
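Despite the noise, ganesha.nfsd comes up cleanly: the DBUS CRIT lines are expected inside a container with no /run/dbus socket, the krb5/keytab warnings just mean Kerberos is not configured, and "No export entries found" means this fresh NFS cluster has no exports yet. Exports are managed through the mgr; the cluster id "cephfs" below is inferred from the daemon name nfs-cephfs-0-0 and may differ:

    ceph nfs cluster info cephfs
    ceph nfs export ls cephfs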
Nov 24 09:29:32 compute-1 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:29:32 compute-1 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:29:32 compute-1 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:29:32 compute-1 ceph-mon[80009]: 12.17 scrub starts
Nov 24 09:29:32 compute-1 ceph-mon[80009]: 12.17 scrub ok
Nov 24 09:29:32 compute-1 ceph-mon[80009]: Deploying daemon prometheus.compute-0 on compute-0
Nov 24 09:29:32 compute-1 ceph-mon[80009]: 12.f scrub starts
Nov 24 09:29:32 compute-1 ceph-mon[80009]: osdmap e68: 3 total, 3 up, 3 in
Nov 24 09:29:32 compute-1 ceph-mon[80009]: 9.16 deep-scrub starts
Nov 24 09:29:32 compute-1 ceph-mon[80009]: 9.16 deep-scrub ok
Nov 24 09:29:32 compute-1 ceph-osd[77497]: log_channel(cluster) log [DBG] : 10.d scrub starts
Nov 24 09:29:32 compute-1 ceph-osd[77497]: log_channel(cluster) log [DBG] : 10.d scrub ok
Nov 24 09:29:32 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 09:29:32 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e69 e69: 3 total, 3 up, 3 in
Nov 24 09:29:32 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:29:32 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcb14000df0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:29:33 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:29:33 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcafc000fb0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:29:33 compute-1 ceph-mon[80009]: 12.f scrub ok
Nov 24 09:29:33 compute-1 ceph-mon[80009]: 12.9 deep-scrub starts
Nov 24 09:29:33 compute-1 ceph-mon[80009]: 12.9 deep-scrub ok
Nov 24 09:29:33 compute-1 ceph-mon[80009]: pgmap v65: 353 pgs: 4 unknown, 349 active+clean; 455 KiB data, 107 MiB used, 60 GiB / 60 GiB avail; 302 B/s, 9 objects/s recovering
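The 4 "unknown" PGs in this pgmap are transient: typically PGs just created or remapped by the pg/pgp changes above that have not yet reported to the mgr; they normally return to active+clean once peering completes. While they persist they can be listed directly:

    ceph pg ls unknown
    ceph pg stat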
Nov 24 09:29:33 compute-1 ceph-mon[80009]: 10.d scrub starts
Nov 24 09:29:33 compute-1 ceph-mon[80009]: 10.d scrub ok
Nov 24 09:29:33 compute-1 ceph-mon[80009]: 9.2 scrub starts
Nov 24 09:29:33 compute-1 ceph-mon[80009]: osdmap e69: 3 total, 3 up, 3 in
Nov 24 09:29:33 compute-1 ceph-mon[80009]: 9.2 scrub ok
Nov 24 09:29:33 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:29:33 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcae8000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:29:33 compute-1 ceph-osd[77497]: log_channel(cluster) log [DBG] : 10.b scrub starts
Nov 24 09:29:33 compute-1 ceph-osd[77497]: log_channel(cluster) log [DBG] : 10.b scrub ok
Nov 24 09:29:33 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:29:33 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:29:33 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:29:33.818 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:29:33 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e70 e70: 3 total, 3 up, 3 in
Nov 24 09:29:34 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:29:34 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 24 09:29:34 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:29:34.093 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 24 09:29:34 compute-1 ceph-mon[80009]: 10.11 scrub starts
Nov 24 09:29:34 compute-1 ceph-mon[80009]: 10.11 scrub ok
Nov 24 09:29:34 compute-1 ceph-mon[80009]: 10.b scrub starts
Nov 24 09:29:34 compute-1 ceph-mon[80009]: 10.b scrub ok
Nov 24 09:29:34 compute-1 ceph-mon[80009]: 11.9 scrub starts
Nov 24 09:29:34 compute-1 ceph-mon[80009]: 11.9 scrub ok
Nov 24 09:29:34 compute-1 ceph-mon[80009]: osdmap e70: 3 total, 3 up, 3 in
Nov 24 09:29:34 compute-1 ceph-osd[77497]: log_channel(cluster) log [DBG] : 12.d scrub starts
Nov 24 09:29:34 compute-1 ceph-osd[77497]: log_channel(cluster) log [DBG] : 12.d scrub ok
Nov 24 09:29:34 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e71 e71: 3 total, 3 up, 3 in
Nov 24 09:29:34 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:29:34 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcb14000df0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:29:35 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-haproxy-nfs-cephfs-compute-1-rsdpvy[85975]: [WARNING] 327/092935 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Nov 24 09:29:35 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:29:35 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcaf0000fa0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:29:35 compute-1 ceph-mon[80009]: 11.e scrub starts
Nov 24 09:29:35 compute-1 ceph-mon[80009]: 11.e scrub ok
Nov 24 09:29:35 compute-1 ceph-mon[80009]: pgmap v68: 353 pgs: 4 unknown, 349 active+clean; 455 KiB data, 107 MiB used, 60 GiB / 60 GiB avail; 302 B/s, 9 objects/s recovering
Nov 24 09:29:35 compute-1 ceph-mon[80009]: 12.d scrub starts
Nov 24 09:29:35 compute-1 ceph-mon[80009]: 12.d scrub ok
Nov 24 09:29:35 compute-1 ceph-mon[80009]: 11.d scrub starts
Nov 24 09:29:35 compute-1 ceph-mon[80009]: 11.d scrub ok
Nov 24 09:29:35 compute-1 ceph-mon[80009]: osdmap e71: 3 total, 3 up, 3 in
Nov 24 09:29:35 compute-1 ceph-osd[77497]: log_channel(cluster) log [DBG] : 12.5 scrub starts
Nov 24 09:29:35 compute-1 ceph-osd[77497]: log_channel(cluster) log [DBG] : 12.5 scrub ok
Nov 24 09:29:35 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:29:35 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcafc001cb0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:29:35 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:29:35 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:29:35 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:29:35.821 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:29:36 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:29:36 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:29:36 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:29:36.097 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:29:36 compute-1 ceph-mon[80009]: 10.1 scrub starts
Nov 24 09:29:36 compute-1 ceph-mon[80009]: 10.1 scrub ok
Nov 24 09:29:36 compute-1 ceph-mon[80009]: 12.5 scrub starts
Nov 24 09:29:36 compute-1 ceph-mon[80009]: 12.5 scrub ok
Nov 24 09:29:36 compute-1 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:29:36 compute-1 ceph-mon[80009]: 8.e deep-scrub starts
Nov 24 09:29:36 compute-1 ceph-mon[80009]: 8.e deep-scrub ok
Nov 24 09:29:36 compute-1 ceph-osd[77497]: log_channel(cluster) log [DBG] : 12.0 scrub starts
Nov 24 09:29:36 compute-1 ceph-osd[77497]: log_channel(cluster) log [DBG] : 12.0 scrub ok
Nov 24 09:29:36 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:29:36 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcae80016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:29:37 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:29:37 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcb14000df0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:29:37 compute-1 ceph-mon[80009]: 10.f scrub starts
Nov 24 09:29:37 compute-1 ceph-mon[80009]: 10.f scrub ok
Nov 24 09:29:37 compute-1 ceph-mon[80009]: pgmap v70: 353 pgs: 4 unknown, 349 active+clean; 456 KiB data, 107 MiB used, 60 GiB / 60 GiB avail
Nov 24 09:29:37 compute-1 ceph-mon[80009]: 12.0 scrub starts
Nov 24 09:29:37 compute-1 ceph-mon[80009]: 12.0 scrub ok
Nov 24 09:29:37 compute-1 ceph-mon[80009]: 11.2 deep-scrub starts
Nov 24 09:29:37 compute-1 ceph-mon[80009]: 11.2 deep-scrub ok
Nov 24 09:29:37 compute-1 ceph-osd[77497]: log_channel(cluster) log [DBG] : 10.6 scrub starts
Nov 24 09:29:37 compute-1 ceph-osd[77497]: log_channel(cluster) log [DBG] : 10.6 scrub ok
Nov 24 09:29:37 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:29:37 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcaf0001ac0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:29:37 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e71 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 09:29:37 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:29:37 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 24 09:29:37 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:29:37.824 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 24 09:29:38 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:29:38 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:29:38 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:29:38.100 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:29:38 compute-1 ceph-mon[80009]: 11.a deep-scrub starts
Nov 24 09:29:38 compute-1 ceph-mon[80009]: 11.a deep-scrub ok
Nov 24 09:29:38 compute-1 ceph-mon[80009]: 10.6 scrub starts
Nov 24 09:29:38 compute-1 ceph-mon[80009]: 10.6 scrub ok
Nov 24 09:29:38 compute-1 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:29:38 compute-1 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:29:38 compute-1 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:29:38 compute-1 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "mgr module enable", "module": "prometheus"}]: dispatch
Nov 24 09:29:38 compute-1 ceph-mon[80009]: 8.1 deep-scrub starts
Nov 24 09:29:38 compute-1 ceph-mon[80009]: 8.1 deep-scrub ok
Nov 24 09:29:38 compute-1 ceph-osd[77497]: log_channel(cluster) log [DBG] : 12.1f scrub starts
Nov 24 09:29:38 compute-1 ceph-osd[77497]: log_channel(cluster) log [DBG] : 12.1f scrub ok
Nov 24 09:29:38 compute-1 sshd-session[83450]: Connection closed by 192.168.122.100 port 57780
Nov 24 09:29:38 compute-1 sshd-session[83431]: pam_unix(sshd:session): session closed for user ceph-admin
Nov 24 09:29:38 compute-1 systemd[1]: session-34.scope: Deactivated successfully.
Nov 24 09:29:38 compute-1 systemd[1]: session-34.scope: Consumed 17.642s CPU time.
Nov 24 09:29:38 compute-1 systemd-logind[823]: Session 34 logged out. Waiting for processes to exit.
Nov 24 09:29:38 compute-1 systemd-logind[823]: Removed session 34.
Nov 24 09:29:38 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-1-qelqsg[80312]: ignoring --setuser ceph since I am not root
Nov 24 09:29:38 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-1-qelqsg[80312]: ignoring --setgroup ceph since I am not root
Nov 24 09:29:38 compute-1 ceph-mgr[80316]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mgr, pid 2
Nov 24 09:29:38 compute-1 ceph-mgr[80316]: pidfile_write: ignore empty --pid-file
Nov 24 09:29:38 compute-1 ceph-mgr[80316]: mgr[py] Loading python module 'alerts'
Nov 24 09:29:38 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-1-qelqsg[80312]: 2025-11-24T09:29:38.875+0000 7fd8715da140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Nov 24 09:29:38 compute-1 ceph-mgr[80316]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Nov 24 09:29:38 compute-1 ceph-mgr[80316]: mgr[py] Loading python module 'balancer'
Nov 24 09:29:38 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:29:38 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcafc001cb0 fd 15 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:29:38 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-1-qelqsg[80312]: 2025-11-24T09:29:38.961+0000 7fd8715da140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Nov 24 09:29:38 compute-1 ceph-mgr[80316]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Nov 24 09:29:38 compute-1 ceph-mgr[80316]: mgr[py] Loading python module 'cephadm'
Nov 24 09:29:39 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:29:39 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcae80016a0 fd 15 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:29:39 compute-1 ceph-mon[80009]: 12.11 scrub starts
Nov 24 09:29:39 compute-1 ceph-mon[80009]: 12.11 scrub ok
Nov 24 09:29:39 compute-1 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]: dispatch
Nov 24 09:29:39 compute-1 ceph-mon[80009]: 12.1f scrub starts
Nov 24 09:29:39 compute-1 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "mgr module enable", "module": "prometheus"}]': finished
Nov 24 09:29:39 compute-1 ceph-mon[80009]: mgrmap e26: compute-0.mauvni(active, since 88s), standbys: compute-2.rzcnzg, compute-1.qelqsg
Nov 24 09:29:39 compute-1 ceph-mon[80009]: 8.0 scrub starts
Nov 24 09:29:39 compute-1 ceph-mon[80009]: 8.0 scrub ok
Nov 24 09:29:39 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e72 e72: 3 total, 3 up, 3 in
Nov 24 09:29:39 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:29:39 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcafc001cb0 fd 15 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:29:39 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 72 pg[9.1e( empty local-lis/les=0/0 n=0 ec=54/39 lis/c=54/54 les/c/f=55/55/0 sis=72) [1] r=0 lpr=72 pi=[54,72)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:29:39 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 72 pg[9.6( empty local-lis/les=0/0 n=0 ec=54/39 lis/c=54/54 les/c/f=55/55/0 sis=72) [1] r=0 lpr=72 pi=[54,72)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:29:39 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 72 pg[9.e( empty local-lis/les=0/0 n=0 ec=54/39 lis/c=54/54 les/c/f=55/55/0 sis=72) [1] r=0 lpr=72 pi=[54,72)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:29:39 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 72 pg[9.16( empty local-lis/les=0/0 n=0 ec=54/39 lis/c=54/54 les/c/f=55/55/0 sis=72) [1] r=0 lpr=72 pi=[54,72)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:29:39 compute-1 ceph-mon[80009]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #16. Immutable memtables: 0.
Nov 24 09:29:39 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:29:39.559461) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 24 09:29:39 compute-1 ceph-mon[80009]: rocksdb: [db/flush_job.cc:856] [default] [JOB 5] Flushing memtable with next log file: 16
Nov 24 09:29:39 compute-1 ceph-mon[80009]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763976579559488, "job": 5, "event": "flush_started", "num_memtables": 1, "num_entries": 685, "num_deletes": 251, "total_data_size": 1402688, "memory_usage": 1461600, "flush_reason": "Manual Compaction"}
Nov 24 09:29:39 compute-1 ceph-mon[80009]: rocksdb: [db/flush_job.cc:885] [default] [JOB 5] Level-0 flush table #17: started
Nov 24 09:29:39 compute-1 ceph-mon[80009]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763976579576752, "cf_name": "default", "job": 5, "event": "table_file_creation", "file_number": 17, "file_size": 909671, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 6951, "largest_seqno": 7631, "table_properties": {"data_size": 906015, "index_size": 1372, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1221, "raw_key_size": 9601, "raw_average_key_size": 20, "raw_value_size": 898100, "raw_average_value_size": 1906, "num_data_blocks": 61, "num_entries": 471, "num_filter_entries": 471, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763976568, "oldest_key_time": 1763976568, "file_creation_time": 1763976579, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "299c38d0-06ca-4074-b462-97cee3c14bc3", "db_session_id": "IKBI0BILOO7CZC90TSBP", "orig_file_number": 17, "seqno_to_time_mapping": "N/A"}}
Nov 24 09:29:39 compute-1 ceph-mon[80009]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 5] Flush lasted 17328 microseconds, and 3201 cpu microseconds.
Nov 24 09:29:39 compute-1 ceph-mon[80009]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 24 09:29:39 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:29:39.576786) [db/flush_job.cc:967] [default] [JOB 5] Level-0 flush table #17: 909671 bytes OK
Nov 24 09:29:39 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:29:39.576805) [db/memtable_list.cc:519] [default] Level-0 commit table #17 started
Nov 24 09:29:39 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:29:39.579779) [db/memtable_list.cc:722] [default] Level-0 commit table #17: memtable #1 done
Nov 24 09:29:39 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:29:39.579794) EVENT_LOG_v1 {"time_micros": 1763976579579790, "job": 5, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 24 09:29:39 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:29:39.579809) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 24 09:29:39 compute-1 ceph-mon[80009]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 5] Try to delete WAL files size 1398662, prev total WAL file size 1398662, number of live WAL files 2.
Nov 24 09:29:39 compute-1 ceph-mon[80009]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000013.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 09:29:39 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:29:39.580344) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730030' seq:72057594037927935, type:22 .. '7061786F7300323532' seq:0, type:0; will stop at (end)
Nov 24 09:29:39 compute-1 ceph-mon[80009]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 6] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 24 09:29:39 compute-1 ceph-mon[80009]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 5 Base level 0, inputs: [17(888KB)], [15(11MB)]
Nov 24 09:29:39 compute-1 ceph-mon[80009]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763976579580372, "job": 6, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [17], "files_L6": [15], "score": -1, "input_data_size": 12657882, "oldest_snapshot_seqno": -1}
Nov 24 09:29:39 compute-1 ceph-osd[77497]: log_channel(cluster) log [DBG] : 10.1a scrub starts
Nov 24 09:29:39 compute-1 ceph-osd[77497]: log_channel(cluster) log [DBG] : 10.1a scrub ok
Nov 24 09:29:39 compute-1 ceph-mgr[80316]: mgr[py] Loading python module 'crash'
Nov 24 09:29:39 compute-1 ceph-mon[80009]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 6] Generated table #18: 3129 keys, 11433137 bytes, temperature: kUnknown
Nov 24 09:29:39 compute-1 ceph-mon[80009]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763976579712452, "cf_name": "default", "job": 6, "event": "table_file_creation", "file_number": 18, "file_size": 11433137, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11408170, "index_size": 16026, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 7877, "raw_key_size": 80873, "raw_average_key_size": 25, "raw_value_size": 11346347, "raw_average_value_size": 3626, "num_data_blocks": 696, "num_entries": 3129, "num_filter_entries": 3129, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763976422, "oldest_key_time": 0, "file_creation_time": 1763976579, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "299c38d0-06ca-4074-b462-97cee3c14bc3", "db_session_id": "IKBI0BILOO7CZC90TSBP", "orig_file_number": 18, "seqno_to_time_mapping": "N/A"}}
Nov 24 09:29:39 compute-1 ceph-mon[80009]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 24 09:29:39 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:29:39.712692) [db/compaction/compaction_job.cc:1663] [default] [JOB 6] Compacted 1@0 + 1@6 files to L6 => 11433137 bytes
Nov 24 09:29:39 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:29:39.745477) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 95.8 rd, 86.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.9, 11.2 +0.0 blob) out(10.9 +0.0 blob), read-write-amplify(26.5) write-amplify(12.6) OK, records in: 3654, records dropped: 525 output_compression: NoCompression
Nov 24 09:29:39 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:29:39.745516) EVENT_LOG_v1 {"time_micros": 1763976579745501, "job": 6, "event": "compaction_finished", "compaction_time_micros": 132164, "compaction_time_cpu_micros": 21930, "output_level": 6, "num_output_files": 1, "total_output_size": 11433137, "num_input_records": 3654, "num_output_records": 3129, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 24 09:29:39 compute-1 ceph-mon[80009]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000017.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 09:29:39 compute-1 ceph-mon[80009]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763976579745794, "job": 6, "event": "table_file_deletion", "file_number": 17}
Nov 24 09:29:39 compute-1 ceph-mon[80009]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000015.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 09:29:39 compute-1 ceph-mon[80009]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763976579747530, "job": 6, "event": "table_file_deletion", "file_number": 15}
Nov 24 09:29:39 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:29:39.580273) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 09:29:39 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:29:39.747589) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 09:29:39 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:29:39.747598) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 09:29:39 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:29:39.747601) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 09:29:39 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:29:39.747604) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 09:29:39 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:29:39.747606) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 09:29:39 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-1-qelqsg[80312]: 2025-11-24T09:29:39.782+0000 7fd8715da140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Nov 24 09:29:39 compute-1 ceph-mgr[80316]: mgr[py] Module crash has missing NOTIFY_TYPES member
Nov 24 09:29:39 compute-1 ceph-mgr[80316]: mgr[py] Loading python module 'dashboard'
Nov 24 09:29:39 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:29:39 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 24 09:29:39 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:29:39.827 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 24 09:29:40 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:29:40 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:29:40 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:29:40.103 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:29:40 compute-1 ceph-mgr[80316]: mgr[py] Loading python module 'devicehealth'
Nov 24 09:29:40 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-1-qelqsg[80312]: 2025-11-24T09:29:40.384+0000 7fd8715da140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Nov 24 09:29:40 compute-1 ceph-mgr[80316]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Nov 24 09:29:40 compute-1 ceph-mgr[80316]: mgr[py] Loading python module 'diskprediction_local'
Nov 24 09:29:40 compute-1 ceph-mon[80009]: 12.1f scrub ok
Nov 24 09:29:40 compute-1 ceph-mon[80009]: 12.13 scrub starts
Nov 24 09:29:40 compute-1 ceph-mon[80009]: 12.13 scrub ok
Nov 24 09:29:40 compute-1 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]': finished
Nov 24 09:29:40 compute-1 ceph-mon[80009]: osdmap e72: 3 total, 3 up, 3 in
Nov 24 09:29:40 compute-1 ceph-mon[80009]: 8.7 scrub starts
Nov 24 09:29:40 compute-1 ceph-mon[80009]: 8.7 scrub ok
Nov 24 09:29:40 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-1-qelqsg[80312]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Nov 24 09:29:40 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-1-qelqsg[80312]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Nov 24 09:29:40 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-1-qelqsg[80312]:   from numpy import show_config as show_numpy_config
Nov 24 09:29:40 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-1-qelqsg[80312]: 2025-11-24T09:29:40.552+0000 7fd8715da140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Nov 24 09:29:40 compute-1 ceph-mgr[80316]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Nov 24 09:29:40 compute-1 ceph-mgr[80316]: mgr[py] Loading python module 'influx'
Nov 24 09:29:40 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e73 e73: 3 total, 3 up, 3 in
Nov 24 09:29:40 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 73 pg[9.16( empty local-lis/les=0/0 n=0 ec=54/39 lis/c=54/54 les/c/f=55/55/0 sis=73) [1]/[0] r=-1 lpr=73 pi=[54,73)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:29:40 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 73 pg[9.16( empty local-lis/les=0/0 n=0 ec=54/39 lis/c=54/54 les/c/f=55/55/0 sis=73) [1]/[0] r=-1 lpr=73 pi=[54,73)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 24 09:29:40 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 73 pg[9.e( empty local-lis/les=0/0 n=0 ec=54/39 lis/c=54/54 les/c/f=55/55/0 sis=73) [1]/[0] r=-1 lpr=73 pi=[54,73)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:29:40 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 73 pg[9.6( empty local-lis/les=0/0 n=0 ec=54/39 lis/c=54/54 les/c/f=55/55/0 sis=73) [1]/[0] r=-1 lpr=73 pi=[54,73)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:29:40 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 73 pg[9.e( empty local-lis/les=0/0 n=0 ec=54/39 lis/c=54/54 les/c/f=55/55/0 sis=73) [1]/[0] r=-1 lpr=73 pi=[54,73)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 24 09:29:40 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 73 pg[9.6( empty local-lis/les=0/0 n=0 ec=54/39 lis/c=54/54 les/c/f=55/55/0 sis=73) [1]/[0] r=-1 lpr=73 pi=[54,73)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 24 09:29:40 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 73 pg[9.1e( empty local-lis/les=0/0 n=0 ec=54/39 lis/c=54/54 les/c/f=55/55/0 sis=73) [1]/[0] r=-1 lpr=73 pi=[54,73)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:29:40 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 73 pg[9.1e( empty local-lis/les=0/0 n=0 ec=54/39 lis/c=54/54 les/c/f=55/55/0 sis=73) [1]/[0] r=-1 lpr=73 pi=[54,73)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 24 09:29:40 compute-1 ceph-osd[77497]: log_channel(cluster) log [DBG] : 12.1b scrub starts
Nov 24 09:29:40 compute-1 ceph-osd[77497]: log_channel(cluster) log [DBG] : 12.1b scrub ok
Nov 24 09:29:40 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-1-qelqsg[80312]: 2025-11-24T09:29:40.622+0000 7fd8715da140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Nov 24 09:29:40 compute-1 ceph-mgr[80316]: mgr[py] Module influx has missing NOTIFY_TYPES member
Nov 24 09:29:40 compute-1 ceph-mgr[80316]: mgr[py] Loading python module 'insights'
Nov 24 09:29:40 compute-1 ceph-mgr[80316]: mgr[py] Loading python module 'iostat'
Nov 24 09:29:40 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-1-qelqsg[80312]: 2025-11-24T09:29:40.763+0000 7fd8715da140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Nov 24 09:29:40 compute-1 ceph-mgr[80316]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Nov 24 09:29:40 compute-1 ceph-mgr[80316]: mgr[py] Loading python module 'k8sevents'
Nov 24 09:29:40 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:29:40 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcb14000df0 fd 15 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:29:41 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:29:41 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcaf00023e0 fd 15 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:29:41 compute-1 ceph-mgr[80316]: mgr[py] Loading python module 'localpool'
Nov 24 09:29:41 compute-1 ceph-mgr[80316]: mgr[py] Loading python module 'mds_autoscaler'
Nov 24 09:29:41 compute-1 ceph-mgr[80316]: mgr[py] Loading python module 'mirroring'
Nov 24 09:29:41 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:29:41 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcae80016a0 fd 15 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:29:41 compute-1 ceph-mgr[80316]: mgr[py] Loading python module 'nfs'
Nov 24 09:29:41 compute-1 ceph-osd[77497]: log_channel(cluster) log [DBG] : 10.1f scrub starts
Nov 24 09:29:41 compute-1 ceph-osd[77497]: log_channel(cluster) log [DBG] : 10.1f scrub ok
Nov 24 09:29:41 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-1-qelqsg[80312]: 2025-11-24T09:29:41.788+0000 7fd8715da140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Nov 24 09:29:41 compute-1 ceph-mgr[80316]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Nov 24 09:29:41 compute-1 ceph-mgr[80316]: mgr[py] Loading python module 'orchestrator'
Nov 24 09:29:41 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:29:41 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 24 09:29:41 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:29:41.830 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 24 09:29:41 compute-1 ceph-mon[80009]: 10.1a scrub starts
Nov 24 09:29:41 compute-1 ceph-mon[80009]: 10.1a scrub ok
Nov 24 09:29:41 compute-1 ceph-mon[80009]: 8.9 scrub starts
Nov 24 09:29:41 compute-1 ceph-mon[80009]: 8.9 scrub ok
Nov 24 09:29:41 compute-1 ceph-mon[80009]: osdmap e73: 3 total, 3 up, 3 in
Nov 24 09:29:41 compute-1 ceph-mon[80009]: 11.6 scrub starts
Nov 24 09:29:41 compute-1 ceph-mon[80009]: 11.6 scrub ok
Nov 24 09:29:41 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e74 e74: 3 total, 3 up, 3 in
Nov 24 09:29:42 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-1-qelqsg[80312]: 2025-11-24T09:29:42.010+0000 7fd8715da140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Nov 24 09:29:42 compute-1 ceph-mgr[80316]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Nov 24 09:29:42 compute-1 ceph-mgr[80316]: mgr[py] Loading python module 'osd_perf_query'
Nov 24 09:29:42 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-1-qelqsg[80312]: 2025-11-24T09:29:42.088+0000 7fd8715da140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Nov 24 09:29:42 compute-1 ceph-mgr[80316]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Nov 24 09:29:42 compute-1 ceph-mgr[80316]: mgr[py] Loading python module 'osd_support'
Nov 24 09:29:42 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:29:42 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:29:42 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:29:42.106 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:29:42 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-1-qelqsg[80312]: 2025-11-24T09:29:42.159+0000 7fd8715da140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Nov 24 09:29:42 compute-1 ceph-mgr[80316]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Nov 24 09:29:42 compute-1 ceph-mgr[80316]: mgr[py] Loading python module 'pg_autoscaler'
Nov 24 09:29:42 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-1-qelqsg[80312]: 2025-11-24T09:29:42.241+0000 7fd8715da140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Nov 24 09:29:42 compute-1 ceph-mgr[80316]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Nov 24 09:29:42 compute-1 ceph-mgr[80316]: mgr[py] Loading python module 'progress'
Nov 24 09:29:42 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-1-qelqsg[80312]: 2025-11-24T09:29:42.312+0000 7fd8715da140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Nov 24 09:29:42 compute-1 ceph-mgr[80316]: mgr[py] Module progress has missing NOTIFY_TYPES member
Nov 24 09:29:42 compute-1 ceph-mgr[80316]: mgr[py] Loading python module 'prometheus'
Nov 24 09:29:42 compute-1 ceph-osd[77497]: log_channel(cluster) log [DBG] : 12.16 scrub starts
Nov 24 09:29:42 compute-1 ceph-osd[77497]: log_channel(cluster) log [DBG] : 12.16 scrub ok
Nov 24 09:29:42 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-1-qelqsg[80312]: 2025-11-24T09:29:42.666+0000 7fd8715da140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Nov 24 09:29:42 compute-1 ceph-mgr[80316]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Nov 24 09:29:42 compute-1 ceph-mgr[80316]: mgr[py] Loading python module 'rbd_support'
Nov 24 09:29:42 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-1-qelqsg[80312]: 2025-11-24T09:29:42.772+0000 7fd8715da140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Nov 24 09:29:42 compute-1 ceph-mgr[80316]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Nov 24 09:29:42 compute-1 ceph-mgr[80316]: mgr[py] Loading python module 'restful'
Nov 24 09:29:42 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e75 e75: 3 total, 3 up, 3 in
Nov 24 09:29:42 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 75 pg[9.16( v 45'1130 (0'0,45'1130] local-lis/les=0/0 n=4 ec=54/39 lis/c=73/54 les/c/f=74/55/0 sis=75) [1] r=0 lpr=75 pi=[54,75)/1 luod=0'0 crt=45'1130 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:29:42 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 75 pg[9.16( v 45'1130 (0'0,45'1130] local-lis/les=0/0 n=4 ec=54/39 lis/c=73/54 les/c/f=74/55/0 sis=75) [1] r=0 lpr=75 pi=[54,75)/1 crt=45'1130 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:29:42 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 75 pg[9.e( v 45'1130 (0'0,45'1130] local-lis/les=0/0 n=6 ec=54/39 lis/c=73/54 les/c/f=74/55/0 sis=75) [1] r=0 lpr=75 pi=[54,75)/1 luod=0'0 crt=45'1130 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:29:42 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 75 pg[9.e( v 45'1130 (0'0,45'1130] local-lis/les=0/0 n=6 ec=54/39 lis/c=73/54 les/c/f=74/55/0 sis=75) [1] r=0 lpr=75 pi=[54,75)/1 crt=45'1130 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:29:42 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 75 pg[9.6( v 45'1130 (0'0,45'1130] local-lis/les=0/0 n=6 ec=54/39 lis/c=73/54 les/c/f=74/55/0 sis=75) [1] r=0 lpr=75 pi=[54,75)/1 luod=0'0 crt=45'1130 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:29:42 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 75 pg[9.6( v 45'1130 (0'0,45'1130] local-lis/les=0/0 n=6 ec=54/39 lis/c=73/54 les/c/f=74/55/0 sis=75) [1] r=0 lpr=75 pi=[54,75)/1 crt=45'1130 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:29:42 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 75 pg[9.1e( v 45'1130 (0'0,45'1130] local-lis/les=0/0 n=5 ec=54/39 lis/c=73/54 les/c/f=74/55/0 sis=75) [1] r=0 lpr=75 pi=[54,75)/1 luod=0'0 crt=45'1130 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:29:42 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 75 pg[9.1e( v 45'1130 (0'0,45'1130] local-lis/les=0/0 n=5 ec=54/39 lis/c=73/54 les/c/f=74/55/0 sis=75) [1] r=0 lpr=75 pi=[54,75)/1 crt=45'1130 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:29:42 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e75 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 09:29:42 compute-1 ceph-mon[80009]: 12.1b scrub starts
Nov 24 09:29:42 compute-1 ceph-mon[80009]: 12.1b scrub ok
Nov 24 09:29:42 compute-1 ceph-mon[80009]: 11.16 scrub starts
Nov 24 09:29:42 compute-1 ceph-mon[80009]: 11.16 scrub ok
Nov 24 09:29:42 compute-1 ceph-mon[80009]: 10.1f scrub starts
Nov 24 09:29:42 compute-1 ceph-mon[80009]: 10.1f scrub ok
Nov 24 09:29:42 compute-1 ceph-mon[80009]: 11.18 scrub starts
Nov 24 09:29:42 compute-1 ceph-mon[80009]: 11.18 scrub ok
Nov 24 09:29:42 compute-1 ceph-mon[80009]: osdmap e74: 3 total, 3 up, 3 in
Nov 24 09:29:42 compute-1 ceph-mon[80009]: 8.a scrub starts
Nov 24 09:29:42 compute-1 ceph-mon[80009]: 8.a scrub ok
Nov 24 09:29:42 compute-1 ceph-mon[80009]: osdmap e75: 3 total, 3 up, 3 in
Nov 24 09:29:42 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:29:42 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcaf00023e0 fd 15 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:29:42 compute-1 ceph-mgr[80316]: mgr[py] Loading python module 'rgw'
Nov 24 09:29:43 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:29:43 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcafc001cb0 fd 15 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:29:43 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-1-qelqsg[80312]: 2025-11-24T09:29:43.207+0000 7fd8715da140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Nov 24 09:29:43 compute-1 ceph-mgr[80316]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Nov 24 09:29:43 compute-1 ceph-mgr[80316]: mgr[py] Loading python module 'rook'
Nov 24 09:29:43 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:29:43 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcb140091b0 fd 15 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:29:43 compute-1 ceph-osd[77497]: log_channel(cluster) log [DBG] : 12.14 scrub starts
Nov 24 09:29:43 compute-1 ceph-osd[77497]: log_channel(cluster) log [DBG] : 12.14 scrub ok
Nov 24 09:29:43 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-1-qelqsg[80312]: 2025-11-24T09:29:43.771+0000 7fd8715da140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Nov 24 09:29:43 compute-1 ceph-mgr[80316]: mgr[py] Module rook has missing NOTIFY_TYPES member
Nov 24 09:29:43 compute-1 ceph-mgr[80316]: mgr[py] Loading python module 'selftest'
Nov 24 09:29:43 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e76 e76: 3 total, 3 up, 3 in
Nov 24 09:29:43 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 76 pg[9.16( v 45'1130 (0'0,45'1130] local-lis/les=75/76 n=4 ec=54/39 lis/c=73/54 les/c/f=74/55/0 sis=75) [1] r=0 lpr=75 pi=[54,75)/1 crt=45'1130 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:29:43 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 76 pg[9.e( v 45'1130 (0'0,45'1130] local-lis/les=75/76 n=6 ec=54/39 lis/c=73/54 les/c/f=74/55/0 sis=75) [1] r=0 lpr=75 pi=[54,75)/1 crt=45'1130 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:29:43 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 76 pg[9.6( v 45'1130 (0'0,45'1130] local-lis/les=75/76 n=6 ec=54/39 lis/c=73/54 les/c/f=74/55/0 sis=75) [1] r=0 lpr=75 pi=[54,75)/1 crt=45'1130 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:29:43 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 76 pg[9.1e( v 45'1130 (0'0,45'1130] local-lis/les=75/76 n=5 ec=54/39 lis/c=73/54 les/c/f=74/55/0 sis=75) [1] r=0 lpr=75 pi=[54,75)/1 crt=45'1130 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:29:43 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:29:43 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:29:43 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:29:43.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:29:43 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-1-qelqsg[80312]: 2025-11-24T09:29:43.844+0000 7fd8715da140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Nov 24 09:29:43 compute-1 ceph-mgr[80316]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Nov 24 09:29:43 compute-1 ceph-mgr[80316]: mgr[py] Loading python module 'snap_schedule'
Nov 24 09:29:43 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-1-qelqsg[80312]: 2025-11-24T09:29:43.924+0000 7fd8715da140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Nov 24 09:29:43 compute-1 ceph-mgr[80316]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Nov 24 09:29:43 compute-1 ceph-mgr[80316]: mgr[py] Loading python module 'stats'
Nov 24 09:29:43 compute-1 ceph-mon[80009]: 12.16 scrub starts
Nov 24 09:29:43 compute-1 ceph-mon[80009]: 12.16 scrub ok
Nov 24 09:29:43 compute-1 ceph-mon[80009]: 8.1a scrub starts
Nov 24 09:29:43 compute-1 ceph-mon[80009]: 8.1a scrub ok
Nov 24 09:29:43 compute-1 ceph-mon[80009]: 8.d deep-scrub starts
Nov 24 09:29:43 compute-1 ceph-mon[80009]: 8.d deep-scrub ok
Nov 24 09:29:43 compute-1 ceph-mon[80009]: 12.14 scrub starts
Nov 24 09:29:43 compute-1 ceph-mon[80009]: osdmap e76: 3 total, 3 up, 3 in
Nov 24 09:29:43 compute-1 ceph-mgr[80316]: mgr[py] Loading python module 'status'
Nov 24 09:29:44 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-1-qelqsg[80312]: 2025-11-24T09:29:44.074+0000 7fd8715da140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Nov 24 09:29:44 compute-1 ceph-mgr[80316]: mgr[py] Module status has missing NOTIFY_TYPES member
Nov 24 09:29:44 compute-1 ceph-mgr[80316]: mgr[py] Loading python module 'telegraf'
Nov 24 09:29:44 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:29:44 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:29:44 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:29:44.109 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:29:44 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-1-qelqsg[80312]: 2025-11-24T09:29:44.149+0000 7fd8715da140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Nov 24 09:29:44 compute-1 ceph-mgr[80316]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Nov 24 09:29:44 compute-1 ceph-mgr[80316]: mgr[py] Loading python module 'telemetry'
Nov 24 09:29:44 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-1-qelqsg[80312]: 2025-11-24T09:29:44.311+0000 7fd8715da140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Nov 24 09:29:44 compute-1 ceph-mgr[80316]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Nov 24 09:29:44 compute-1 ceph-mgr[80316]: mgr[py] Loading python module 'test_orchestrator'
Nov 24 09:29:44 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-1-qelqsg[80312]: 2025-11-24T09:29:44.526+0000 7fd8715da140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Nov 24 09:29:44 compute-1 ceph-mgr[80316]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Nov 24 09:29:44 compute-1 ceph-mgr[80316]: mgr[py] Loading python module 'volumes'
Nov 24 09:29:44 compute-1 ceph-osd[77497]: log_channel(cluster) log [DBG] : 10.7 deep-scrub starts
Nov 24 09:29:44 compute-1 ceph-osd[77497]: log_channel(cluster) log [DBG] : 10.7 deep-scrub ok
Nov 24 09:29:44 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-1-qelqsg[80312]: 2025-11-24T09:29:44.793+0000 7fd8715da140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Nov 24 09:29:44 compute-1 ceph-mgr[80316]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Nov 24 09:29:44 compute-1 ceph-mgr[80316]: mgr[py] Loading python module 'zabbix'
Nov 24 09:29:44 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-1-qelqsg[80312]: 2025-11-24T09:29:44.864+0000 7fd8715da140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Nov 24 09:29:44 compute-1 ceph-mgr[80316]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Nov 24 09:29:44 compute-1 ceph-mgr[80316]: [dashboard DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 24 09:29:44 compute-1 ceph-mgr[80316]: mgr load Constructed class from module: dashboard
Nov 24 09:29:44 compute-1 ceph-mgr[80316]: [prometheus DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 24 09:29:44 compute-1 ceph-mgr[80316]: mgr load Constructed class from module: prometheus
Nov 24 09:29:44 compute-1 ceph-mgr[80316]: [dashboard INFO root] server: ssl=no host=192.168.122.101 port=8443
Nov 24 09:29:44 compute-1 ceph-mgr[80316]: [dashboard INFO root] Configured CherryPy, starting engine...
Nov 24 09:29:44 compute-1 ceph-mgr[80316]: [prometheus INFO root] server_addr: :: server_port: 9283
Nov 24 09:29:44 compute-1 ceph-mgr[80316]: [prometheus INFO root] Starting engine...
Nov 24 09:29:44 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-1-qelqsg[80312]: [24/Nov/2025:09:29:44] ENGINE Bus STARTING
Nov 24 09:29:44 compute-1 ceph-mgr[80316]: [dashboard INFO root] Starting engine...
Nov 24 09:29:44 compute-1 ceph-mgr[80316]: [prometheus INFO cherrypy.error] [24/Nov/2025:09:29:44] ENGINE Bus STARTING
Nov 24 09:29:44 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-1-qelqsg[80312]: CherryPy Checker:
Nov 24 09:29:44 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-1-qelqsg[80312]: The Application mounted at '' has an empty config.
Nov 24 09:29:44 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-1-qelqsg[80312]: 
Nov 24 09:29:44 compute-1 ceph-mgr[80316]: ms_deliver_dispatch: unhandled message 0x557960f73860 mon_map magic: 0 from mon.2 v2:192.168.122.101:3300/0
Nov 24 09:29:44 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:29:44 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcae8002b10 fd 15 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:29:44 compute-1 ceph-mgr[80316]: [dashboard INFO root] Engine started...
Nov 24 09:29:44 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-1-qelqsg[80312]: [24/Nov/2025:09:29:44] ENGINE Serving on http://:::9283
Nov 24 09:29:44 compute-1 ceph-mgr[80316]: [prometheus INFO cherrypy.error] [24/Nov/2025:09:29:44] ENGINE Serving on http://:::9283
Nov 24 09:29:44 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-1-qelqsg[80312]: [24/Nov/2025:09:29:44] ENGINE Bus STARTED
Nov 24 09:29:44 compute-1 ceph-mgr[80316]: [prometheus INFO cherrypy.error] [24/Nov/2025:09:29:44] ENGINE Bus STARTED
Nov 24 09:29:44 compute-1 ceph-mgr[80316]: [prometheus INFO root] Engine started.
Nov 24 09:29:45 compute-1 ceph-mon[80009]: 12.14 scrub ok
Nov 24 09:29:45 compute-1 ceph-mon[80009]: 8.1e deep-scrub starts
Nov 24 09:29:45 compute-1 ceph-mon[80009]: 8.1e deep-scrub ok
Nov 24 09:29:45 compute-1 ceph-mon[80009]: 11.13 deep-scrub starts
Nov 24 09:29:45 compute-1 ceph-mon[80009]: 11.13 deep-scrub ok
Nov 24 09:29:45 compute-1 ceph-mon[80009]: 10.7 deep-scrub starts
Nov 24 09:29:45 compute-1 ceph-mon[80009]: 10.7 deep-scrub ok
Nov 24 09:29:45 compute-1 ceph-mon[80009]: Standby manager daemon compute-2.rzcnzg restarted
Nov 24 09:29:45 compute-1 ceph-mon[80009]: Standby manager daemon compute-2.rzcnzg started
Nov 24 09:29:45 compute-1 ceph-mon[80009]: Standby manager daemon compute-1.qelqsg restarted
Nov 24 09:29:45 compute-1 ceph-mon[80009]: Standby manager daemon compute-1.qelqsg started
Nov 24 09:29:45 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:29:45 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcaf00023e0 fd 15 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:29:45 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e77 e77: 3 total, 3 up, 3 in
Nov 24 09:29:45 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0)
Nov 24 09:29:45 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Nov 24 09:29:45 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Nov 24 09:29:45 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Nov 24 09:29:45 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0)
Nov 24 09:29:45 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Nov 24 09:29:45 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mds metadata", "who": "cephfs.compute-0.cibmfe"} v 0)
Nov 24 09:29:45 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-0.cibmfe"}]: dispatch
Nov 24 09:29:45 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).mds e10 all = 0
Nov 24 09:29:45 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mds metadata", "who": "cephfs.compute-2.bbilht"} v 0)
Nov 24 09:29:45 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-2.bbilht"}]: dispatch
Nov 24 09:29:45 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).mds e10 all = 0
Nov 24 09:29:45 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mds metadata", "who": "cephfs.compute-1.vpamdk"} v 0)
Nov 24 09:29:45 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-1.vpamdk"}]: dispatch
Nov 24 09:29:45 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).mds e10 all = 0
Nov 24 09:29:45 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-0.mauvni", "id": "compute-0.mauvni"} v 0)
Nov 24 09:29:45 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "mgr metadata", "who": "compute-0.mauvni", "id": "compute-0.mauvni"}]: dispatch
Nov 24 09:29:45 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-2.rzcnzg", "id": "compute-2.rzcnzg"} v 0)
Nov 24 09:29:45 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "mgr metadata", "who": "compute-2.rzcnzg", "id": "compute-2.rzcnzg"}]: dispatch
Nov 24 09:29:45 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-1.qelqsg", "id": "compute-1.qelqsg"} v 0)
Nov 24 09:29:45 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "mgr metadata", "who": "compute-1.qelqsg", "id": "compute-1.qelqsg"}]: dispatch
Nov 24 09:29:45 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Nov 24 09:29:45 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 24 09:29:45 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Nov 24 09:29:45 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 24 09:29:45 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Nov 24 09:29:45 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 24 09:29:45 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mds metadata"} v 0)
Nov 24 09:29:45 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "mds metadata"}]: dispatch
Nov 24 09:29:45 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).mds e10 all = 1
Nov 24 09:29:45 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd metadata"} v 0)
Nov 24 09:29:45 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd metadata"}]: dispatch
Nov 24 09:29:45 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon metadata"} v 0)
Nov 24 09:29:45 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "mon metadata"}]: dispatch
Nov 24 09:29:45 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/prometheus/health_history}] v 0)
Nov 24 09:29:45 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 09:29:45 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:29:45 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.mauvni/mirror_snapshot_schedule"} v 0)
Nov 24 09:29:45 compute-1 ceph-mon[80009]: log_channel(audit) log [INF] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.mauvni/mirror_snapshot_schedule"}]: dispatch
Nov 24 09:29:45 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.mauvni/trash_purge_schedule"} v 0)
Nov 24 09:29:45 compute-1 ceph-mon[80009]: log_channel(audit) log [INF] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.mauvni/trash_purge_schedule"}]: dispatch
Nov 24 09:29:45 compute-1 ceph-osd[77497]: log_channel(cluster) log [DBG] : 12.1 scrub starts
Nov 24 09:29:45 compute-1 ceph-osd[77497]: log_channel(cluster) log [DBG] : 12.1 scrub ok
Nov 24 09:29:45 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:29:45 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcafc001cb0 fd 15 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:29:45 compute-1 sshd-session[86587]: Accepted publickey for ceph-admin from 192.168.122.100 port 47292 ssh2: RSA SHA256:d901dNHY28a6fGfVJZBiZ/6DokdrVSFZakqDQ7cQMIA
Nov 24 09:29:45 compute-1 systemd-logind[823]: New session 36 of user ceph-admin.
Nov 24 09:29:45 compute-1 systemd[1]: Started Session 36 of User ceph-admin.
Nov 24 09:29:45 compute-1 sshd-session[86587]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Nov 24 09:29:45 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:29:45 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:29:45 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:29:45.835 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:29:45 compute-1 sudo[86592]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 09:29:45 compute-1 sudo[86592]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:29:45 compute-1 sudo[86592]: pam_unix(sudo:session): session closed for user root
Nov 24 09:29:45 compute-1 sudo[86617]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ls
Nov 24 09:29:45 compute-1 sudo[86617]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:29:46 compute-1 ceph-mon[80009]: 11.1f scrub starts
Nov 24 09:29:46 compute-1 ceph-mon[80009]: 11.1f scrub ok
Nov 24 09:29:46 compute-1 ceph-mon[80009]: mgrmap e27: compute-0.mauvni(active, since 94s), standbys: compute-2.rzcnzg, compute-1.qelqsg
Nov 24 09:29:46 compute-1 ceph-mon[80009]: Active manager daemon compute-0.mauvni restarted
Nov 24 09:29:46 compute-1 ceph-mon[80009]: Activating manager daemon compute-0.mauvni
Nov 24 09:29:46 compute-1 ceph-mon[80009]: osdmap e77: 3 total, 3 up, 3 in
Nov 24 09:29:46 compute-1 ceph-mon[80009]: mgrmap e28: compute-0.mauvni(active, starting, since 0.0598706s), standbys: compute-2.rzcnzg, compute-1.qelqsg
Nov 24 09:29:46 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Nov 24 09:29:46 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Nov 24 09:29:46 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Nov 24 09:29:46 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-0.cibmfe"}]: dispatch
Nov 24 09:29:46 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-2.bbilht"}]: dispatch
Nov 24 09:29:46 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-1.vpamdk"}]: dispatch
Nov 24 09:29:46 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "mgr metadata", "who": "compute-0.mauvni", "id": "compute-0.mauvni"}]: dispatch
Nov 24 09:29:46 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "mgr metadata", "who": "compute-2.rzcnzg", "id": "compute-2.rzcnzg"}]: dispatch
Nov 24 09:29:46 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "mgr metadata", "who": "compute-1.qelqsg", "id": "compute-1.qelqsg"}]: dispatch
Nov 24 09:29:46 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 24 09:29:46 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 24 09:29:46 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 24 09:29:46 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "mds metadata"}]: dispatch
Nov 24 09:29:46 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd metadata"}]: dispatch
Nov 24 09:29:46 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "mon metadata"}]: dispatch
Nov 24 09:29:46 compute-1 ceph-mon[80009]: Manager daemon compute-0.mauvni is now available
Nov 24 09:29:46 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:29:46 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:29:46 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.mauvni/mirror_snapshot_schedule"}]: dispatch
Nov 24 09:29:46 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.mauvni/mirror_snapshot_schedule"}]: dispatch
Nov 24 09:29:46 compute-1 ceph-mon[80009]: 8.2 scrub starts
Nov 24 09:29:46 compute-1 ceph-mon[80009]: 8.2 scrub ok
Nov 24 09:29:46 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.mauvni/trash_purge_schedule"}]: dispatch
Nov 24 09:29:46 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.mauvni/trash_purge_schedule"}]: dispatch
Nov 24 09:29:46 compute-1 ceph-mon[80009]: 12.1 scrub starts
Nov 24 09:29:46 compute-1 ceph-mon[80009]: 12.1 scrub ok
Nov 24 09:29:46 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:29:46 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:29:46 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:29:46.113 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:29:46 compute-1 podman[86710]: 2025-11-24 09:29:46.499255353 +0000 UTC m=+0.056927709 container exec fca3d6a645ca50145f34396c21cf8798c75622ec7e27bb7d7b9d2df471762abc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-crash-compute-1, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_REF=squid)
Nov 24 09:29:46 compute-1 ceph-osd[77497]: log_channel(cluster) log [DBG] : 11.14 deep-scrub starts
Nov 24 09:29:46 compute-1 ceph-osd[77497]: log_channel(cluster) log [DBG] : 11.14 deep-scrub ok
Nov 24 09:29:46 compute-1 podman[86710]: 2025-11-24 09:29:46.594702538 +0000 UTC m=+0.152374894 container exec_died fca3d6a645ca50145f34396c21cf8798c75622ec7e27bb7d7b9d2df471762abc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-crash-compute-1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 09:29:46 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:29:46 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcb14009ad0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:29:47 compute-1 podman[86847]: 2025-11-24 09:29:47.058463627 +0000 UTC m=+0.050147566 container exec 8385dba62896146966763f0bcd6866f05f5474182998a6b8c2dabcbf77545f8c (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-node-exporter-compute-1, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 24 09:29:47 compute-1 ceph-mon[80009]: 11.10 deep-scrub starts
Nov 24 09:29:47 compute-1 ceph-mon[80009]: 11.10 deep-scrub ok
Nov 24 09:29:47 compute-1 ceph-mon[80009]: mgrmap e29: compute-0.mauvni(active, since 1.07462s), standbys: compute-2.rzcnzg, compute-1.qelqsg
Nov 24 09:29:47 compute-1 ceph-mon[80009]: 8.11 scrub starts
Nov 24 09:29:47 compute-1 ceph-mon[80009]: 8.11 scrub ok
Nov 24 09:29:47 compute-1 ceph-mon[80009]: 11.14 deep-scrub starts
Nov 24 09:29:47 compute-1 ceph-mon[80009]: [24/Nov/2025:09:29:46] ENGINE Bus STARTING
Nov 24 09:29:47 compute-1 podman[86872]: 2025-11-24 09:29:47.120560799 +0000 UTC m=+0.049280873 container exec_died 8385dba62896146966763f0bcd6866f05f5474182998a6b8c2dabcbf77545f8c (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-node-exporter-compute-1, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 24 09:29:47 compute-1 podman[86847]: 2025-11-24 09:29:47.124949202 +0000 UTC m=+0.116633111 container exec_died 8385dba62896146966763f0bcd6866f05f5474182998a6b8c2dabcbf77545f8c (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-node-exporter-compute-1, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 24 09:29:47 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:29:47 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcae8002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:29:47 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"} v 0)
Nov 24 09:29:47 compute-1 ceph-mon[80009]: log_channel(audit) log [INF] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]: dispatch
Nov 24 09:29:47 compute-1 podman[86921]: 2025-11-24 09:29:47.355190615 +0000 UTC m=+0.062258549 container exec f7b0c338b36b8bdf518e2bc42241679b81bdb5d1de06a8d4b736922ad905c10c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 24 09:29:47 compute-1 podman[86921]: 2025-11-24 09:29:47.368875608 +0000 UTC m=+0.075943522 container exec_died f7b0c338b36b8bdf518e2bc42241679b81bdb5d1de06a8d4b736922ad905c10c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid)
Nov 24 09:29:47 compute-1 ceph-osd[77497]: log_channel(cluster) log [DBG] : 11.12 scrub starts
Nov 24 09:29:47 compute-1 ceph-osd[77497]: log_channel(cluster) log [DBG] : 11.12 scrub ok
Nov 24 09:29:47 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:29:47 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcae8002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:29:47 compute-1 podman[86986]: 2025-11-24 09:29:47.601079881 +0000 UTC m=+0.056181892 container exec 5e659f329edd66b319b97f09144add025da99dc20b0b6d44046c2f8d632eb914 (image=quay.io/ceph/haproxy:2.3, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-haproxy-nfs-cephfs-compute-1-rsdpvy)
Nov 24 09:29:47 compute-1 podman[86986]: 2025-11-24 09:29:47.613835341 +0000 UTC m=+0.068937312 container exec_died 5e659f329edd66b319b97f09144add025da99dc20b0b6d44046c2f8d632eb914 (image=quay.io/ceph/haproxy:2.3, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-haproxy-nfs-cephfs-compute-1-rsdpvy)
Nov 24 09:29:47 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e77 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 09:29:47 compute-1 podman[87049]: 2025-11-24 09:29:47.820194666 +0000 UTC m=+0.046893572 container exec b150f4574d15a215dc003733c271f0cef75e4de7b269181ad25614a88f483866 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-keepalived-nfs-cephfs-compute-1-vrgskq, io.openshift.tags=Ceph keepalived, vendor=Red Hat, Inc., architecture=x86_64, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, version=2.2.4, build-date=2023-02-22T09:23:20, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, io.buildah.version=1.28.2, io.k8s.display-name=Keepalived on RHEL 9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, release=1793, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, name=keepalived, description=keepalived for Ceph, summary=Provides keepalived on RHEL 9 for Ceph., com.redhat.component=keepalived-container)
Nov 24 09:29:47 compute-1 podman[87049]: 2025-11-24 09:29:47.83469601 +0000 UTC m=+0.061394916 container exec_died b150f4574d15a215dc003733c271f0cef75e4de7b269181ad25614a88f483866 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-keepalived-nfs-cephfs-compute-1-vrgskq, architecture=x86_64, com.redhat.component=keepalived-container, name=keepalived, release=1793, io.buildah.version=1.28.2, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, build-date=2023-02-22T09:23:20, io.openshift.expose-services=, distribution-scope=public, io.k8s.display-name=Keepalived on RHEL 9, summary=Provides keepalived on RHEL 9 for Ceph., description=keepalived for Ceph, vendor=Red Hat, Inc., version=2.2.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vcs-type=git, io.openshift.tags=Ceph keepalived)
Nov 24 09:29:47 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:29:47 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:29:47 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:29:47.836 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:29:47 compute-1 sudo[86617]: pam_unix(sudo:session): session closed for user root
Nov 24 09:29:47 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Nov 24 09:29:47 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Nov 24 09:29:47 compute-1 sudo[87083]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 09:29:47 compute-1 sudo[87083]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:29:47 compute-1 sudo[87083]: pam_unix(sudo:session): session closed for user root
Nov 24 09:29:48 compute-1 sudo[87108]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Nov 24 09:29:48 compute-1 sudo[87108]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:29:48 compute-1 ceph-mon[80009]: 11.14 deep-scrub ok
Nov 24 09:29:48 compute-1 ceph-mon[80009]: 8.13 scrub starts
Nov 24 09:29:48 compute-1 ceph-mon[80009]: 8.13 scrub ok
Nov 24 09:29:48 compute-1 ceph-mon[80009]: [24/Nov/2025:09:29:46] ENGINE Serving on http://192.168.122.100:8765
Nov 24 09:29:48 compute-1 ceph-mon[80009]: [24/Nov/2025:09:29:46] ENGINE Serving on https://192.168.122.100:7150
Nov 24 09:29:48 compute-1 ceph-mon[80009]: [24/Nov/2025:09:29:46] ENGINE Bus STARTED
Nov 24 09:29:48 compute-1 ceph-mon[80009]: [24/Nov/2025:09:29:46] ENGINE Client ('192.168.122.100', 37806) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Nov 24 09:29:48 compute-1 ceph-mon[80009]: pgmap v4: 353 pgs: 353 active+clean; 456 KiB data, 125 MiB used, 60 GiB / 60 GiB avail
Nov 24 09:29:48 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]: dispatch
Nov 24 09:29:48 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]: dispatch
Nov 24 09:29:48 compute-1 ceph-mon[80009]: 8.b scrub starts
Nov 24 09:29:48 compute-1 ceph-mon[80009]: 8.b scrub ok
Nov 24 09:29:48 compute-1 ceph-mon[80009]: 11.12 scrub starts
Nov 24 09:29:48 compute-1 ceph-mon[80009]: 11.12 scrub ok
Nov 24 09:29:48 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:29:48 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:29:48 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:29:48 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:29:48 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:29:48.116 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:29:48 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e78 e78: 3 total, 3 up, 3 in
Nov 24 09:29:48 compute-1 ceph-osd[77497]: log_channel(cluster) log [DBG] : 11.1 scrub starts
Nov 24 09:29:48 compute-1 sudo[87108]: pam_unix(sudo:session): session closed for user root
Nov 24 09:29:48 compute-1 ceph-osd[77497]: log_channel(cluster) log [DBG] : 11.1 scrub ok
Nov 24 09:29:48 compute-1 sudo[87165]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 09:29:48 compute-1 sudo[87165]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:29:48 compute-1 sudo[87165]: pam_unix(sudo:session): session closed for user root
Nov 24 09:29:48 compute-1 sudo[87190]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 list-networks
Nov 24 09:29:48 compute-1 sudo[87190]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:29:48 compute-1 sudo[87190]: pam_unix(sudo:session): session closed for user root
Nov 24 09:29:48 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Nov 24 09:29:48 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Nov 24 09:29:48 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0)
Nov 24 09:29:48 compute-1 ceph-mon[80009]: log_channel(audit) log [INF] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Nov 24 09:29:48 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:29:48 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcae8002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:29:49 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 24 09:29:49 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 24 09:29:49 compute-1 ceph-mon[80009]: 11.11 scrub starts
Nov 24 09:29:49 compute-1 ceph-mon[80009]: 11.11 scrub ok
Nov 24 09:29:49 compute-1 ceph-mon[80009]: mgrmap e30: compute-0.mauvni(active, since 2s), standbys: compute-2.rzcnzg, compute-1.qelqsg
Nov 24 09:29:49 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]': finished
Nov 24 09:29:49 compute-1 ceph-mon[80009]: osdmap e78: 3 total, 3 up, 3 in
Nov 24 09:29:49 compute-1 ceph-mon[80009]: 11.8 scrub starts
Nov 24 09:29:49 compute-1 ceph-mon[80009]: 11.8 scrub ok
Nov 24 09:29:49 compute-1 ceph-mon[80009]: 11.1 scrub starts
Nov 24 09:29:49 compute-1 ceph-mon[80009]: 11.1 scrub ok
Nov 24 09:29:49 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:29:49 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:29:49 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Nov 24 09:29:49 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Nov 24 09:29:49 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:29:49 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:29:49 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Nov 24 09:29:49 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:29:49 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcb14009ad0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:29:49 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Nov 24 09:29:49 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"} v 0)
Nov 24 09:29:49 compute-1 ceph-mon[80009]: log_channel(audit) log [INF] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]: dispatch
Nov 24 09:29:49 compute-1 ceph-osd[77497]: log_channel(cluster) log [DBG] : 8.8 deep-scrub starts
Nov 24 09:29:49 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:29:49 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcaf00023e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:29:49 compute-1 ceph-osd[77497]: log_channel(cluster) log [DBG] : 8.8 deep-scrub ok
Nov 24 09:29:49 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:29:49 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:29:49 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:29:49.838 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:29:50 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 24 09:29:50 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:29:50 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:29:50 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:29:50.119 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:29:50 compute-1 ceph-mon[80009]: 8.1d deep-scrub starts
Nov 24 09:29:50 compute-1 ceph-mon[80009]: 8.1d deep-scrub ok
Nov 24 09:29:50 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:29:50 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:29:50 compute-1 ceph-mon[80009]: pgmap v6: 353 pgs: 353 active+clean; 456 KiB data, 125 MiB used, 60 GiB / 60 GiB avail
Nov 24 09:29:50 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]: dispatch
Nov 24 09:29:50 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]: dispatch
Nov 24 09:29:50 compute-1 ceph-mon[80009]: 12.2 deep-scrub starts
Nov 24 09:29:50 compute-1 ceph-mon[80009]: 12.2 deep-scrub ok
Nov 24 09:29:50 compute-1 ceph-mon[80009]: 8.8 deep-scrub starts
Nov 24 09:29:50 compute-1 ceph-mon[80009]: 8.8 deep-scrub ok
Nov 24 09:29:50 compute-1 ceph-mon[80009]: 10.2 scrub starts
Nov 24 09:29:50 compute-1 ceph-mon[80009]: 10.2 scrub ok
Nov 24 09:29:50 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 24 09:29:50 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Nov 24 09:29:50 compute-1 ceph-mon[80009]: log_channel(audit) log [INF] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 24 09:29:50 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e79 e79: 3 total, 3 up, 3 in
Nov 24 09:29:50 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Nov 24 09:29:50 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Nov 24 09:29:50 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0)
Nov 24 09:29:50 compute-1 ceph-mon[80009]: log_channel(audit) log [INF] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Nov 24 09:29:50 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 24 09:29:50 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 09:29:50 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Nov 24 09:29:50 compute-1 ceph-mon[80009]: log_channel(audit) log [INF] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 09:29:50 compute-1 ceph-osd[77497]: log_channel(cluster) log [DBG] : 11.5 scrub starts
Nov 24 09:29:50 compute-1 ceph-osd[77497]: log_channel(cluster) log [DBG] : 11.5 scrub ok
Nov 24 09:29:50 compute-1 sudo[87233]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Nov 24 09:29:50 compute-1 sudo[87233]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:29:50 compute-1 sudo[87233]: pam_unix(sudo:session): session closed for user root
Nov 24 09:29:50 compute-1 sudo[87258]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-84a084c3-61a7-5de7-8207-1f88efa59a64/etc/ceph
Nov 24 09:29:50 compute-1 sudo[87258]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:29:50 compute-1 sudo[87258]: pam_unix(sudo:session): session closed for user root
Nov 24 09:29:50 compute-1 sudo[87283]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-84a084c3-61a7-5de7-8207-1f88efa59a64/etc/ceph/ceph.conf.new
Nov 24 09:29:50 compute-1 sudo[87283]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:29:50 compute-1 sudo[87283]: pam_unix(sudo:session): session closed for user root
Nov 24 09:29:50 compute-1 sudo[87308]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-84a084c3-61a7-5de7-8207-1f88efa59a64
Nov 24 09:29:50 compute-1 sudo[87308]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:29:50 compute-1 sudo[87308]: pam_unix(sudo:session): session closed for user root
Nov 24 09:29:50 compute-1 sshd-session[87316]: Accepted publickey for zuul from 192.168.122.30 port 56058 ssh2: ECDSA SHA256:MeSde0OmmlmFVnLWx/OKNxgeUUFhxUB3MA0eUyH5QEE
Nov 24 09:29:50 compute-1 sudo[87335]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-84a084c3-61a7-5de7-8207-1f88efa59a64/etc/ceph/ceph.conf.new
Nov 24 09:29:50 compute-1 sudo[87335]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:29:50 compute-1 sudo[87335]: pam_unix(sudo:session): session closed for user root
Nov 24 09:29:50 compute-1 systemd-logind[823]: New session 37 of user zuul.
Nov 24 09:29:50 compute-1 systemd[1]: Started Session 37 of User zuul.
Nov 24 09:29:50 compute-1 sshd-session[87316]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 24 09:29:50 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:29:50 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcafc001cb0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:29:50 compute-1 sudo[87385]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-84a084c3-61a7-5de7-8207-1f88efa59a64/etc/ceph/ceph.conf.new
Nov 24 09:29:50 compute-1 sudo[87385]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:29:51 compute-1 sudo[87385]: pam_unix(sudo:session): session closed for user root
Nov 24 09:29:51 compute-1 sudo[87437]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-84a084c3-61a7-5de7-8207-1f88efa59a64/etc/ceph/ceph.conf.new
Nov 24 09:29:51 compute-1 sudo[87437]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:29:51 compute-1 sudo[87437]: pam_unix(sudo:session): session closed for user root
Nov 24 09:29:51 compute-1 sudo[87487]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-84a084c3-61a7-5de7-8207-1f88efa59a64/etc/ceph/ceph.conf.new /etc/ceph/ceph.conf
Nov 24 09:29:51 compute-1 sudo[87487]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:29:51 compute-1 sudo[87487]: pam_unix(sudo:session): session closed for user root
Nov 24 09:29:51 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:29:51 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:29:51 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 24 09:29:51 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 24 09:29:51 compute-1 ceph-mon[80009]: mgrmap e31: compute-0.mauvni(active, since 5s), standbys: compute-2.rzcnzg, compute-1.qelqsg
Nov 24 09:29:51 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]': finished
Nov 24 09:29:51 compute-1 ceph-mon[80009]: osdmap e79: 3 total, 3 up, 3 in
Nov 24 09:29:51 compute-1 ceph-mon[80009]: 12.3 scrub starts
Nov 24 09:29:51 compute-1 ceph-mon[80009]: 12.3 scrub ok
Nov 24 09:29:51 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:29:51 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:29:51 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Nov 24 09:29:51 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Nov 24 09:29:51 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 09:29:51 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 09:29:51 compute-1 ceph-mon[80009]: Updating compute-0:/etc/ceph/ceph.conf
Nov 24 09:29:51 compute-1 ceph-mon[80009]: Updating compute-1:/etc/ceph/ceph.conf
Nov 24 09:29:51 compute-1 ceph-mon[80009]: Updating compute-2:/etc/ceph/ceph.conf
Nov 24 09:29:51 compute-1 ceph-mon[80009]: 11.5 scrub starts
Nov 24 09:29:51 compute-1 ceph-mon[80009]: 11.5 scrub ok
Nov 24 09:29:51 compute-1 ceph-mon[80009]: 10.13 scrub starts
Nov 24 09:29:51 compute-1 ceph-mon[80009]: 10.13 scrub ok
Nov 24 09:29:51 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:29:51 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcae8003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:29:51 compute-1 sudo[87512]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/config
Nov 24 09:29:51 compute-1 sudo[87512]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:29:51 compute-1 sudo[87512]: pam_unix(sudo:session): session closed for user root
Nov 24 09:29:51 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e80 e80: 3 total, 3 up, 3 in
Nov 24 09:29:51 compute-1 sudo[87537]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-84a084c3-61a7-5de7-8207-1f88efa59a64/var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/config
Nov 24 09:29:51 compute-1 sudo[87537]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:29:51 compute-1 sudo[87537]: pam_unix(sudo:session): session closed for user root
Nov 24 09:29:51 compute-1 sudo[87562]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-84a084c3-61a7-5de7-8207-1f88efa59a64/var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/config/ceph.conf.new
Nov 24 09:29:51 compute-1 sudo[87562]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:29:51 compute-1 sudo[87562]: pam_unix(sudo:session): session closed for user root
Nov 24 09:29:51 compute-1 sudo[87594]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-84a084c3-61a7-5de7-8207-1f88efa59a64
Nov 24 09:29:51 compute-1 sudo[87594]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:29:51 compute-1 sudo[87594]: pam_unix(sudo:session): session closed for user root
Nov 24 09:29:51 compute-1 sudo[87637]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-84a084c3-61a7-5de7-8207-1f88efa59a64/var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/config/ceph.conf.new
Nov 24 09:29:51 compute-1 sudo[87637]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:29:51 compute-1 sudo[87637]: pam_unix(sudo:session): session closed for user root
Nov 24 09:29:51 compute-1 ceph-osd[77497]: log_channel(cluster) log [DBG] : 11.4 scrub starts
Nov 24 09:29:51 compute-1 ceph-osd[77497]: log_channel(cluster) log [DBG] : 11.4 scrub ok
Nov 24 09:29:51 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:29:51 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcb14009ad0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:29:51 compute-1 sudo[87727]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-84a084c3-61a7-5de7-8207-1f88efa59a64/var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/config/ceph.conf.new
Nov 24 09:29:51 compute-1 sudo[87727]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:29:51 compute-1 sudo[87727]: pam_unix(sudo:session): session closed for user root
Nov 24 09:29:51 compute-1 sudo[87783]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-84a084c3-61a7-5de7-8207-1f88efa59a64/var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/config/ceph.conf.new
Nov 24 09:29:51 compute-1 sudo[87783]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:29:51 compute-1 sudo[87783]: pam_unix(sudo:session): session closed for user root
Nov 24 09:29:51 compute-1 sudo[87808]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-84a084c3-61a7-5de7-8207-1f88efa59a64/var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/config/ceph.conf.new /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/config/ceph.conf
Nov 24 09:29:51 compute-1 sudo[87808]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:29:51 compute-1 sudo[87808]: pam_unix(sudo:session): session closed for user root
Nov 24 09:29:51 compute-1 sudo[87833]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Nov 24 09:29:51 compute-1 sudo[87833]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:29:51 compute-1 sudo[87833]: pam_unix(sudo:session): session closed for user root
Nov 24 09:29:51 compute-1 sudo[87858]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-84a084c3-61a7-5de7-8207-1f88efa59a64/etc/ceph
Nov 24 09:29:51 compute-1 sudo[87858]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:29:51 compute-1 sudo[87858]: pam_unix(sudo:session): session closed for user root
Nov 24 09:29:51 compute-1 sudo[87883]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-84a084c3-61a7-5de7-8207-1f88efa59a64/etc/ceph/ceph.client.admin.keyring.new
Nov 24 09:29:51 compute-1 sudo[87883]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:29:51 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:29:51 compute-1 sudo[87883]: pam_unix(sudo:session): session closed for user root
Nov 24 09:29:51 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 24 09:29:51 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:29:51.839 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 24 09:29:51 compute-1 python3.9[87782]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 24 09:29:51 compute-1 sudo[87909]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-84a084c3-61a7-5de7-8207-1f88efa59a64
Nov 24 09:29:51 compute-1 sudo[87909]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:29:51 compute-1 sudo[87909]: pam_unix(sudo:session): session closed for user root
Nov 24 09:29:51 compute-1 sudo[87941]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-84a084c3-61a7-5de7-8207-1f88efa59a64/etc/ceph/ceph.client.admin.keyring.new
Nov 24 09:29:51 compute-1 sudo[87941]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:29:51 compute-1 sudo[87941]: pam_unix(sudo:session): session closed for user root
Nov 24 09:29:52 compute-1 sudo[88009]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-84a084c3-61a7-5de7-8207-1f88efa59a64/etc/ceph/ceph.client.admin.keyring.new
Nov 24 09:29:52 compute-1 sudo[88009]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:29:52 compute-1 sudo[88009]: pam_unix(sudo:session): session closed for user root
Nov 24 09:29:52 compute-1 sudo[88045]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-84a084c3-61a7-5de7-8207-1f88efa59a64/etc/ceph/ceph.client.admin.keyring.new
Nov 24 09:29:52 compute-1 sudo[88045]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:29:52 compute-1 sudo[88045]: pam_unix(sudo:session): session closed for user root
Nov 24 09:29:52 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:29:52 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 24 09:29:52 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:29:52.121 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 24 09:29:52 compute-1 sudo[88083]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-84a084c3-61a7-5de7-8207-1f88efa59a64/etc/ceph/ceph.client.admin.keyring.new /etc/ceph/ceph.client.admin.keyring
Nov 24 09:29:52 compute-1 sudo[88083]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:29:52 compute-1 sudo[88083]: pam_unix(sudo:session): session closed for user root
Nov 24 09:29:52 compute-1 sudo[88108]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/config
Nov 24 09:29:52 compute-1 sudo[88108]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:29:52 compute-1 sudo[88108]: pam_unix(sudo:session): session closed for user root
Nov 24 09:29:52 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e81 e81: 3 total, 3 up, 3 in
Nov 24 09:29:52 compute-1 ceph-mon[80009]: Updating compute-0:/var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/config/ceph.conf
Nov 24 09:29:52 compute-1 ceph-mon[80009]: Updating compute-1:/var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/config/ceph.conf
Nov 24 09:29:52 compute-1 ceph-mon[80009]: osdmap e80: 3 total, 3 up, 3 in
Nov 24 09:29:52 compute-1 ceph-mon[80009]: pgmap v9: 353 pgs: 2 remapped+peering, 351 active+clean; 456 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 35 KiB/s rd, 0 B/s wr, 13 op/s
Nov 24 09:29:52 compute-1 ceph-mon[80009]: Updating compute-2:/var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/config/ceph.conf
Nov 24 09:29:52 compute-1 ceph-mon[80009]: 11.3 scrub starts
Nov 24 09:29:52 compute-1 ceph-mon[80009]: 11.3 scrub ok
Nov 24 09:29:52 compute-1 ceph-mon[80009]: 11.4 scrub starts
Nov 24 09:29:52 compute-1 ceph-mon[80009]: 11.4 scrub ok
Nov 24 09:29:52 compute-1 ceph-mon[80009]: Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Nov 24 09:29:52 compute-1 ceph-mon[80009]: 12.12 deep-scrub starts
Nov 24 09:29:52 compute-1 ceph-mon[80009]: 12.12 deep-scrub ok
Nov 24 09:29:52 compute-1 sudo[88133]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-84a084c3-61a7-5de7-8207-1f88efa59a64/var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/config
Nov 24 09:29:52 compute-1 sudo[88133]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:29:52 compute-1 sudo[88133]: pam_unix(sudo:session): session closed for user root
Nov 24 09:29:52 compute-1 sudo[88168]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-84a084c3-61a7-5de7-8207-1f88efa59a64/var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/config/ceph.client.admin.keyring.new
Nov 24 09:29:52 compute-1 sudo[88168]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:29:52 compute-1 sudo[88168]: pam_unix(sudo:session): session closed for user root
Nov 24 09:29:52 compute-1 sudo[88194]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-84a084c3-61a7-5de7-8207-1f88efa59a64
Nov 24 09:29:52 compute-1 sudo[88194]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:29:52 compute-1 sudo[88194]: pam_unix(sudo:session): session closed for user root
Nov 24 09:29:52 compute-1 ceph-osd[77497]: log_channel(cluster) log [DBG] : 11.f scrub starts
Nov 24 09:29:52 compute-1 ceph-osd[77497]: log_channel(cluster) log [DBG] : 11.f scrub ok
Nov 24 09:29:52 compute-1 sudo[88219]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-84a084c3-61a7-5de7-8207-1f88efa59a64/var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/config/ceph.client.admin.keyring.new
Nov 24 09:29:52 compute-1 sudo[88219]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:29:52 compute-1 sudo[88219]: pam_unix(sudo:session): session closed for user root
Nov 24 09:29:52 compute-1 sudo[88279]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-84a084c3-61a7-5de7-8207-1f88efa59a64/var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/config/ceph.client.admin.keyring.new
Nov 24 09:29:52 compute-1 sudo[88279]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:29:52 compute-1 sudo[88279]: pam_unix(sudo:session): session closed for user root
Nov 24 09:29:52 compute-1 sudo[88316]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-84a084c3-61a7-5de7-8207-1f88efa59a64/var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/config/ceph.client.admin.keyring.new
Nov 24 09:29:52 compute-1 sudo[88316]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:29:52 compute-1 sudo[88316]: pam_unix(sudo:session): session closed for user root
Nov 24 09:29:52 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e82 e82: 3 total, 3 up, 3 in
Nov 24 09:29:52 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e82 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 09:29:52 compute-1 sudo[88341]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-84a084c3-61a7-5de7-8207-1f88efa59a64/var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/config/ceph.client.admin.keyring.new /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/config/ceph.client.admin.keyring
Nov 24 09:29:52 compute-1 sudo[88341]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:29:52 compute-1 sudo[88341]: pam_unix(sudo:session): session closed for user root
Nov 24 09:29:52 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Nov 24 09:29:52 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Nov 24 09:29:52 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 24 09:29:52 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:29:52 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcaf00023e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:29:52 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 24 09:29:53 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:29:53 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcafc001cb0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:29:53 compute-1 ceph-mon[80009]: Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Nov 24 09:29:53 compute-1 ceph-mon[80009]: Updating compute-1:/var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/config/ceph.client.admin.keyring
Nov 24 09:29:53 compute-1 ceph-mon[80009]: Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Nov 24 09:29:53 compute-1 ceph-mon[80009]: osdmap e81: 3 total, 3 up, 3 in
Nov 24 09:29:53 compute-1 ceph-mon[80009]: Updating compute-0:/var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/config/ceph.client.admin.keyring
Nov 24 09:29:53 compute-1 ceph-mon[80009]: 8.1f scrub starts
Nov 24 09:29:53 compute-1 ceph-mon[80009]: 8.1f scrub ok
Nov 24 09:29:53 compute-1 ceph-mon[80009]: 11.f scrub starts
Nov 24 09:29:53 compute-1 ceph-mon[80009]: 11.f scrub ok
Nov 24 09:29:53 compute-1 ceph-mon[80009]: 12.c scrub starts
Nov 24 09:29:53 compute-1 ceph-mon[80009]: 12.c scrub ok
Nov 24 09:29:53 compute-1 ceph-mon[80009]: osdmap e82: 3 total, 3 up, 3 in
Nov 24 09:29:53 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:29:53 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:29:53 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:29:53 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:29:53 compute-1 sudo[88491]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lcjcottrzvewftmhjoglvkdlulxspnie ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976593.1447685-58-241835396690121/AnsiballZ_command.py'
Nov 24 09:29:53 compute-1 ceph-osd[77497]: log_channel(cluster) log [DBG] : 8.18 scrub starts
Nov 24 09:29:53 compute-1 sudo[88491]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:29:53 compute-1 ceph-osd[77497]: log_channel(cluster) log [DBG] : 8.18 scrub ok
Nov 24 09:29:53 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:29:53 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcae8003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:29:53 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Nov 24 09:29:53 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Nov 24 09:29:53 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Nov 24 09:29:53 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Nov 24 09:29:53 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Nov 24 09:29:53 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 09:29:53 compute-1 python3.9[88493]: ansible-ansible.legacy.command Invoked with _raw_params=set -euxo pipefail
                                            pushd /var/tmp
                                            curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz
                                            pushd repo-setup-main
                                            python3 -m venv ./venv
                                            PBR_VERSION=0.0.0 ./venv/bin/pip install ./
                                            ./venv/bin/repo-setup current-podified -b antelope
                                            popd
                                            rm -rf repo-setup-main
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 09:29:53 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Nov 24 09:29:53 compute-1 ceph-mon[80009]: log_channel(audit) log [INF] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 09:29:53 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 24 09:29:53 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 09:29:53 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e83 e83: 3 total, 3 up, 3 in
Nov 24 09:29:53 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:29:53 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:29:53 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:29:53.843 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:29:54 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:29:54 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:29:54 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:29:54.125 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:29:54 compute-1 ceph-mon[80009]: Updating compute-2:/var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/config/ceph.client.admin.keyring
Nov 24 09:29:54 compute-1 ceph-mon[80009]: pgmap v12: 353 pgs: 2 remapped+peering, 351 active+clean; 456 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 42 KiB/s rd, 0 B/s wr, 13 op/s
Nov 24 09:29:54 compute-1 ceph-mon[80009]: 10.4 deep-scrub starts
Nov 24 09:29:54 compute-1 ceph-mon[80009]: 10.4 deep-scrub ok
Nov 24 09:29:54 compute-1 ceph-mon[80009]: 8.18 scrub starts
Nov 24 09:29:54 compute-1 ceph-mon[80009]: 8.18 scrub ok
Nov 24 09:29:54 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:29:54 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:29:54 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:29:54 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:29:54 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 09:29:54 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 09:29:54 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 09:29:54 compute-1 ceph-mon[80009]: 12.b scrub starts
Nov 24 09:29:54 compute-1 ceph-mon[80009]: 12.b scrub ok
Nov 24 09:29:54 compute-1 ceph-mon[80009]: osdmap e83: 3 total, 3 up, 3 in
Nov 24 09:29:54 compute-1 ceph-osd[77497]: log_channel(cluster) log [DBG] : 11.7 scrub starts
Nov 24 09:29:54 compute-1 ceph-osd[77497]: log_channel(cluster) log [DBG] : 11.7 scrub ok
Nov 24 09:29:54 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:29:54 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcb14009ad0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:29:55 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:29:55 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcaf0003f10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:29:55 compute-1 ceph-mon[80009]: 12.1d scrub starts
Nov 24 09:29:55 compute-1 ceph-mon[80009]: 12.1d scrub ok
Nov 24 09:29:55 compute-1 ceph-mon[80009]: 11.7 scrub starts
Nov 24 09:29:55 compute-1 ceph-mon[80009]: 11.7 scrub ok
Nov 24 09:29:55 compute-1 ceph-mon[80009]: 12.e scrub starts
Nov 24 09:29:55 compute-1 ceph-mon[80009]: 12.e scrub ok
Nov 24 09:29:55 compute-1 ceph-osd[77497]: log_channel(cluster) log [DBG] : 11.1c scrub starts
Nov 24 09:29:55 compute-1 ceph-osd[77497]: log_channel(cluster) log [DBG] : 11.1c scrub ok
Nov 24 09:29:55 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:29:55 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcafc001cb0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:29:55 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:29:55 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 24 09:29:55 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:29:55.845 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 24 09:29:56 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:29:56 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:29:56 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:29:56.129 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:29:56 compute-1 ceph-mon[80009]: pgmap v14: 353 pgs: 2 remapped+peering, 351 active+clean; 456 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 42 KiB/s rd, 0 B/s wr, 13 op/s
Nov 24 09:29:56 compute-1 ceph-mon[80009]: 8.5 deep-scrub starts
Nov 24 09:29:56 compute-1 ceph-mon[80009]: 8.5 deep-scrub ok
Nov 24 09:29:56 compute-1 ceph-mon[80009]: 11.1c scrub starts
Nov 24 09:29:56 compute-1 ceph-mon[80009]: 11.1c scrub ok
Nov 24 09:29:56 compute-1 ceph-mon[80009]: 10.8 scrub starts
Nov 24 09:29:56 compute-1 ceph-mon[80009]: 10.8 scrub ok
Nov 24 09:29:56 compute-1 ceph-osd[77497]: log_channel(cluster) log [DBG] : 11.1e scrub starts
Nov 24 09:29:56 compute-1 ceph-osd[77497]: log_channel(cluster) log [DBG] : 11.1e scrub ok
Nov 24 09:29:56 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:29:56 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcae8003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:29:57 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:29:57 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcae8003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:29:57 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"} v 0)
Nov 24 09:29:57 compute-1 ceph-mon[80009]: log_channel(audit) log [INF] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]: dispatch
Nov 24 09:29:57 compute-1 ceph-mon[80009]: 12.1e deep-scrub starts
Nov 24 09:29:57 compute-1 ceph-mon[80009]: 12.1e deep-scrub ok
Nov 24 09:29:57 compute-1 ceph-mon[80009]: 11.1e scrub starts
Nov 24 09:29:57 compute-1 ceph-mon[80009]: 11.1e scrub ok
Nov 24 09:29:57 compute-1 ceph-mon[80009]: 12.19 scrub starts
Nov 24 09:29:57 compute-1 ceph-mon[80009]: 12.19 scrub ok
Nov 24 09:29:57 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]: dispatch
Nov 24 09:29:57 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]: dispatch
Nov 24 09:29:57 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e84 e84: 3 total, 3 up, 3 in
Nov 24 09:29:57 compute-1 ceph-osd[77497]: log_channel(cluster) log [DBG] : 8.1b deep-scrub starts
Nov 24 09:29:57 compute-1 ceph-osd[77497]: log_channel(cluster) log [DBG] : 8.1b deep-scrub ok
Nov 24 09:29:57 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:29:57 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcaf0003f10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:29:57 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e85 e85: 3 total, 3 up, 3 in
Nov 24 09:29:57 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e85 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 09:29:57 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:29:57 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 24 09:29:57 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:29:57.848 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 24 09:29:58 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:29:58 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:29:58 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:29:58.132 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:29:58 compute-1 ceph-mon[80009]: pgmap v15: 353 pgs: 353 active+clean; 456 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 36 B/s, 1 objects/s recovering
Nov 24 09:29:58 compute-1 ceph-mon[80009]: 8.1c scrub starts
Nov 24 09:29:58 compute-1 ceph-mon[80009]: 8.1c scrub ok
Nov 24 09:29:58 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]': finished
Nov 24 09:29:58 compute-1 ceph-mon[80009]: osdmap e84: 3 total, 3 up, 3 in
Nov 24 09:29:58 compute-1 ceph-mon[80009]: 8.1b deep-scrub starts
Nov 24 09:29:58 compute-1 ceph-mon[80009]: 8.1b deep-scrub ok
Nov 24 09:29:58 compute-1 ceph-mon[80009]: 10.18 deep-scrub starts
Nov 24 09:29:58 compute-1 ceph-mon[80009]: 10.18 deep-scrub ok
Nov 24 09:29:58 compute-1 ceph-mon[80009]: osdmap e85: 3 total, 3 up, 3 in
Nov 24 09:29:58 compute-1 ceph-osd[77497]: log_channel(cluster) log [DBG] : 11.1d scrub starts
Nov 24 09:29:58 compute-1 ceph-osd[77497]: log_channel(cluster) log [DBG] : 11.1d scrub ok
Nov 24 09:29:58 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 24 09:29:58 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 24 09:29:58 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.prometheus}] v 0)
Nov 24 09:29:58 compute-1 sudo[88520]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 09:29:58 compute-1 sudo[88520]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:29:58 compute-1 sudo[88520]: pam_unix(sudo:session): session closed for user root
Nov 24 09:29:58 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e86 e86: 3 total, 3 up, 3 in
Nov 24 09:29:58 compute-1 sudo[88545]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 24 09:29:58 compute-1 sudo[88545]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:29:58 compute-1 sudo[88545]: pam_unix(sudo:session): session closed for user root
Nov 24 09:29:58 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0)
Nov 24 09:29:58 compute-1 ceph-mon[80009]: log_channel(audit) log [INF] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Nov 24 09:29:58 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0)
Nov 24 09:29:58 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Nov 24 09:29:58 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 24 09:29:58 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 09:29:58 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:29:58 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcafc001cb0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:29:59 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:29:59 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcae8003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:29:59 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"} v 0)
Nov 24 09:29:59 compute-1 ceph-mon[80009]: log_channel(audit) log [INF] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]: dispatch
Nov 24 09:29:59 compute-1 ceph-mon[80009]: 12.4 scrub starts
Nov 24 09:29:59 compute-1 ceph-mon[80009]: 12.4 scrub ok
Nov 24 09:29:59 compute-1 ceph-mon[80009]: 11.1d scrub starts
Nov 24 09:29:59 compute-1 ceph-mon[80009]: 11.1d scrub ok
Nov 24 09:29:59 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:29:59 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:29:59 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:29:59 compute-1 ceph-mon[80009]: 12.1c scrub starts
Nov 24 09:29:59 compute-1 ceph-mon[80009]: 12.1c scrub ok
Nov 24 09:29:59 compute-1 ceph-mon[80009]: osdmap e86: 3 total, 3 up, 3 in
Nov 24 09:29:59 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Nov 24 09:29:59 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Nov 24 09:29:59 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 09:29:59 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]: dispatch
Nov 24 09:29:59 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]: dispatch
Nov 24 09:29:59 compute-1 ceph-osd[77497]: log_channel(cluster) log [DBG] : 8.4 scrub starts
Nov 24 09:29:59 compute-1 ceph-osd[77497]: log_channel(cluster) log [DBG] : 8.4 scrub ok
Nov 24 09:29:59 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:29:59 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcae8003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:29:59 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 24 09:29:59 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 24 09:29:59 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-0.mauvni", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0)
Nov 24 09:29:59 compute-1 ceph-mon[80009]: log_channel(audit) log [INF] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.mauvni", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Nov 24 09:29:59 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr services"} v 0)
Nov 24 09:29:59 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 24 09:29:59 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 24 09:29:59 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 09:29:59 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e87 e87: 3 total, 3 up, 3 in
Nov 24 09:29:59 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 87 pg[9.1a( empty local-lis/les=0/0 n=0 ec=54/39 lis/c=54/54 les/c/f=55/55/0 sis=87) [1] r=0 lpr=87 pi=[54,87)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:29:59 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 87 pg[9.a( empty local-lis/les=0/0 n=0 ec=54/39 lis/c=54/54 les/c/f=55/55/0 sis=87) [1] r=0 lpr=87 pi=[54,87)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:29:59 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:29:59 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 24 09:29:59 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:29:59.850 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 24 09:30:00 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:30:00 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 24 09:30:00 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:30:00.134 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 24 09:30:00 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 09:30:00 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:30:00 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 24 09:30:00 compute-1 ceph-osd[77497]: log_channel(cluster) log [DBG] : 11.1b scrub starts
Nov 24 09:30:00 compute-1 ceph-osd[77497]: log_channel(cluster) log [DBG] : 11.1b scrub ok
Nov 24 09:30:00 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 24 09:30:00 compute-1 ceph-mon[80009]: Reconfiguring mon.compute-0 (monmap changed)...
Nov 24 09:30:00 compute-1 ceph-mon[80009]: Reconfiguring daemon mon.compute-0 on compute-0
Nov 24 09:30:00 compute-1 ceph-mon[80009]: 11.19 scrub starts
Nov 24 09:30:00 compute-1 ceph-mon[80009]: 11.19 scrub ok
Nov 24 09:30:00 compute-1 ceph-mon[80009]: pgmap v19: 353 pgs: 353 active+clean; 456 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 40 B/s, 2 objects/s recovering
Nov 24 09:30:00 compute-1 ceph-mon[80009]: 8.4 scrub starts
Nov 24 09:30:00 compute-1 ceph-mon[80009]: 8.4 scrub ok
Nov 24 09:30:00 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:30:00 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:30:00 compute-1 ceph-mon[80009]: 10.19 scrub starts
Nov 24 09:30:00 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.mauvni", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Nov 24 09:30:00 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.mauvni", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Nov 24 09:30:00 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 24 09:30:00 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 09:30:00 compute-1 ceph-mon[80009]: 10.19 scrub ok
Nov 24 09:30:00 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]': finished
Nov 24 09:30:00 compute-1 ceph-mon[80009]: osdmap e87: 3 total, 3 up, 3 in
Nov 24 09:30:00 compute-1 ceph-mon[80009]: overall HEALTH_OK
Nov 24 09:30:00 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:30:00 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0)
Nov 24 09:30:00 compute-1 ceph-mon[80009]: log_channel(audit) log [INF] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Nov 24 09:30:00 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 24 09:30:00 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 09:30:00 compute-1 sudo[88491]: pam_unix(sudo:session): session closed for user root
Nov 24 09:30:00 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e88 e88: 3 total, 3 up, 3 in
Nov 24 09:30:00 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 88 pg[9.a( empty local-lis/les=0/0 n=0 ec=54/39 lis/c=54/54 les/c/f=55/55/0 sis=88) [1]/[0] r=-1 lpr=88 pi=[54,88)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:30:00 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 88 pg[9.1a( empty local-lis/les=0/0 n=0 ec=54/39 lis/c=54/54 les/c/f=55/55/0 sis=88) [1]/[0] r=-1 lpr=88 pi=[54,88)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:30:00 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 88 pg[9.1a( empty local-lis/les=0/0 n=0 ec=54/39 lis/c=54/54 les/c/f=55/55/0 sis=88) [1]/[0] r=-1 lpr=88 pi=[54,88)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 24 09:30:00 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 88 pg[9.a( empty local-lis/les=0/0 n=0 ec=54/39 lis/c=54/54 les/c/f=55/55/0 sis=88) [1]/[0] r=-1 lpr=88 pi=[54,88)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 24 09:30:00 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:30:00 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcae8003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:30:01 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 24 09:30:01 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 24 09:30:01 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "auth get", "entity": "osd.0"} v 0)
Nov 24 09:30:01 compute-1 ceph-mon[80009]: log_channel(audit) log [INF] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
Nov 24 09:30:01 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 24 09:30:01 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 09:30:01 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:30:01 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcae4000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:30:01 compute-1 ceph-osd[77497]: log_channel(cluster) log [DBG] : 11.1a scrub starts
Nov 24 09:30:01 compute-1 ceph-osd[77497]: log_channel(cluster) log [DBG] : 11.1a scrub ok
Nov 24 09:30:01 compute-1 ceph-mon[80009]: Reconfiguring mgr.compute-0.mauvni (monmap changed)...
Nov 24 09:30:01 compute-1 ceph-mon[80009]: Reconfiguring daemon mgr.compute-0.mauvni on compute-0
Nov 24 09:30:01 compute-1 ceph-mon[80009]: 8.f scrub starts
Nov 24 09:30:01 compute-1 ceph-mon[80009]: 8.f scrub ok
Nov 24 09:30:01 compute-1 ceph-mon[80009]: 11.1b scrub starts
Nov 24 09:30:01 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:30:01 compute-1 ceph-mon[80009]: 11.1b scrub ok
Nov 24 09:30:01 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:30:01 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Nov 24 09:30:01 compute-1 ceph-mon[80009]: Reconfiguring crash.compute-0 (monmap changed)...
Nov 24 09:30:01 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Nov 24 09:30:01 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 09:30:01 compute-1 ceph-mon[80009]: Reconfiguring daemon crash.compute-0 on compute-0
Nov 24 09:30:01 compute-1 ceph-mon[80009]: 12.8 scrub starts
Nov 24 09:30:01 compute-1 ceph-mon[80009]: 12.8 scrub ok
Nov 24 09:30:01 compute-1 ceph-mon[80009]: osdmap e88: 3 total, 3 up, 3 in
Nov 24 09:30:01 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:30:01 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:30:01 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
Nov 24 09:30:01 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 09:30:01 compute-1 ceph-mon[80009]: 8.3 deep-scrub starts
Nov 24 09:30:01 compute-1 ceph-mon[80009]: 8.3 deep-scrub ok
Nov 24 09:30:01 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:30:01 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcb14009ad0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:30:01 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:30:01 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:30:01 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:30:01.853 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:30:01 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e89 e89: 3 total, 3 up, 3 in
Nov 24 09:30:01 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 24 09:30:01 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 24 09:30:01 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.zlrxyg", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]} v 0)
Nov 24 09:30:01 compute-1 ceph-mon[80009]: log_channel(audit) log [INF] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.zlrxyg", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Nov 24 09:30:01 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config set, name=rgw_frontends}] v 0)
Nov 24 09:30:01 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 24 09:30:01 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 09:30:02 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:30:02 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 24 09:30:02 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:30:02.137 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 24 09:30:02 compute-1 sshd-session[87365]: Connection closed by 192.168.122.30 port 56058
Nov 24 09:30:02 compute-1 sshd-session[87316]: pam_unix(sshd:session): session closed for user zuul
Nov 24 09:30:02 compute-1 systemd[1]: session-37.scope: Deactivated successfully.
Nov 24 09:30:02 compute-1 systemd[1]: session-37.scope: Consumed 7.897s CPU time.
Nov 24 09:30:02 compute-1 systemd-logind[823]: Session 37 logged out. Waiting for processes to exit.
Nov 24 09:30:02 compute-1 systemd-logind[823]: Removed session 37.
Nov 24 09:30:02 compute-1 ceph-mon[80009]: Reconfiguring osd.0 (monmap changed)...
Nov 24 09:30:02 compute-1 ceph-mon[80009]: Reconfiguring daemon osd.0 on compute-0
Nov 24 09:30:02 compute-1 ceph-mon[80009]: pgmap v22: 353 pgs: 2 remapped+peering, 351 active+clean; 456 KiB data, 125 MiB used, 60 GiB / 60 GiB avail
Nov 24 09:30:02 compute-1 ceph-mon[80009]: 11.1a scrub starts
Nov 24 09:30:02 compute-1 ceph-mon[80009]: 11.1a scrub ok
Nov 24 09:30:02 compute-1 ceph-mon[80009]: 10.14 scrub starts
Nov 24 09:30:02 compute-1 ceph-mon[80009]: 10.14 scrub ok
Nov 24 09:30:02 compute-1 ceph-mon[80009]: osdmap e89: 3 total, 3 up, 3 in
Nov 24 09:30:02 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:30:02 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:30:02 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.zlrxyg", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Nov 24 09:30:02 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.zlrxyg", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Nov 24 09:30:02 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 09:30:02 compute-1 ceph-mon[80009]: 12.7 scrub starts
Nov 24 09:30:02 compute-1 ceph-mon[80009]: 12.7 scrub ok
Nov 24 09:30:02 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 24 09:30:02 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 24 09:30:02 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e90 e90: 3 total, 3 up, 3 in
Nov 24 09:30:02 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e90 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 09:30:02 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 90 pg[9.1a( v 45'1130 (0'0,45'1130] local-lis/les=0/0 n=5 ec=54/39 lis/c=88/54 les/c/f=89/55/0 sis=90) [1] r=0 lpr=90 pi=[54,90)/1 luod=0'0 crt=45'1130 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:30:02 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 90 pg[9.1a( v 45'1130 (0'0,45'1130] local-lis/les=0/0 n=5 ec=54/39 lis/c=88/54 les/c/f=89/55/0 sis=90) [1] r=0 lpr=90 pi=[54,90)/1 crt=45'1130 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:30:02 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 90 pg[9.a( v 45'1130 (0'0,45'1130] local-lis/les=0/0 n=6 ec=54/39 lis/c=88/54 les/c/f=89/55/0 sis=90) [1] r=0 lpr=90 pi=[54,90)/1 luod=0'0 crt=45'1130 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:30:02 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 90 pg[9.a( v 45'1130 (0'0,45'1130] local-lis/les=0/0 n=6 ec=54/39 lis/c=88/54 les/c/f=89/55/0 sis=90) [1] r=0 lpr=90 pi=[54,90)/1 crt=45'1130 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:30:02 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:30:02 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcafc001cb0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:30:03 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:30:03 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcae8003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:30:03 compute-1 ceph-mon[80009]: Reconfiguring rgw.rgw.compute-0.zlrxyg (unknown last config time)...
Nov 24 09:30:03 compute-1 ceph-mon[80009]: Reconfiguring daemon rgw.rgw.compute-0.zlrxyg on compute-0
Nov 24 09:30:03 compute-1 ceph-mon[80009]: 12.6 deep-scrub starts
Nov 24 09:30:03 compute-1 ceph-mon[80009]: 12.6 deep-scrub ok
Nov 24 09:30:03 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:30:03 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:30:03 compute-1 ceph-mon[80009]: osdmap e90: 3 total, 3 up, 3 in
Nov 24 09:30:03 compute-1 ceph-mon[80009]: 10.3 scrub starts
Nov 24 09:30:03 compute-1 ceph-mon[80009]: 10.3 scrub ok
Nov 24 09:30:03 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:30:03 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcae40016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:30:03 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e91 e91: 3 total, 3 up, 3 in
Nov 24 09:30:03 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 91 pg[9.a( v 45'1130 (0'0,45'1130] local-lis/les=90/91 n=6 ec=54/39 lis/c=88/54 les/c/f=89/55/0 sis=90) [1] r=0 lpr=90 pi=[54,90)/1 crt=45'1130 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:30:03 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 91 pg[9.1a( v 45'1130 (0'0,45'1130] local-lis/les=90/91 n=5 ec=54/39 lis/c=88/54 les/c/f=89/55/0 sis=90) [1] r=0 lpr=90 pi=[54,90)/1 crt=45'1130 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:30:03 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:30:03 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:30:03 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:30:03.855 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:30:03 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 24 09:30:03 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 24 09:30:04 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:30:04 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:30:04 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:30:04.141 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:30:04 compute-1 ceph-osd[77497]: log_channel(cluster) log [DBG] : 9.a scrub starts
Nov 24 09:30:04 compute-1 ceph-osd[77497]: log_channel(cluster) log [DBG] : 9.a scrub ok
Nov 24 09:30:04 compute-1 ceph-mon[80009]: Reconfiguring node-exporter.compute-0 (unknown last config time)...
Nov 24 09:30:04 compute-1 ceph-mon[80009]: Reconfiguring daemon node-exporter.compute-0 on compute-0
Nov 24 09:30:04 compute-1 ceph-mon[80009]: pgmap v25: 353 pgs: 1 active+clean+scrubbing+deep, 2 remapped+peering, 350 active+clean; 456 KiB data, 143 MiB used, 60 GiB / 60 GiB avail; 59 B/s, 2 objects/s recovering
Nov 24 09:30:04 compute-1 ceph-mon[80009]: 12.10 scrub starts
Nov 24 09:30:04 compute-1 ceph-mon[80009]: 12.10 scrub ok
Nov 24 09:30:04 compute-1 ceph-mon[80009]: osdmap e91: 3 total, 3 up, 3 in
Nov 24 09:30:04 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:30:04 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:30:04 compute-1 ceph-mon[80009]: 8.6 scrub starts
Nov 24 09:30:04 compute-1 ceph-mon[80009]: 8.6 scrub ok
Nov 24 09:30:04 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:30:04 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcb1400a7e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:30:05 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:30:05 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcafc001cb0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:30:05 compute-1 ceph-osd[77497]: log_channel(cluster) log [DBG] : 9.1a scrub starts
Nov 24 09:30:05 compute-1 ceph-osd[77497]: log_channel(cluster) log [DBG] : 9.1a scrub ok
Nov 24 09:30:05 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 24 09:30:05 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 24 09:30:05 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:30:05 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcb08000df0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:30:05 compute-1 ceph-mon[80009]: Reconfiguring alertmanager.compute-0 (dependencies changed)...
Nov 24 09:30:05 compute-1 ceph-mon[80009]: Reconfiguring daemon alertmanager.compute-0 on compute-0
Nov 24 09:30:05 compute-1 ceph-mon[80009]: 9.a scrub starts
Nov 24 09:30:05 compute-1 ceph-mon[80009]: 9.a scrub ok
Nov 24 09:30:05 compute-1 ceph-mon[80009]: 10.15 deep-scrub starts
Nov 24 09:30:05 compute-1 ceph-mon[80009]: 10.15 deep-scrub ok
Nov 24 09:30:05 compute-1 ceph-mon[80009]: 12.18 scrub starts
Nov 24 09:30:05 compute-1 ceph-mon[80009]: 12.18 scrub ok
Nov 24 09:30:05 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:30:05 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:30:05 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:30:05 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 24 09:30:05 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:30:05.858 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 24 09:30:06 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:30:06 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:30:06 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:30:06.145 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:30:06 compute-1 ceph-mon[80009]: pgmap v27: 353 pgs: 1 active+clean+scrubbing+deep, 2 remapped+peering, 350 active+clean; 456 KiB data, 143 MiB used, 60 GiB / 60 GiB avail; 54 B/s, 2 objects/s recovering
Nov 24 09:30:06 compute-1 ceph-mon[80009]: 9.1a scrub starts
Nov 24 09:30:06 compute-1 ceph-mon[80009]: 9.1a scrub ok
Nov 24 09:30:06 compute-1 ceph-mon[80009]: Reconfiguring grafana.compute-0 (dependencies changed)...
Nov 24 09:30:06 compute-1 ceph-mon[80009]: Reconfiguring daemon grafana.compute-0 on compute-0
Nov 24 09:30:06 compute-1 ceph-mon[80009]: 12.a scrub starts
Nov 24 09:30:06 compute-1 ceph-mon[80009]: 12.a scrub ok
Nov 24 09:30:06 compute-1 ceph-mon[80009]: 9.d scrub starts
Nov 24 09:30:06 compute-1 ceph-mon[80009]: 9.d scrub ok
Nov 24 09:30:06 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:30:06 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcae40016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:30:07 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:30:07 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcb1400a7e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:30:07 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"} v 0)
Nov 24 09:30:07 compute-1 ceph-mon[80009]: log_channel(audit) log [INF] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]: dispatch
Nov 24 09:30:07 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:30:07 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcb1400a7e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:30:07 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e91 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 09:30:07 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:30:07 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:30:07 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:30:07.861 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:30:08 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e92 e92: 3 total, 3 up, 3 in
Nov 24 09:30:08 compute-1 ceph-mon[80009]: 9.c deep-scrub starts
Nov 24 09:30:08 compute-1 ceph-mon[80009]: 9.c deep-scrub ok
Nov 24 09:30:08 compute-1 ceph-mon[80009]: 9.13 scrub starts
Nov 24 09:30:08 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]: dispatch
Nov 24 09:30:08 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]: dispatch
Nov 24 09:30:08 compute-1 ceph-mon[80009]: 9.13 scrub ok
Nov 24 09:30:08 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:30:08 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:30:08 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:30:08.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:30:08 compute-1 ceph-osd[77497]: log_channel(cluster) log [DBG] : 8.12 scrub starts
Nov 24 09:30:08 compute-1 ceph-osd[77497]: log_channel(cluster) log [DBG] : 8.12 scrub ok
Nov 24 09:30:08 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 24 09:30:08 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 24 09:30:08 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0)
Nov 24 09:30:08 compute-1 ceph-mon[80009]: log_channel(audit) log [INF] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Nov 24 09:30:08 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 24 09:30:08 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 09:30:08 compute-1 sudo[88609]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 09:30:08 compute-1 sudo[88609]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:30:08 compute-1 sudo[88609]: pam_unix(sudo:session): session closed for user root
Nov 24 09:30:08 compute-1 sudo[88634]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 _orch deploy --fsid 84a084c3-61a7-5de7-8207-1f88efa59a64
Nov 24 09:30:08 compute-1 sudo[88634]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:30:08 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:30:08 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcb08001930 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:30:09 compute-1 ceph-mon[80009]: pgmap v28: 353 pgs: 353 active+clean; 457 KiB data, 143 MiB used, 60 GiB / 60 GiB avail
Nov 24 09:30:09 compute-1 ceph-mon[80009]: 9.0 scrub starts
Nov 24 09:30:09 compute-1 ceph-mon[80009]: 9.0 scrub ok
Nov 24 09:30:09 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]': finished
Nov 24 09:30:09 compute-1 ceph-mon[80009]: osdmap e92: 3 total, 3 up, 3 in
Nov 24 09:30:09 compute-1 ceph-mon[80009]: 9.9 scrub starts
Nov 24 09:30:09 compute-1 ceph-mon[80009]: 9.9 scrub ok
Nov 24 09:30:09 compute-1 ceph-mon[80009]: 8.12 scrub starts
Nov 24 09:30:09 compute-1 ceph-mon[80009]: 8.12 scrub ok
Nov 24 09:30:09 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:30:09 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:30:09 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Nov 24 09:30:09 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Nov 24 09:30:09 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 09:30:09 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:30:09 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcae40016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:30:09 compute-1 podman[88673]: 2025-11-24 09:30:09.239632268 +0000 UTC m=+0.038733711 container create 810a29e468c068dc53ed0dcb6d8db177b483bc9c1d39e5bd4dbc0de043f9b009 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_thompson, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 09:30:09 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"} v 0)
Nov 24 09:30:09 compute-1 ceph-mon[80009]: log_channel(audit) log [INF] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]: dispatch
Nov 24 09:30:09 compute-1 systemd[1]: Started libpod-conmon-810a29e468c068dc53ed0dcb6d8db177b483bc9c1d39e5bd4dbc0de043f9b009.scope.
Nov 24 09:30:09 compute-1 systemd[1]: Started libcrun container.
Nov 24 09:30:09 compute-1 podman[88673]: 2025-11-24 09:30:09.220881874 +0000 UTC m=+0.019983327 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 09:30:09 compute-1 podman[88673]: 2025-11-24 09:30:09.326188202 +0000 UTC m=+0.125289675 container init 810a29e468c068dc53ed0dcb6d8db177b483bc9c1d39e5bd4dbc0de043f9b009 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_thompson, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 09:30:09 compute-1 podman[88673]: 2025-11-24 09:30:09.332227217 +0000 UTC m=+0.131328660 container start 810a29e468c068dc53ed0dcb6d8db177b483bc9c1d39e5bd4dbc0de043f9b009 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_thompson, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 09:30:09 compute-1 podman[88673]: 2025-11-24 09:30:09.335369058 +0000 UTC m=+0.134470501 container attach 810a29e468c068dc53ed0dcb6d8db177b483bc9c1d39e5bd4dbc0de043f9b009 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_thompson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 24 09:30:09 compute-1 clever_thompson[88690]: 167 167
Nov 24 09:30:09 compute-1 systemd[1]: libpod-810a29e468c068dc53ed0dcb6d8db177b483bc9c1d39e5bd4dbc0de043f9b009.scope: Deactivated successfully.
Nov 24 09:30:09 compute-1 podman[88673]: 2025-11-24 09:30:09.349242116 +0000 UTC m=+0.148343559 container died 810a29e468c068dc53ed0dcb6d8db177b483bc9c1d39e5bd4dbc0de043f9b009 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_thompson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.40.1)
Nov 24 09:30:09 compute-1 systemd[1]: var-lib-containers-storage-overlay-071a74f7b29a3951ec7097108bf579a9545642a78f247652166c2d5c14206ba1-merged.mount: Deactivated successfully.
Nov 24 09:30:09 compute-1 podman[88673]: 2025-11-24 09:30:09.410098857 +0000 UTC m=+0.209200310 container remove 810a29e468c068dc53ed0dcb6d8db177b483bc9c1d39e5bd4dbc0de043f9b009 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_thompson, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 24 09:30:09 compute-1 ceph-osd[77497]: log_channel(cluster) log [DBG] : 8.19 deep-scrub starts
Nov 24 09:30:09 compute-1 systemd[1]: libpod-conmon-810a29e468c068dc53ed0dcb6d8db177b483bc9c1d39e5bd4dbc0de043f9b009.scope: Deactivated successfully.
Nov 24 09:30:09 compute-1 ceph-osd[77497]: log_channel(cluster) log [DBG] : 8.19 deep-scrub ok
Nov 24 09:30:09 compute-1 sudo[88634]: pam_unix(sudo:session): session closed for user root
Nov 24 09:30:09 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Nov 24 09:30:09 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Nov 24 09:30:09 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "auth get", "entity": "osd.1"} v 0)
Nov 24 09:30:09 compute-1 ceph-mon[80009]: log_channel(audit) log [INF] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
Nov 24 09:30:09 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 24 09:30:09 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 09:30:09 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:30:09 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcb1400a7e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:30:09 compute-1 sudo[88707]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 09:30:09 compute-1 sudo[88707]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:30:09 compute-1 sudo[88707]: pam_unix(sudo:session): session closed for user root
Nov 24 09:30:09 compute-1 sudo[88732]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 _orch deploy --fsid 84a084c3-61a7-5de7-8207-1f88efa59a64
Nov 24 09:30:09 compute-1 sudo[88732]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:30:09 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:30:09 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:30:09 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:30:09.864 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:30:09 compute-1 podman[88774]: 2025-11-24 09:30:09.933521777 +0000 UTC m=+0.055029922 container create a0f40af6dbc69dae8b3f149463d97a897a5bfea0a004958cad67ddf65a534638 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigorous_lichterman, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 24 09:30:09 compute-1 systemd[1]: Started libpod-conmon-a0f40af6dbc69dae8b3f149463d97a897a5bfea0a004958cad67ddf65a534638.scope.
Nov 24 09:30:09 compute-1 systemd[1]: Started libcrun container.
Nov 24 09:30:10 compute-1 podman[88774]: 2025-11-24 09:30:09.914892185 +0000 UTC m=+0.036400320 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 09:30:10 compute-1 podman[88774]: 2025-11-24 09:30:10.026112986 +0000 UTC m=+0.147621191 container init a0f40af6dbc69dae8b3f149463d97a897a5bfea0a004958cad67ddf65a534638 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigorous_lichterman, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 24 09:30:10 compute-1 podman[88774]: 2025-11-24 09:30:10.035540809 +0000 UTC m=+0.157048964 container start a0f40af6dbc69dae8b3f149463d97a897a5bfea0a004958cad67ddf65a534638 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigorous_lichterman, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 24 09:30:10 compute-1 vigorous_lichterman[88790]: 167 167
Nov 24 09:30:10 compute-1 systemd[1]: libpod-a0f40af6dbc69dae8b3f149463d97a897a5bfea0a004958cad67ddf65a534638.scope: Deactivated successfully.
Nov 24 09:30:10 compute-1 podman[88774]: 2025-11-24 09:30:10.041581785 +0000 UTC m=+0.163089980 container attach a0f40af6dbc69dae8b3f149463d97a897a5bfea0a004958cad67ddf65a534638 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigorous_lichterman, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 24 09:30:10 compute-1 podman[88774]: 2025-11-24 09:30:10.042136489 +0000 UTC m=+0.163644614 container died a0f40af6dbc69dae8b3f149463d97a897a5bfea0a004958cad67ddf65a534638 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigorous_lichterman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 09:30:10 compute-1 systemd[1]: var-lib-containers-storage-overlay-650615f4fc28d1f143e251d8662732aa776bd1b2c36a0113362c109be95cbaf1-merged.mount: Deactivated successfully.
Nov 24 09:30:10 compute-1 podman[88774]: 2025-11-24 09:30:10.131773383 +0000 UTC m=+0.253281498 container remove a0f40af6dbc69dae8b3f149463d97a897a5bfea0a004958cad67ddf65a534638 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigorous_lichterman, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 09:30:10 compute-1 systemd[1]: libpod-conmon-a0f40af6dbc69dae8b3f149463d97a897a5bfea0a004958cad67ddf65a534638.scope: Deactivated successfully.
Nov 24 09:30:10 compute-1 ceph-mon[80009]: 9.1 scrub starts
Nov 24 09:30:10 compute-1 ceph-mon[80009]: 9.1 scrub ok
Nov 24 09:30:10 compute-1 ceph-mon[80009]: Reconfiguring crash.compute-1 (monmap changed)...
Nov 24 09:30:10 compute-1 ceph-mon[80009]: Reconfiguring daemon crash.compute-1 on compute-1
Nov 24 09:30:10 compute-1 ceph-mon[80009]: 9.f scrub starts
Nov 24 09:30:10 compute-1 ceph-mon[80009]: 9.f scrub ok
Nov 24 09:30:10 compute-1 ceph-mon[80009]: pgmap v30: 353 pgs: 353 active+clean; 457 KiB data, 143 MiB used, 60 GiB / 60 GiB avail
Nov 24 09:30:10 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]: dispatch
Nov 24 09:30:10 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]: dispatch
Nov 24 09:30:10 compute-1 ceph-mon[80009]: 8.19 deep-scrub starts
Nov 24 09:30:10 compute-1 ceph-mon[80009]: 8.19 deep-scrub ok
Nov 24 09:30:10 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:30:10 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:30:10 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
Nov 24 09:30:10 compute-1 ceph-mon[80009]: Reconfiguring osd.1 (monmap changed)...
Nov 24 09:30:10 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 09:30:10 compute-1 ceph-mon[80009]: Reconfiguring daemon osd.1 on compute-1
Nov 24 09:30:10 compute-1 ceph-mon[80009]: 9.4 scrub starts
Nov 24 09:30:10 compute-1 ceph-mon[80009]: 9.4 scrub ok
Nov 24 09:30:10 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:30:10 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:30:10 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:30:10.150 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:30:10 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e93 e93: 3 total, 3 up, 3 in
Nov 24 09:30:10 compute-1 sudo[88732]: pam_unix(sudo:session): session closed for user root
Nov 24 09:30:10 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Nov 24 09:30:10 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Nov 24 09:30:10 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0)
Nov 24 09:30:10 compute-1 ceph-mon[80009]: log_channel(audit) log [INF] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Nov 24 09:30:10 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0)
Nov 24 09:30:10 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Nov 24 09:30:10 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 24 09:30:10 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 09:30:10 compute-1 ceph-osd[77497]: log_channel(cluster) log [DBG] : 9.e scrub starts
Nov 24 09:30:10 compute-1 ceph-osd[77497]: log_channel(cluster) log [DBG] : 9.e scrub ok
Nov 24 09:30:10 compute-1 sudo[88816]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 09:30:10 compute-1 sudo[88816]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:30:10 compute-1 sudo[88816]: pam_unix(sudo:session): session closed for user root
Nov 24 09:30:10 compute-1 sudo[88841]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 _orch deploy --fsid 84a084c3-61a7-5de7-8207-1f88efa59a64
Nov 24 09:30:10 compute-1 sudo[88841]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:30:10 compute-1 podman[88883]: 2025-11-24 09:30:10.759760691 +0000 UTC m=+0.036172005 container create 65d629c135d605f86361d63ab0b7424b7e54b6d77df33ce2028395405ef5fbd0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gifted_curie, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 24 09:30:10 compute-1 systemd[1]: Started libpod-conmon-65d629c135d605f86361d63ab0b7424b7e54b6d77df33ce2028395405ef5fbd0.scope.
Nov 24 09:30:10 compute-1 systemd[1]: Started libcrun container.
Nov 24 09:30:10 compute-1 podman[88883]: 2025-11-24 09:30:10.82910636 +0000 UTC m=+0.105517674 container init 65d629c135d605f86361d63ab0b7424b7e54b6d77df33ce2028395405ef5fbd0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gifted_curie, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 09:30:10 compute-1 podman[88883]: 2025-11-24 09:30:10.835110395 +0000 UTC m=+0.111521709 container start 65d629c135d605f86361d63ab0b7424b7e54b6d77df33ce2028395405ef5fbd0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gifted_curie, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 24 09:30:10 compute-1 gifted_curie[88899]: 167 167
Nov 24 09:30:10 compute-1 podman[88883]: 2025-11-24 09:30:10.838612986 +0000 UTC m=+0.115024310 container attach 65d629c135d605f86361d63ab0b7424b7e54b6d77df33ce2028395405ef5fbd0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gifted_curie, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 24 09:30:10 compute-1 systemd[1]: libpod-65d629c135d605f86361d63ab0b7424b7e54b6d77df33ce2028395405ef5fbd0.scope: Deactivated successfully.
Nov 24 09:30:10 compute-1 podman[88883]: 2025-11-24 09:30:10.839773166 +0000 UTC m=+0.116184510 container died 65d629c135d605f86361d63ab0b7424b7e54b6d77df33ce2028395405ef5fbd0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gifted_curie, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, OSD_FLAVOR=default)
Nov 24 09:30:10 compute-1 podman[88883]: 2025-11-24 09:30:10.744825365 +0000 UTC m=+0.021236709 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 09:30:10 compute-1 systemd[1]: var-lib-containers-storage-overlay-480b8e5e575b99323c93049fa49456f2ad259b10b3f094cc3dd84886096e0169-merged.mount: Deactivated successfully.
Nov 24 09:30:10 compute-1 podman[88883]: 2025-11-24 09:30:10.871829563 +0000 UTC m=+0.148240867 container remove 65d629c135d605f86361d63ab0b7424b7e54b6d77df33ce2028395405ef5fbd0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gifted_curie, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 09:30:10 compute-1 systemd[1]: libpod-conmon-65d629c135d605f86361d63ab0b7424b7e54b6d77df33ce2028395405ef5fbd0.scope: Deactivated successfully.
Nov 24 09:30:10 compute-1 sudo[88841]: pam_unix(sudo:session): session closed for user root
Nov 24 09:30:10 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Nov 24 09:30:10 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Nov 24 09:30:10 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0)
Nov 24 09:30:10 compute-1 ceph-mon[80009]: log_channel(audit) log [INF] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Nov 24 09:30:10 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0)
Nov 24 09:30:10 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Nov 24 09:30:10 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 24 09:30:10 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 09:30:10 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:30:10 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcafc001cb0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:30:11 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]': finished
Nov 24 09:30:11 compute-1 ceph-mon[80009]: osdmap e93: 3 total, 3 up, 3 in
Nov 24 09:30:11 compute-1 ceph-mon[80009]: 9.8 scrub starts
Nov 24 09:30:11 compute-1 ceph-mon[80009]: 9.8 scrub ok
Nov 24 09:30:11 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:30:11 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:30:11 compute-1 ceph-mon[80009]: Reconfiguring mon.compute-1 (monmap changed)...
Nov 24 09:30:11 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Nov 24 09:30:11 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Nov 24 09:30:11 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 09:30:11 compute-1 ceph-mon[80009]: Reconfiguring daemon mon.compute-1 on compute-1
Nov 24 09:30:11 compute-1 ceph-mon[80009]: 9.e scrub starts
Nov 24 09:30:11 compute-1 ceph-mon[80009]: 9.e scrub ok
Nov 24 09:30:11 compute-1 ceph-mon[80009]: 9.1c scrub starts
Nov 24 09:30:11 compute-1 ceph-mon[80009]: 9.1c scrub ok
Nov 24 09:30:11 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:30:11 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:30:11 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Nov 24 09:30:11 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Nov 24 09:30:11 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 09:30:11 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:30:11 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcb08001930 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:30:11 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"} v 0)
Nov 24 09:30:11 compute-1 ceph-mon[80009]: log_channel(audit) log [INF] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]: dispatch
Nov 24 09:30:11 compute-1 ceph-osd[77497]: log_channel(cluster) log [DBG] : 9.6 scrub starts
Nov 24 09:30:11 compute-1 ceph-osd[77497]: log_channel(cluster) log [DBG] : 9.6 scrub ok
Nov 24 09:30:11 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:30:11 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcae4002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:30:11 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Nov 24 09:30:11 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Nov 24 09:30:11 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-2.rzcnzg", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0)
Nov 24 09:30:11 compute-1 ceph-mon[80009]: log_channel(audit) log [INF] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-2.rzcnzg", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Nov 24 09:30:11 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr services"} v 0)
Nov 24 09:30:11 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 24 09:30:11 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 24 09:30:11 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 09:30:11 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:30:11 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:30:11 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:30:11.868 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:30:12 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:30:12 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 24 09:30:12 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:30:12.154 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 24 09:30:12 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e94 e94: 3 total, 3 up, 3 in
Nov 24 09:30:12 compute-1 ceph-mon[80009]: Reconfiguring mon.compute-2 (monmap changed)...
Nov 24 09:30:12 compute-1 ceph-mon[80009]: Reconfiguring daemon mon.compute-2 on compute-2
Nov 24 09:30:12 compute-1 ceph-mon[80009]: 9.b deep-scrub starts
Nov 24 09:30:12 compute-1 ceph-mon[80009]: 9.b deep-scrub ok
Nov 24 09:30:12 compute-1 ceph-mon[80009]: pgmap v32: 353 pgs: 353 active+clean; 457 KiB data, 143 MiB used, 60 GiB / 60 GiB avail
Nov 24 09:30:12 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]: dispatch
Nov 24 09:30:12 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]: dispatch
Nov 24 09:30:12 compute-1 ceph-mon[80009]: 9.6 scrub starts
Nov 24 09:30:12 compute-1 ceph-mon[80009]: 9.6 scrub ok
Nov 24 09:30:12 compute-1 ceph-mon[80009]: 9.12 scrub starts
Nov 24 09:30:12 compute-1 ceph-mon[80009]: 9.12 scrub ok
Nov 24 09:30:12 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:30:12 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:30:12 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-2.rzcnzg", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Nov 24 09:30:12 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-2.rzcnzg", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Nov 24 09:30:12 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 24 09:30:12 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 09:30:12 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 94 pg[9.d( empty local-lis/les=0/0 n=0 ec=54/39 lis/c=70/70 les/c/f=71/71/0 sis=94) [1] r=0 lpr=94 pi=[70,94)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:30:12 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 94 pg[9.1d( empty local-lis/les=0/0 n=0 ec=54/39 lis/c=70/70 les/c/f=71/71/0 sis=94) [1] r=0 lpr=94 pi=[70,94)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:30:12 compute-1 ceph-osd[77497]: log_channel(cluster) log [DBG] : 9.1e scrub starts
Nov 24 09:30:12 compute-1 ceph-osd[77497]: log_channel(cluster) log [DBG] : 9.1e scrub ok
Nov 24 09:30:12 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Nov 24 09:30:12 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Nov 24 09:30:12 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "dashboard get-alertmanager-api-host"} v 0)
Nov 24 09:30:12 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "dashboard get-alertmanager-api-host"}]: dispatch
Nov 24 09:30:12 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "dashboard get-grafana-api-url"} v 0)
Nov 24 09:30:12 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch
Nov 24 09:30:12 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "dashboard set-grafana-api-url", "value": "https://192.168.122.100:3000"} v 0)
Nov 24 09:30:12 compute-1 ceph-mon[80009]: log_channel(audit) log [INF] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "https://192.168.122.100:3000"}]: dispatch
Nov 24 09:30:12 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/GRAFANA_API_URL}] v 0)
Nov 24 09:30:12 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e94 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 09:30:12 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:30:12 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcb1400a7e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:30:13 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e95 e95: 3 total, 3 up, 3 in
Nov 24 09:30:13 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 95 pg[9.d( empty local-lis/les=0/0 n=0 ec=54/39 lis/c=70/70 les/c/f=71/71/0 sis=95) [1]/[2] r=-1 lpr=95 pi=[70,95)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:30:13 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 95 pg[9.d( empty local-lis/les=0/0 n=0 ec=54/39 lis/c=70/70 les/c/f=71/71/0 sis=95) [1]/[2] r=-1 lpr=95 pi=[70,95)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 24 09:30:13 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 95 pg[9.1d( empty local-lis/les=0/0 n=0 ec=54/39 lis/c=70/70 les/c/f=71/71/0 sis=95) [1]/[2] r=-1 lpr=95 pi=[70,95)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:30:13 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 95 pg[9.1d( empty local-lis/les=0/0 n=0 ec=54/39 lis/c=70/70 les/c/f=71/71/0 sis=95) [1]/[2] r=-1 lpr=95 pi=[70,95)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 24 09:30:13 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:30:13 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcafc001cb0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:30:13 compute-1 ceph-mon[80009]: Reconfiguring mgr.compute-2.rzcnzg (monmap changed)...
Nov 24 09:30:13 compute-1 ceph-mon[80009]: Reconfiguring daemon mgr.compute-2.rzcnzg on compute-2
Nov 24 09:30:13 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]': finished
Nov 24 09:30:13 compute-1 ceph-mon[80009]: 9.3 scrub starts
Nov 24 09:30:13 compute-1 ceph-mon[80009]: osdmap e94: 3 total, 3 up, 3 in
Nov 24 09:30:13 compute-1 ceph-mon[80009]: 9.3 scrub ok
Nov 24 09:30:13 compute-1 ceph-mon[80009]: 9.1e scrub starts
Nov 24 09:30:13 compute-1 ceph-mon[80009]: 9.1e scrub ok
Nov 24 09:30:13 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:30:13 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:30:13 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "dashboard get-alertmanager-api-host"}]: dispatch
Nov 24 09:30:13 compute-1 ceph-mon[80009]: from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard get-alertmanager-api-host"}]: dispatch
Nov 24 09:30:13 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch
Nov 24 09:30:13 compute-1 ceph-mon[80009]: from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch
Nov 24 09:30:13 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "https://192.168.122.100:3000"}]: dispatch
Nov 24 09:30:13 compute-1 ceph-mon[80009]: from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "https://192.168.122.100:3000"}]: dispatch
Nov 24 09:30:13 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:30:13 compute-1 ceph-mon[80009]: osdmap e95: 3 total, 3 up, 3 in
Nov 24 09:30:13 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"} v 0)
Nov 24 09:30:13 compute-1 ceph-mon[80009]: log_channel(audit) log [INF] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]: dispatch
Nov 24 09:30:13 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:30:13 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcb08001930 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:30:13 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:30:13 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:30:13 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:30:13.870 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:30:14 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e96 e96: 3 total, 3 up, 3 in
Nov 24 09:30:14 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:30:14 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:30:14 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:30:14.159 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:30:14 compute-1 ceph-mon[80009]: 9.7 scrub starts
Nov 24 09:30:14 compute-1 ceph-mon[80009]: 9.7 scrub ok
Nov 24 09:30:14 compute-1 ceph-mon[80009]: pgmap v35: 353 pgs: 353 active+clean; 457 KiB data, 143 MiB used, 60 GiB / 60 GiB avail
Nov 24 09:30:14 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]: dispatch
Nov 24 09:30:14 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]: dispatch
Nov 24 09:30:14 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]': finished
Nov 24 09:30:14 compute-1 ceph-mon[80009]: osdmap e96: 3 total, 3 up, 3 in
Nov 24 09:30:14 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:30:14 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcae4002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:30:15 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:30:15 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcb1400a7e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:30:15 compute-1 ceph-mon[80009]: 9.1b scrub starts
Nov 24 09:30:15 compute-1 ceph-mon[80009]: 9.1b scrub ok
Nov 24 09:30:15 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e97 e97: 3 total, 3 up, 3 in
Nov 24 09:30:15 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 97 pg[9.d( v 45'1130 (0'0,45'1130] local-lis/les=0/0 n=8 ec=54/39 lis/c=95/70 les/c/f=96/71/0 sis=97) [1] r=0 lpr=97 pi=[70,97)/1 luod=0'0 crt=45'1130 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:30:15 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 97 pg[9.d( v 45'1130 (0'0,45'1130] local-lis/les=0/0 n=8 ec=54/39 lis/c=95/70 les/c/f=96/71/0 sis=97) [1] r=0 lpr=97 pi=[70,97)/1 crt=45'1130 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:30:15 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 97 pg[9.1d( v 45'1130 (0'0,45'1130] local-lis/les=0/0 n=5 ec=54/39 lis/c=95/70 les/c/f=96/71/0 sis=97) [1] r=0 lpr=97 pi=[70,97)/1 luod=0'0 crt=45'1130 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:30:15 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 97 pg[9.1d( v 45'1130 (0'0,45'1130] local-lis/les=0/0 n=5 ec=54/39 lis/c=95/70 les/c/f=96/71/0 sis=97) [1] r=0 lpr=97 pi=[70,97)/1 crt=45'1130 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:30:15 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"} v 0)
Nov 24 09:30:15 compute-1 ceph-mon[80009]: log_channel(audit) log [INF] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]: dispatch
Nov 24 09:30:15 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 09:30:15 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:30:15 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:30:15 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcafc001cb0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:30:15 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:30:15 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:30:15 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:30:15.872 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:30:15 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 24 09:30:15 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 24 09:30:15 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 24 09:30:15 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 09:30:15 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Nov 24 09:30:15 compute-1 ceph-mon[80009]: log_channel(audit) log [INF] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 09:30:15 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Nov 24 09:30:16 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Nov 24 09:30:16 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Nov 24 09:30:16 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 09:30:16 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Nov 24 09:30:16 compute-1 ceph-mon[80009]: log_channel(audit) log [INF] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 09:30:16 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 24 09:30:16 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 09:30:16 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:30:16 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 24 09:30:16 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:30:16.163 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 24 09:30:16 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e98 e98: 3 total, 3 up, 3 in
Nov 24 09:30:16 compute-1 ceph-mon[80009]: 9.18 scrub starts
Nov 24 09:30:16 compute-1 ceph-mon[80009]: 9.18 scrub ok
Nov 24 09:30:16 compute-1 ceph-mon[80009]: osdmap e97: 3 total, 3 up, 3 in
Nov 24 09:30:16 compute-1 ceph-mon[80009]: pgmap v38: 353 pgs: 353 active+clean; 457 KiB data, 143 MiB used, 60 GiB / 60 GiB avail
Nov 24 09:30:16 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]: dispatch
Nov 24 09:30:16 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]: dispatch
Nov 24 09:30:16 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:30:16 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:30:16 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:30:16 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 09:30:16 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 09:30:16 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:30:16 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:30:16 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 09:30:16 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 09:30:16 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 09:30:16 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 98 pg[9.1f( empty local-lis/les=0/0 n=0 ec=54/39 lis/c=66/66 les/c/f=67/67/0 sis=98) [1] r=0 lpr=98 pi=[66,98)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:30:16 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 98 pg[9.f( empty local-lis/les=0/0 n=0 ec=54/39 lis/c=66/66 les/c/f=67/67/0 sis=98) [1] r=0 lpr=98 pi=[66,98)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:30:16 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 98 pg[9.1d( v 45'1130 (0'0,45'1130] local-lis/les=97/98 n=5 ec=54/39 lis/c=95/70 les/c/f=96/71/0 sis=97) [1] r=0 lpr=97 pi=[70,97)/1 crt=45'1130 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:30:16 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 98 pg[9.d( v 45'1130 (0'0,45'1130] local-lis/les=97/98 n=8 ec=54/39 lis/c=95/70 les/c/f=96/71/0 sis=97) [1] r=0 lpr=97 pi=[70,97)/1 crt=45'1130 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:30:16 compute-1 ceph-osd[77497]: log_channel(cluster) log [DBG] : 9.1d scrub starts
Nov 24 09:30:16 compute-1 ceph-osd[77497]: log_channel(cluster) log [DBG] : 9.1d scrub ok
Nov 24 09:30:16 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:30:16 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcb08002da0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:30:17 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:30:17 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcae4002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:30:17 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e99 e99: 3 total, 3 up, 3 in
Nov 24 09:30:17 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 99 pg[9.f( empty local-lis/les=0/0 n=0 ec=54/39 lis/c=66/66 les/c/f=67/67/0 sis=99) [1]/[2] r=-1 lpr=99 pi=[66,99)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:30:17 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 99 pg[9.f( empty local-lis/les=0/0 n=0 ec=54/39 lis/c=66/66 les/c/f=67/67/0 sis=99) [1]/[2] r=-1 lpr=99 pi=[66,99)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 24 09:30:17 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 99 pg[9.1f( empty local-lis/les=0/0 n=0 ec=54/39 lis/c=66/66 les/c/f=67/67/0 sis=99) [1]/[2] r=-1 lpr=99 pi=[66,99)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:30:17 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 99 pg[9.1f( empty local-lis/les=0/0 n=0 ec=54/39 lis/c=66/66 les/c/f=67/67/0 sis=99) [1]/[2] r=-1 lpr=99 pi=[66,99)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 24 09:30:17 compute-1 ceph-mon[80009]: 9.19 scrub starts
Nov 24 09:30:17 compute-1 ceph-mon[80009]: 9.19 scrub ok
Nov 24 09:30:17 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]': finished
Nov 24 09:30:17 compute-1 ceph-mon[80009]: osdmap e98: 3 total, 3 up, 3 in
Nov 24 09:30:17 compute-1 ceph-mon[80009]: 9.1d scrub starts
Nov 24 09:30:17 compute-1 ceph-mon[80009]: 9.1d scrub ok
Nov 24 09:30:17 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:30:17 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcb1400a7e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:30:17 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e99 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 09:30:17 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:30:17 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 24 09:30:17 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:30:17.875 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 24 09:30:18 compute-1 sshd-session[88919]: Accepted publickey for zuul from 192.168.122.30 port 46418 ssh2: ECDSA SHA256:MeSde0OmmlmFVnLWx/OKNxgeUUFhxUB3MA0eUyH5QEE
Nov 24 09:30:18 compute-1 systemd-logind[823]: New session 38 of user zuul.
Nov 24 09:30:18 compute-1 systemd[1]: Started Session 38 of User zuul.
Nov 24 09:30:18 compute-1 sshd-session[88919]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 24 09:30:18 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:30:18 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:30:18 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:30:18.167 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:30:18 compute-1 ceph-mon[80009]: 9.5 scrub starts
Nov 24 09:30:18 compute-1 ceph-mon[80009]: 9.5 scrub ok
Nov 24 09:30:18 compute-1 ceph-mon[80009]: pgmap v40: 353 pgs: 2 peering, 351 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 09:30:18 compute-1 ceph-mon[80009]: osdmap e99: 3 total, 3 up, 3 in
Nov 24 09:30:18 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e100 e100: 3 total, 3 up, 3 in
Nov 24 09:30:18 compute-1 python3.9[89072]: ansible-ansible.legacy.ping Invoked with data=pong
Nov 24 09:30:18 compute-1 sudo[89073]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 09:30:18 compute-1 sudo[89073]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:30:18 compute-1 sudo[89073]: pam_unix(sudo:session): session closed for user root
Nov 24 09:30:18 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:30:18 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcafc001cb0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:30:19 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:30:19 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcb08002da0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:30:19 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e101 e101: 3 total, 3 up, 3 in
Nov 24 09:30:19 compute-1 ceph-mon[80009]: osdmap e100: 3 total, 3 up, 3 in
Nov 24 09:30:19 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 101 pg[9.f( v 45'1130 (0'0,45'1130] local-lis/les=0/0 n=7 ec=54/39 lis/c=99/66 les/c/f=100/67/0 sis=101) [1] r=0 lpr=101 pi=[66,101)/1 luod=0'0 crt=45'1130 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:30:19 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 101 pg[9.f( v 45'1130 (0'0,45'1130] local-lis/les=0/0 n=7 ec=54/39 lis/c=99/66 les/c/f=100/67/0 sis=101) [1] r=0 lpr=101 pi=[66,101)/1 crt=45'1130 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:30:19 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 101 pg[9.1f( v 45'1130 (0'0,45'1130] local-lis/les=0/0 n=5 ec=54/39 lis/c=99/66 les/c/f=100/67/0 sis=101) [1] r=0 lpr=101 pi=[66,101)/1 luod=0'0 crt=45'1130 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:30:19 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 101 pg[9.1f( v 45'1130 (0'0,45'1130] local-lis/les=0/0 n=5 ec=54/39 lis/c=99/66 les/c/f=100/67/0 sis=101) [1] r=0 lpr=101 pi=[66,101)/1 crt=45'1130 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:30:19 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:30:19 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcae4003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:30:19 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:30:19 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:30:19 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:30:19.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:30:20 compute-1 python3.9[89272]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 24 09:30:20 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:30:20 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 24 09:30:20 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:30:20.169 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 24 09:30:20 compute-1 systemd[83435]: Starting Mark boot as successful...
Nov 24 09:30:20 compute-1 systemd[83435]: Finished Mark boot as successful.
Nov 24 09:30:20 compute-1 ceph-mon[80009]: pgmap v43: 353 pgs: 2 peering, 351 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 09:30:20 compute-1 ceph-mon[80009]: osdmap e101: 3 total, 3 up, 3 in
Nov 24 09:30:20 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e102 e102: 3 total, 3 up, 3 in
Nov 24 09:30:20 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 102 pg[9.f( v 45'1130 (0'0,45'1130] local-lis/les=101/102 n=7 ec=54/39 lis/c=99/66 les/c/f=100/67/0 sis=101) [1] r=0 lpr=101 pi=[66,101)/1 crt=45'1130 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:30:20 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 102 pg[9.1f( v 45'1130 (0'0,45'1130] local-lis/les=101/102 n=5 ec=54/39 lis/c=99/66 les/c/f=100/67/0 sis=101) [1] r=0 lpr=101 pi=[66,101)/1 crt=45'1130 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:30:20 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 24 09:30:20 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 24 09:30:20 compute-1 sudo[89302]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 24 09:30:20 compute-1 sudo[89302]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:30:20 compute-1 sudo[89302]: pam_unix(sudo:session): session closed for user root
Nov 24 09:30:20 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:30:20 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcb1400a7e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:30:21 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:30:21 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcafc001cb0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:30:21 compute-1 sudo[89452]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gaxyjqogqkrxsitchiutwqvparryabyj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976620.8072042-94-248606319644631/AnsiballZ_command.py'
Nov 24 09:30:21 compute-1 sudo[89452]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:30:21 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"} v 0)
Nov 24 09:30:21 compute-1 ceph-mon[80009]: log_channel(audit) log [INF] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]: dispatch
Nov 24 09:30:21 compute-1 ceph-osd[77497]: log_channel(cluster) log [DBG] : 9.1f deep-scrub starts
Nov 24 09:30:21 compute-1 ceph-osd[77497]: log_channel(cluster) log [DBG] : 9.1f deep-scrub ok
Nov 24 09:30:21 compute-1 ceph-mon[80009]: osdmap e102: 3 total, 3 up, 3 in
Nov 24 09:30:21 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:30:21 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:30:21 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]: dispatch
Nov 24 09:30:21 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]: dispatch
Nov 24 09:30:21 compute-1 python3.9[89454]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 09:30:21 compute-1 sudo[89452]: pam_unix(sudo:session): session closed for user root
Nov 24 09:30:21 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:30:21 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcb08002da0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:30:21 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e103 e103: 3 total, 3 up, 3 in
Nov 24 09:30:21 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 103 pg[9.10( empty local-lis/les=0/0 n=0 ec=54/39 lis/c=54/54 les/c/f=55/55/0 sis=103) [1] r=0 lpr=103 pi=[54,103)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:30:21 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:30:21 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:30:21 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:30:21.880 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:30:22 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:30:22 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:30:22 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:30:22.173 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:30:22 compute-1 sudo[89606]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-culbrqrwjsmwligvaiecmonxcoupumyy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976621.9199746-130-146868798628472/AnsiballZ_stat.py'
Nov 24 09:30:22 compute-1 sudo[89606]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:30:22 compute-1 ceph-mon[80009]: pgmap v46: 353 pgs: 353 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 0 B/s, 0 objects/s recovering
Nov 24 09:30:22 compute-1 ceph-mon[80009]: 9.1f deep-scrub starts
Nov 24 09:30:22 compute-1 ceph-mon[80009]: 9.1f deep-scrub ok
Nov 24 09:30:22 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]': finished
Nov 24 09:30:22 compute-1 ceph-mon[80009]: osdmap e103: 3 total, 3 up, 3 in
Nov 24 09:30:22 compute-1 python3.9[89608]: ansible-ansible.builtin.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 09:30:22 compute-1 sudo[89606]: pam_unix(sudo:session): session closed for user root
Nov 24 09:30:22 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e104 e104: 3 total, 3 up, 3 in
Nov 24 09:30:22 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 104 pg[9.10( empty local-lis/les=0/0 n=0 ec=54/39 lis/c=54/54 les/c/f=55/55/0 sis=104) [1]/[0] r=-1 lpr=104 pi=[54,104)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:30:22 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 104 pg[9.10( empty local-lis/les=0/0 n=0 ec=54/39 lis/c=54/54 les/c/f=55/55/0 sis=104) [1]/[0] r=-1 lpr=104 pi=[54,104)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 24 09:30:22 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e104 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 09:30:22 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:30:22 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcae4003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:30:23 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:30:23 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcb1400a7e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:30:23 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"} v 0)
Nov 24 09:30:23 compute-1 ceph-mon[80009]: log_channel(audit) log [INF] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]: dispatch
Nov 24 09:30:23 compute-1 sudo[89760]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qacumvacglynyizxzdtsywvtudgicpvc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976622.8651295-163-120872442273521/AnsiballZ_file.py'
Nov 24 09:30:23 compute-1 sudo[89760]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:30:23 compute-1 python3.9[89762]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/log/journal setype=var_log_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 09:30:23 compute-1 sudo[89760]: pam_unix(sudo:session): session closed for user root
Nov 24 09:30:23 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:30:23 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcb1400a7e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:30:23 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e105 e105: 3 total, 3 up, 3 in
Nov 24 09:30:23 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 105 pg[9.11( empty local-lis/les=0/0 n=0 ec=54/39 lis/c=54/54 les/c/f=55/55/0 sis=105) [1] r=0 lpr=105 pi=[54,105)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:30:23 compute-1 ceph-mon[80009]: osdmap e104: 3 total, 3 up, 3 in
Nov 24 09:30:23 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]: dispatch
Nov 24 09:30:23 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]: dispatch
Nov 24 09:30:23 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:30:23 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:30:23 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:30:23.883 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:30:24 compute-1 sudo[89913]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ogaurirwwzovctwyzdqfzykrlvvqxxnh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976623.813162-190-142619203133772/AnsiballZ_file.py'
Nov 24 09:30:24 compute-1 sudo[89913]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:30:24 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:30:24 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 24 09:30:24 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:30:24.176 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 24 09:30:24 compute-1 python3.9[89915]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/config-data/ansible-generated recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 09:30:24 compute-1 sudo[89913]: pam_unix(sudo:session): session closed for user root
Nov 24 09:30:24 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e106 e106: 3 total, 3 up, 3 in
Nov 24 09:30:24 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 106 pg[9.10( v 45'1130 (0'0,45'1130] local-lis/les=0/0 n=2 ec=54/39 lis/c=104/54 les/c/f=105/55/0 sis=106) [1] r=0 lpr=106 pi=[54,106)/1 luod=0'0 crt=45'1130 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:30:24 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 106 pg[9.11( empty local-lis/les=0/0 n=0 ec=54/39 lis/c=54/54 les/c/f=55/55/0 sis=106) [1]/[0] r=-1 lpr=106 pi=[54,106)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:30:24 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 106 pg[9.10( v 45'1130 (0'0,45'1130] local-lis/les=0/0 n=2 ec=54/39 lis/c=104/54 les/c/f=105/55/0 sis=106) [1] r=0 lpr=106 pi=[54,106)/1 crt=45'1130 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:30:24 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 106 pg[9.11( empty local-lis/les=0/0 n=0 ec=54/39 lis/c=54/54 les/c/f=55/55/0 sis=106) [1]/[0] r=-1 lpr=106 pi=[54,106)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 24 09:30:24 compute-1 ceph-mon[80009]: pgmap v49: 353 pgs: 353 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 09:30:24 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]': finished
Nov 24 09:30:24 compute-1 ceph-mon[80009]: osdmap e105: 3 total, 3 up, 3 in
Nov 24 09:30:24 compute-1 ceph-mon[80009]: osdmap e106: 3 total, 3 up, 3 in
Nov 24 09:30:24 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:30:24 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcafc001cb0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:30:25 compute-1 python3.9[90065]: ansible-ansible.builtin.service_facts Invoked
Nov 24 09:30:25 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:30:25 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcae4003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:30:25 compute-1 network[90082]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 24 09:30:25 compute-1 network[90083]: 'network-scripts' will be removed from distribution in near future.
Nov 24 09:30:25 compute-1 network[90084]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 24 09:30:25 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"} v 0)
Nov 24 09:30:25 compute-1 ceph-mon[80009]: log_channel(audit) log [INF] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]: dispatch
Nov 24 09:30:25 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:30:25 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcb08003ea0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:30:25 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e107 e107: 3 total, 3 up, 3 in
Nov 24 09:30:25 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 107 pg[9.10( v 45'1130 (0'0,45'1130] local-lis/les=106/107 n=2 ec=54/39 lis/c=104/54 les/c/f=105/55/0 sis=106) [1] r=0 lpr=106 pi=[54,106)/1 crt=45'1130 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:30:25 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 107 pg[9.12( empty local-lis/les=0/0 n=0 ec=54/39 lis/c=54/54 les/c/f=55/55/0 sis=107) [1] r=0 lpr=107 pi=[54,107)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:30:25 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]: dispatch
Nov 24 09:30:25 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]: dispatch
Nov 24 09:30:25 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]': finished
Nov 24 09:30:25 compute-1 ceph-mon[80009]: osdmap e107: 3 total, 3 up, 3 in
Nov 24 09:30:25 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:30:25 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 24 09:30:25 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:30:25.885 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 24 09:30:26 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:30:26 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:30:26 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:30:26.181 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:30:26 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e108 e108: 3 total, 3 up, 3 in
Nov 24 09:30:26 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 108 pg[9.11( v 45'1130 (0'0,45'1130] local-lis/les=0/0 n=5 ec=54/39 lis/c=106/54 les/c/f=107/55/0 sis=108) [1] r=0 lpr=108 pi=[54,108)/1 luod=0'0 crt=45'1130 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:30:26 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 108 pg[9.11( v 45'1130 (0'0,45'1130] local-lis/les=0/0 n=5 ec=54/39 lis/c=106/54 les/c/f=107/55/0 sis=108) [1] r=0 lpr=108 pi=[54,108)/1 crt=45'1130 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:30:26 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 108 pg[9.12( empty local-lis/les=0/0 n=0 ec=54/39 lis/c=54/54 les/c/f=55/55/0 sis=108) [1]/[0] r=-1 lpr=108 pi=[54,108)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:30:26 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 108 pg[9.12( empty local-lis/les=0/0 n=0 ec=54/39 lis/c=54/54 les/c/f=55/55/0 sis=108) [1]/[0] r=-1 lpr=108 pi=[54,108)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 24 09:30:26 compute-1 ceph-mon[80009]: pgmap v52: 353 pgs: 353 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 09:30:26 compute-1 ceph-mon[80009]: osdmap e108: 3 total, 3 up, 3 in
Nov 24 09:30:26 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:30:26 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcb1400a7e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:30:27 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:30:27 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcafc001cb0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:30:27 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:30:27 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcae4003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:30:27 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e109 e109: 3 total, 3 up, 3 in
Nov 24 09:30:27 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 109 pg[9.11( v 45'1130 (0'0,45'1130] local-lis/les=108/109 n=5 ec=54/39 lis/c=106/54 les/c/f=107/55/0 sis=108) [1] r=0 lpr=108 pi=[54,108)/1 crt=45'1130 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:30:27 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 09:30:27 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:30:27 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 24 09:30:27 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:30:27.887 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 24 09:30:28 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e110 e110: 3 total, 3 up, 3 in
Nov 24 09:30:28 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 110 pg[9.12( v 45'1130 (0'0,45'1130] local-lis/les=0/0 n=4 ec=54/39 lis/c=108/54 les/c/f=109/55/0 sis=110) [1] r=0 lpr=110 pi=[54,110)/1 luod=0'0 crt=45'1130 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:30:28 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 110 pg[9.12( v 45'1130 (0'0,45'1130] local-lis/les=0/0 n=4 ec=54/39 lis/c=108/54 les/c/f=109/55/0 sis=110) [1] r=0 lpr=110 pi=[54,110)/1 crt=45'1130 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:30:28 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:30:28 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:30:28 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:30:28.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:30:28 compute-1 ceph-mon[80009]: pgmap v55: 353 pgs: 1 unknown, 1 active+remapped, 351 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 09:30:28 compute-1 ceph-mon[80009]: osdmap e109: 3 total, 3 up, 3 in
Nov 24 09:30:28 compute-1 ceph-mon[80009]: osdmap e110: 3 total, 3 up, 3 in
Nov 24 09:30:28 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:30:28 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcb08003ea0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:30:29 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e111 e111: 3 total, 3 up, 3 in
Nov 24 09:30:29 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 111 pg[9.12( v 45'1130 (0'0,45'1130] local-lis/les=110/111 n=4 ec=54/39 lis/c=108/54 les/c/f=109/55/0 sis=110) [1] r=0 lpr=110 pi=[54,110)/1 crt=45'1130 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:30:29 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:30:29 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcb1400a7e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:30:29 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:30:29 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcafc001cb0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:30:29 compute-1 python3.9[90346]: ansible-ansible.builtin.lineinfile Invoked with line=cloud-init=disabled path=/proc/cmdline state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:30:29 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:30:29 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 24 09:30:29 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:30:29.890 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 24 09:30:30 compute-1 ceph-mon[80009]: osdmap e111: 3 total, 3 up, 3 in
Nov 24 09:30:30 compute-1 ceph-mon[80009]: pgmap v59: 353 pgs: 1 unknown, 1 active+remapped, 351 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 09:30:30 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:30:30 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 24 09:30:30 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:30:30.185 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 24 09:30:30 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 09:30:30 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:30:30 compute-1 python3.9[90497]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 24 09:30:31 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:30:31 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcae4003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:30:31 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:30:31 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:30:31 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcb08003ea0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:30:31 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"} v 0)
Nov 24 09:30:31 compute-1 ceph-mon[80009]: log_channel(audit) log [INF] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]: dispatch
Nov 24 09:30:31 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:30:31 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcaf0001fc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:30:31 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:30:31 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 24 09:30:31 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:30:31.893 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 24 09:30:31 compute-1 python3.9[90652]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 24 09:30:32 compute-1 ceph-mon[80009]: pgmap v60: 353 pgs: 353 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 445 B/s rd, 0 op/s; 23 B/s, 0 objects/s recovering
Nov 24 09:30:32 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]: dispatch
Nov 24 09:30:32 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]: dispatch
Nov 24 09:30:32 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e112 e112: 3 total, 3 up, 3 in
Nov 24 09:30:32 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:30:32 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:30:32 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:30:32.189 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:30:32 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e112 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 09:30:32 compute-1 sudo[90809]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cdzyvnxotexejfsrzkvclkhcndrpmaha ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976632.65687-334-248322141763878/AnsiballZ_setup.py'
Nov 24 09:30:32 compute-1 sudo[90809]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:30:33 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:30:33 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcafc001cb0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:30:33 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]': finished
Nov 24 09:30:33 compute-1 ceph-mon[80009]: osdmap e112: 3 total, 3 up, 3 in
Nov 24 09:30:33 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:30:33 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcafc001cb0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:30:33 compute-1 python3.9[90811]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 24 09:30:33 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"} v 0)
Nov 24 09:30:33 compute-1 ceph-mon[80009]: log_channel(audit) log [INF] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]: dispatch
Nov 24 09:30:33 compute-1 sudo[90809]: pam_unix(sudo:session): session closed for user root
Nov 24 09:30:33 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:30:33 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcb08003ea0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:30:33 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:30:33 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:30:33 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:30:33.897 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:30:33 compute-1 sudo[90894]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zepekisxefrfemrplmeqddgxgspyheqz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976632.65687-334-248322141763878/AnsiballZ_dnf.py'
Nov 24 09:30:33 compute-1 sudo[90894]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:30:34 compute-1 python3.9[90896]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 24 09:30:34 compute-1 ceph-mon[80009]: pgmap v62: 353 pgs: 353 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 366 B/s rd, 0 op/s; 19 B/s, 0 objects/s recovering
Nov 24 09:30:34 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]: dispatch
Nov 24 09:30:34 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]: dispatch
Nov 24 09:30:34 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:30:34 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:30:34 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:30:34.191 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:30:34 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e113 e113: 3 total, 3 up, 3 in
Nov 24 09:30:35 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:30:35 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcaf0001fc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:30:35 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]': finished
Nov 24 09:30:35 compute-1 ceph-mon[80009]: osdmap e113: 3 total, 3 up, 3 in
Nov 24 09:30:35 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:30:35 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcae4003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:30:35 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"} v 0)
Nov 24 09:30:35 compute-1 ceph-mon[80009]: log_channel(audit) log [INF] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]: dispatch
Nov 24 09:30:35 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:30:35 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcafc001cb0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:30:35 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:30:35 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:30:35 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:30:35.899 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:30:36 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:30:36 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 24 09:30:36 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:30:36.193 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 24 09:30:36 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e114 e114: 3 total, 3 up, 3 in
Nov 24 09:30:36 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 114 pg[9.15( empty local-lis/les=0/0 n=0 ec=54/39 lis/c=70/70 les/c/f=71/71/0 sis=114) [1] r=0 lpr=114 pi=[70,114)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:30:36 compute-1 ceph-mon[80009]: pgmap v64: 353 pgs: 353 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 330 B/s rd, 0 op/s; 17 B/s, 0 objects/s recovering
Nov 24 09:30:36 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]: dispatch
Nov 24 09:30:36 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]: dispatch
Nov 24 09:30:37 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:30:37 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcb08003ea0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:30:37 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:30:37 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcaf0001fc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:30:37 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e115 e115: 3 total, 3 up, 3 in
Nov 24 09:30:37 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 115 pg[9.15( empty local-lis/les=0/0 n=0 ec=54/39 lis/c=70/70 les/c/f=71/71/0 sis=115) [1]/[2] r=-1 lpr=115 pi=[70,115)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:30:37 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 115 pg[9.15( empty local-lis/les=0/0 n=0 ec=54/39 lis/c=70/70 les/c/f=71/71/0 sis=115) [1]/[2] r=-1 lpr=115 pi=[70,115)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 24 09:30:37 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]': finished
Nov 24 09:30:37 compute-1 ceph-mon[80009]: osdmap e114: 3 total, 3 up, 3 in
Nov 24 09:30:37 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:30:37 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcae8001090 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:30:37 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 09:30:37 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:30:37 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:30:37 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:30:37.902 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:30:38 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:30:38 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:30:38 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:30:38.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:30:38 compute-1 ceph-mon[80009]: osdmap e115: 3 total, 3 up, 3 in
Nov 24 09:30:38 compute-1 ceph-mon[80009]: pgmap v67: 353 pgs: 1 unknown, 352 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 199 B/s rd, 0 op/s
Nov 24 09:30:38 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e116 e116: 3 total, 3 up, 3 in
Nov 24 09:30:38 compute-1 sudo[90968]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 09:30:38 compute-1 sudo[90968]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:30:39 compute-1 sudo[90968]: pam_unix(sudo:session): session closed for user root
Nov 24 09:30:39 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:30:39 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcae4003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:30:39 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:30:39 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcb08003ea0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:30:39 compute-1 ceph-mon[80009]: osdmap e116: 3 total, 3 up, 3 in
Nov 24 09:30:39 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e117 e117: 3 total, 3 up, 3 in
Nov 24 09:30:39 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 117 pg[9.15( v 45'1130 (0'0,45'1130] local-lis/les=0/0 n=4 ec=54/39 lis/c=115/70 les/c/f=116/71/0 sis=117) [1] r=0 lpr=117 pi=[70,117)/1 luod=0'0 crt=45'1130 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:30:39 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 117 pg[9.15( v 45'1130 (0'0,45'1130] local-lis/les=0/0 n=4 ec=54/39 lis/c=115/70 les/c/f=116/71/0 sis=117) [1] r=0 lpr=117 pi=[70,117)/1 crt=45'1130 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:30:39 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:30:39 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcaf0002f00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:30:39 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:30:39 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 24 09:30:39 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:30:39.906 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 24 09:30:40 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:30:40 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 24 09:30:40 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:30:40.201 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 24 09:30:40 compute-1 ceph-mon[80009]: pgmap v69: 353 pgs: 1 unknown, 352 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 201 B/s rd, 0 op/s
Nov 24 09:30:40 compute-1 ceph-mon[80009]: osdmap e117: 3 total, 3 up, 3 in
Nov 24 09:30:40 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e118 e118: 3 total, 3 up, 3 in
Nov 24 09:30:40 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 118 pg[9.15( v 45'1130 (0'0,45'1130] local-lis/les=117/118 n=4 ec=54/39 lis/c=115/70 les/c/f=116/71/0 sis=117) [1] r=0 lpr=117 pi=[70,117)/1 crt=45'1130 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:30:41 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:30:41 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcae8001090 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:30:41 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:30:41 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcae4003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:30:41 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"} v 0)
Nov 24 09:30:41 compute-1 ceph-mon[80009]: log_channel(audit) log [INF] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]: dispatch
Nov 24 09:30:41 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e119 e119: 3 total, 3 up, 3 in
Nov 24 09:30:41 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:30:41 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcb08003ea0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:30:41 compute-1 ceph-mon[80009]: osdmap e118: 3 total, 3 up, 3 in
Nov 24 09:30:41 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]: dispatch
Nov 24 09:30:41 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]: dispatch
Nov 24 09:30:41 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:30:41 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:30:41 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:30:41.909 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:30:42 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:30:42 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 24 09:30:42 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:30:42.205 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 24 09:30:42 compute-1 ceph-mon[80009]: pgmap v72: 353 pgs: 353 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 504 B/s rd, 0 op/s; 54 B/s, 1 objects/s recovering
Nov 24 09:30:42 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]': finished
Nov 24 09:30:42 compute-1 ceph-mon[80009]: osdmap e119: 3 total, 3 up, 3 in
Nov 24 09:30:42 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 09:30:42 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 119 pg[9.16( v 45'1130 (0'0,45'1130] local-lis/les=75/76 n=4 ec=54/39 lis/c=75/75 les/c/f=76/76/0 sis=119 pruub=12.948238373s) [2] r=-1 lpr=119 pi=[75,119)/1 crt=45'1130 mlcod 0'0 active pruub 262.337310791s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:30:42 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 119 pg[9.16( v 45'1130 (0'0,45'1130] local-lis/les=75/76 n=4 ec=54/39 lis/c=75/75 les/c/f=76/76/0 sis=119 pruub=12.948196411s) [2] r=-1 lpr=119 pi=[75,119)/1 crt=45'1130 mlcod 0'0 unknown NOTIFY pruub 262.337310791s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 09:30:43 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:30:43 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcaf0002f00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:30:43 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e120 e120: 3 total, 3 up, 3 in
Nov 24 09:30:43 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 120 pg[9.16( v 45'1130 (0'0,45'1130] local-lis/les=75/76 n=4 ec=54/39 lis/c=75/75 les/c/f=76/76/0 sis=120) [2]/[1] r=0 lpr=120 pi=[75,120)/1 crt=45'1130 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:30:43 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 120 pg[9.16( v 45'1130 (0'0,45'1130] local-lis/les=75/76 n=4 ec=54/39 lis/c=75/75 les/c/f=76/76/0 sis=120) [2]/[1] r=0 lpr=120 pi=[75,120)/1 crt=45'1130 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 24 09:30:43 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:30:43 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcae8001ab0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:30:43 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"} v 0)
Nov 24 09:30:43 compute-1 ceph-mon[80009]: log_channel(audit) log [INF] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]: dispatch
Nov 24 09:30:43 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:30:43 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcae4003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:30:43 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:30:43 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 24 09:30:43 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:30:43.911 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 24 09:30:44 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:30:44 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:30:44 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:30:44.209 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:30:44 compute-1 ceph-mon[80009]: osdmap e120: 3 total, 3 up, 3 in
Nov 24 09:30:44 compute-1 ceph-mon[80009]: pgmap v75: 353 pgs: 353 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s; 54 B/s, 1 objects/s recovering
Nov 24 09:30:44 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]: dispatch
Nov 24 09:30:44 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]: dispatch
Nov 24 09:30:45 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:30:45 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcae4003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:30:45 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:30:45 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcadc000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:30:45 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"} v 0)
Nov 24 09:30:45 compute-1 ceph-mon[80009]: log_channel(audit) log [INF] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]: dispatch
Nov 24 09:30:45 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 09:30:45 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:30:45 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e121 e121: 3 total, 3 up, 3 in
Nov 24 09:30:45 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 121 pg[9.16( v 45'1130 (0'0,45'1130] local-lis/les=120/121 n=4 ec=54/39 lis/c=75/75 les/c/f=76/76/0 sis=120) [2]/[1] async=[2] r=0 lpr=120 pi=[75,120)/1 crt=45'1130 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:30:45 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:30:45 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcafc001cb0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:30:45 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:30:45 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:30:45 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:30:45.914 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:30:46 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:30:46 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:30:46 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:30:46.213 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:30:46 compute-1 ceph-mon[80009]: pgmap v76: 353 pgs: 353 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s; 36 B/s, 1 objects/s recovering
Nov 24 09:30:46 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]: dispatch
Nov 24 09:30:46 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:30:46 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]': finished
Nov 24 09:30:46 compute-1 ceph-mon[80009]: osdmap e121: 3 total, 3 up, 3 in
Nov 24 09:30:46 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]: dispatch
Nov 24 09:30:46 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e122 e122: 3 total, 3 up, 3 in
Nov 24 09:30:46 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 122 pg[9.16( v 45'1130 (0'0,45'1130] local-lis/les=120/121 n=4 ec=54/39 lis/c=120/75 les/c/f=121/76/0 sis=122 pruub=14.998853683s) [2] async=[2] r=-1 lpr=122 pi=[75,122)/1 crt=45'1130 mlcod 45'1130 active pruub 268.048889160s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:30:46 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 122 pg[9.16( v 45'1130 (0'0,45'1130] local-lis/les=120/121 n=4 ec=54/39 lis/c=120/75 les/c/f=121/76/0 sis=122 pruub=14.998687744s) [2] r=-1 lpr=122 pi=[75,122)/1 crt=45'1130 mlcod 0'0 unknown NOTIFY pruub 268.048889160s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 09:30:47 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:30:47 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcaf0002f00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:30:47 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:30:47 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcafc001cb0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:30:47 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"} v 0)
Nov 24 09:30:47 compute-1 ceph-mon[80009]: log_channel(audit) log [INF] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]: dispatch
Nov 24 09:30:47 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e123 e123: 3 total, 3 up, 3 in
Nov 24 09:30:47 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]': finished
Nov 24 09:30:47 compute-1 ceph-mon[80009]: osdmap e122: 3 total, 3 up, 3 in
Nov 24 09:30:47 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]: dispatch
Nov 24 09:30:47 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]: dispatch
Nov 24 09:30:47 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:30:47 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcae4003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:30:47 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:30:47 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:30:47 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 24 09:30:47 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:30:47.916 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 24 09:30:48 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:30:48 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 24 09:30:48 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:30:48.217 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 24 09:30:48 compute-1 ceph-mon[80009]: pgmap v79: 353 pgs: 1 active+remapped, 352 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 176 B/s rd, 0 op/s; 18 B/s, 0 objects/s recovering
Nov 24 09:30:48 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]': finished
Nov 24 09:30:48 compute-1 ceph-mon[80009]: osdmap e123: 3 total, 3 up, 3 in
Nov 24 09:30:49 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:30:49 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcadc0016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:30:49 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:30:49 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcaf0004000 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:30:49 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"} v 0)
Nov 24 09:30:49 compute-1 ceph-mon[80009]: log_channel(audit) log [INF] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]: dispatch
Nov 24 09:30:49 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]: dispatch
Nov 24 09:30:49 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]: dispatch
Nov 24 09:30:49 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:30:49 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcafc001cb0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:30:49 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e124 e124: 3 total, 3 up, 3 in
Nov 24 09:30:49 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:30:49 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 24 09:30:49 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:30:49.919 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 24 09:30:50 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:30:50 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:30:50 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:30:50.221 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:30:50 compute-1 ceph-mon[80009]: pgmap v81: 353 pgs: 1 active+remapped, 352 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s; 18 B/s, 0 objects/s recovering
Nov 24 09:30:50 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]': finished
Nov 24 09:30:50 compute-1 ceph-mon[80009]: osdmap e124: 3 total, 3 up, 3 in
Nov 24 09:30:50 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e125 e125: 3 total, 3 up, 3 in
Nov 24 09:30:51 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:30:51 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcadc0016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:30:51 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:30:51 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcae4003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:30:51 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:30:51 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcae4003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:30:51 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e126 e126: 3 total, 3 up, 3 in
Nov 24 09:30:51 compute-1 ceph-mon[80009]: osdmap e125: 3 total, 3 up, 3 in
Nov 24 09:30:51 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:30:51 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:30:51 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:30:51.923 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:30:52 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:30:52 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:30:52 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:30:52.225 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:30:52 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e127 e127: 3 total, 3 up, 3 in
Nov 24 09:30:52 compute-1 ceph-mon[80009]: pgmap v84: 353 pgs: 1 remapped+peering, 352 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 429 B/s rd, 0 op/s
Nov 24 09:30:52 compute-1 ceph-mon[80009]: osdmap e126: 3 total, 3 up, 3 in
Nov 24 09:30:52 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:30:53 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:30:53 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcafc001cb0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:30:53 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:30:53 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcadc0016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:30:53 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:30:53 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcaf0004000 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:30:53 compute-1 ceph-mon[80009]: osdmap e127: 3 total, 3 up, 3 in
Nov 24 09:30:53 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e128 e128: 3 total, 3 up, 3 in
Nov 24 09:30:53 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:30:53 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 09:30:53 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:30:53.926 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 09:30:54 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:30:54 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:30:54 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:30:54.228 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:30:54 compute-1 ceph-mon[80009]: pgmap v87: 353 pgs: 1 remapped+peering, 352 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 24 09:30:54 compute-1 ceph-mon[80009]: osdmap e128: 3 total, 3 up, 3 in
Nov 24 09:30:55 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:30:55 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcae4003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:30:55 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:30:55 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcafc001cb0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:30:55 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:30:55 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcadc0016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:30:55 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:30:55 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 09:30:55 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:30:55.929 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 09:30:56 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:30:56 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:30:56 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:30:56.231 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:30:56 compute-1 ceph-mon[80009]: pgmap v89: 353 pgs: 1 remapped+peering, 352 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 440 B/s rd, 0 op/s
Nov 24 09:30:57 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:30:57 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcaf0004000 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:30:57 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:30:57 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcae4003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:30:57 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"} v 0)
Nov 24 09:30:57 compute-1 ceph-mon[80009]: log_channel(audit) log [INF] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]: dispatch
Nov 24 09:30:57 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:30:57 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcafc001cb0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:30:57 compute-1 ceph-mon[80009]: pgmap v90: 353 pgs: 353 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s; 18 B/s, 1 objects/s recovering
Nov 24 09:30:57 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]: dispatch
Nov 24 09:30:57 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]: dispatch
Nov 24 09:30:57 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e129 e129: 3 total, 3 up, 3 in
Nov 24 09:30:57 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 129 pg[9.1a( v 45'1130 (0'0,45'1130] local-lis/les=90/91 n=4 ec=54/39 lis/c=90/90 les/c/f=91/91/0 sis=129 pruub=10.057298660s) [0] r=-1 lpr=129 pi=[90,129)/1 crt=45'1130 mlcod 0'0 active pruub 274.349548340s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:30:57 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 129 pg[9.1a( v 45'1130 (0'0,45'1130] local-lis/les=90/91 n=4 ec=54/39 lis/c=90/90 les/c/f=91/91/0 sis=129 pruub=10.057257652s) [0] r=-1 lpr=129 pi=[90,129)/1 crt=45'1130 mlcod 0'0 unknown NOTIFY pruub 274.349548340s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 09:30:57 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:30:57 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:30:57 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 09:30:57 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:30:57.932 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 09:30:58 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e130 e130: 3 total, 3 up, 3 in
Nov 24 09:30:58 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 130 pg[9.1a( v 45'1130 (0'0,45'1130] local-lis/les=90/91 n=4 ec=54/39 lis/c=90/90 les/c/f=91/91/0 sis=130) [0]/[1] r=0 lpr=130 pi=[90,130)/1 crt=45'1130 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:30:58 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 130 pg[9.1a( v 45'1130 (0'0,45'1130] local-lis/les=90/91 n=4 ec=54/39 lis/c=90/90 les/c/f=91/91/0 sis=130) [0]/[1] r=0 lpr=130 pi=[90,130)/1 crt=45'1130 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 24 09:30:58 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:30:58 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:30:58 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:30:58.234 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:30:58 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]': finished
Nov 24 09:30:58 compute-1 ceph-mon[80009]: osdmap e129: 3 total, 3 up, 3 in
Nov 24 09:30:58 compute-1 ceph-mon[80009]: osdmap e130: 3 total, 3 up, 3 in
Nov 24 09:30:59 compute-1 sudo[91073]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 09:30:59 compute-1 sudo[91073]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:30:59 compute-1 sudo[91073]: pam_unix(sudo:session): session closed for user root
Nov 24 09:30:59 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:30:59 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcadc002f00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:30:59 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:30:59 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcaf0004000 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:30:59 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e131 e131: 3 total, 3 up, 3 in
Nov 24 09:30:59 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"} v 0)
Nov 24 09:30:59 compute-1 ceph-mon[80009]: log_channel(audit) log [INF] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]: dispatch
Nov 24 09:30:59 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 131 pg[9.1a( v 45'1130 (0'0,45'1130] local-lis/les=130/131 n=4 ec=54/39 lis/c=90/90 les/c/f=91/91/0 sis=130) [0]/[1] async=[0] r=0 lpr=130 pi=[90,130)/1 crt=45'1130 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:30:59 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:30:59 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcae4003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:30:59 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:30:59 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:30:59 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:30:59.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:31:00 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:31:00 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:31:00 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:31:00.236 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:31:00 compute-1 ceph-mon[80009]: osdmap e131: 3 total, 3 up, 3 in
Nov 24 09:31:00 compute-1 ceph-mon[80009]: pgmap v94: 353 pgs: 353 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 183 B/s rd, 0 op/s; 19 B/s, 1 objects/s recovering
Nov 24 09:31:00 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]: dispatch
Nov 24 09:31:00 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]: dispatch
Nov 24 09:31:00 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e132 e132: 3 total, 3 up, 3 in
Nov 24 09:31:00 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 132 pg[9.1a( v 45'1130 (0'0,45'1130] local-lis/les=130/131 n=4 ec=54/39 lis/c=130/90 les/c/f=131/91/0 sis=132 pruub=15.353184700s) [0] async=[0] r=-1 lpr=132 pi=[90,132)/1 crt=45'1130 mlcod 45'1130 active pruub 282.139495850s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:31:00 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 132 pg[9.1a( v 45'1130 (0'0,45'1130] local-lis/les=130/131 n=4 ec=54/39 lis/c=130/90 les/c/f=131/91/0 sis=132 pruub=15.352953911s) [0] r=-1 lpr=132 pi=[90,132)/1 crt=45'1130 mlcod 0'0 unknown NOTIFY pruub 282.139495850s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 09:31:00 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 09:31:00 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:31:01 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:31:01 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcadc002f00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:31:01 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:31:01 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcafc003980 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:31:01 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e133 e133: 3 total, 3 up, 3 in
Nov 24 09:31:01 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]': finished
Nov 24 09:31:01 compute-1 ceph-mon[80009]: osdmap e132: 3 total, 3 up, 3 in
Nov 24 09:31:01 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:31:01 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:31:01 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcaf0004020 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:31:01 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:31:01 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 09:31:01 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:31:01.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 09:31:02 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:31:02 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:31:02 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:31:02.240 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:31:02 compute-1 ceph-mon[80009]: osdmap e133: 3 total, 3 up, 3 in
Nov 24 09:31:02 compute-1 ceph-mon[80009]: pgmap v97: 353 pgs: 1 remapped+peering, 1 peering, 351 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 579 B/s rd, 0 op/s; 31 B/s, 1 objects/s recovering
Nov 24 09:31:02 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e134 e134: 3 total, 3 up, 3 in
Nov 24 09:31:02 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:31:03 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:31:03 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcae4003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:31:03 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e135 e135: 3 total, 3 up, 3 in
Nov 24 09:31:03 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:31:03 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcadc002f00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:31:03 compute-1 ceph-mon[80009]: osdmap e134: 3 total, 3 up, 3 in
Nov 24 09:31:03 compute-1 ceph-mon[80009]: osdmap e135: 3 total, 3 up, 3 in
Nov 24 09:31:03 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:31:03 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcafc003980 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:31:03 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:31:03 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:31:03 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:31:03.941 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:31:04 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e136 e136: 3 total, 3 up, 3 in
Nov 24 09:31:04 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:31:04 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 09:31:04 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:31:04.244 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 09:31:04 compute-1 ceph-mon[80009]: pgmap v100: 353 pgs: 1 remapped+peering, 1 peering, 351 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s; 27 B/s, 0 objects/s recovering
Nov 24 09:31:04 compute-1 ceph-mon[80009]: osdmap e136: 3 total, 3 up, 3 in
Nov 24 09:31:05 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:31:05 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcaf0004020 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:31:05 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:31:05 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcae4003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:31:05 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:31:05 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcadc003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:31:05 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:31:05 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:31:05 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:31:05.944 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:31:06 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:31:06 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:31:06 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:31:06.247 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:31:06 compute-1 ceph-mon[80009]: pgmap v102: 353 pgs: 1 remapped+peering, 1 peering, 351 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 507 B/s rd, 0 op/s; 27 B/s, 0 objects/s recovering
Nov 24 09:31:07 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:31:07 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcafc003980 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:31:07 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:31:07 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcaf0004020 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:31:07 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"} v 0)
Nov 24 09:31:07 compute-1 ceph-mon[80009]: log_channel(audit) log [INF] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]: dispatch
Nov 24 09:31:07 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e137 e137: 3 total, 3 up, 3 in
Nov 24 09:31:07 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]: dispatch
Nov 24 09:31:07 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]: dispatch
Nov 24 09:31:07 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:31:07 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcae4003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:31:07 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:31:07 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:31:07 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:31:07 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:31:07.947 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:31:08 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:31:08 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:31:08 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:31:08.248 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:31:08 compute-1 ceph-mon[80009]: pgmap v103: 353 pgs: 353 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s; 18 B/s, 0 objects/s recovering
Nov 24 09:31:08 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]': finished
Nov 24 09:31:08 compute-1 ceph-mon[80009]: osdmap e137: 3 total, 3 up, 3 in
Nov 24 09:31:09 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:31:09 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcadc003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:31:09 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:31:09 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcafc003980 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:31:09 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"} v 0)
Nov 24 09:31:09 compute-1 ceph-mon[80009]: log_channel(audit) log [INF] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]: dispatch
Nov 24 09:31:09 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e138 e138: 3 total, 3 up, 3 in
Nov 24 09:31:09 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 138 pg[9.1d( v 45'1130 (0'0,45'1130] local-lis/les=97/98 n=5 ec=54/39 lis/c=97/97 les/c/f=98/98/0 sis=138 pruub=10.914296150s) [2] r=-1 lpr=138 pi=[97,138)/1 crt=45'1130 mlcod 0'0 active pruub 286.821411133s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:31:09 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 138 pg[9.1d( v 45'1130 (0'0,45'1130] local-lis/les=97/98 n=5 ec=54/39 lis/c=97/97 les/c/f=98/98/0 sis=138 pruub=10.914259911s) [2] r=-1 lpr=138 pi=[97,138)/1 crt=45'1130 mlcod 0'0 unknown NOTIFY pruub 286.821411133s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 09:31:09 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]: dispatch
Nov 24 09:31:09 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]: dispatch
Nov 24 09:31:09 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:31:09 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcaf0004020 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:31:09 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:31:09 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:31:09 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:31:09.948 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:31:10 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:31:10 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 09:31:10 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:31:10.252 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 09:31:10 compute-1 ceph-mon[80009]: pgmap v105: 353 pgs: 353 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 165 B/s rd, 0 op/s; 17 B/s, 0 objects/s recovering
Nov 24 09:31:10 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]': finished
Nov 24 09:31:10 compute-1 ceph-mon[80009]: osdmap e138: 3 total, 3 up, 3 in
Nov 24 09:31:10 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e139 e139: 3 total, 3 up, 3 in
Nov 24 09:31:10 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 139 pg[9.1d( v 45'1130 (0'0,45'1130] local-lis/les=97/98 n=5 ec=54/39 lis/c=97/97 les/c/f=98/98/0 sis=139) [2]/[1] r=0 lpr=139 pi=[97,139)/1 crt=45'1130 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:31:10 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 139 pg[9.1d( v 45'1130 (0'0,45'1130] local-lis/les=97/98 n=5 ec=54/39 lis/c=97/97 les/c/f=98/98/0 sis=139) [2]/[1] r=0 lpr=139 pi=[97,139)/1 crt=45'1130 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 24 09:31:11 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:31:11 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcae4003c30 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:31:11 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:31:11 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcae4003c30 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:31:11 compute-1 ceph-mon[80009]: osdmap e139: 3 total, 3 up, 3 in
Nov 24 09:31:11 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e140 e140: 3 total, 3 up, 3 in
Nov 24 09:31:11 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 140 pg[9.1d( v 45'1130 (0'0,45'1130] local-lis/les=139/140 n=5 ec=54/39 lis/c=97/97 les/c/f=98/98/0 sis=139) [2]/[1] async=[2] r=0 lpr=139 pi=[97,139)/1 crt=45'1130 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:31:11 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:31:11 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcadc003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:31:11 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:31:11 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:31:11 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:31:11.950 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:31:12 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:31:12 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:31:12 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:31:12.255 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:31:12 compute-1 ceph-mon[80009]: pgmap v108: 353 pgs: 1 remapped+peering, 352 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s; 18 B/s, 0 objects/s recovering
Nov 24 09:31:12 compute-1 ceph-mon[80009]: osdmap e140: 3 total, 3 up, 3 in
Nov 24 09:31:12 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e141 e141: 3 total, 3 up, 3 in
Nov 24 09:31:12 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 141 pg[9.1d( v 45'1130 (0'0,45'1130] local-lis/les=139/140 n=5 ec=54/39 lis/c=139/97 les/c/f=140/98/0 sis=141 pruub=14.936669350s) [2] async=[2] r=-1 lpr=141 pi=[97,141)/1 crt=45'1130 mlcod 45'1130 active pruub 293.978240967s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:31:12 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 141 pg[9.1d( v 45'1130 (0'0,45'1130] local-lis/les=139/140 n=5 ec=54/39 lis/c=139/97 les/c/f=140/98/0 sis=141 pruub=14.936123848s) [2] r=-1 lpr=141 pi=[97,141)/1 crt=45'1130 mlcod 0'0 unknown NOTIFY pruub 293.978240967s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 09:31:12 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:31:13 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:31:13 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcb14001320 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:31:13 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:31:13 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcafc003980 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:31:13 compute-1 ceph-mon[80009]: osdmap e141: 3 total, 3 up, 3 in
Nov 24 09:31:13 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e142 e142: 3 total, 3 up, 3 in
Nov 24 09:31:13 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:31:13 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcae4003c50 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:31:13 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:31:13 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:31:13 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:31:13.953 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:31:14 compute-1 sudo[90894]: pam_unix(sudo:session): session closed for user root
Nov 24 09:31:14 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:31:14 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:31:14 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:31:14.257 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:31:14 compute-1 ceph-mon[80009]: pgmap v111: 353 pgs: 1 remapped+peering, 352 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 24 09:31:14 compute-1 ceph-mon[80009]: osdmap e142: 3 total, 3 up, 3 in
Nov 24 09:31:15 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:31:15 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcadc003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:31:15 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:31:15 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcb14001320 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:31:15 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 09:31:15 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:31:15 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:31:15 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:31:15 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcafc003980 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:31:15 compute-1 sudo[91262]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ujepdczejanwtibmrfipgctipcwwiobj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976675.672212-370-173170142389027/AnsiballZ_command.py'
Nov 24 09:31:15 compute-1 sudo[91262]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:31:15 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:31:15 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:31:15 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:31:15.956 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:31:16 compute-1 python3.9[91264]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 09:31:16 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:31:16 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:31:16 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:31:16.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:31:16 compute-1 ceph-mon[80009]: pgmap v113: 353 pgs: 1 remapped+peering, 352 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 419 B/s rd, 0 op/s
Nov 24 09:31:16 compute-1 sudo[91262]: pam_unix(sudo:session): session closed for user root
Nov 24 09:31:17 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:31:17 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcae4003c50 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:31:17 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:31:17 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcadc003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:31:17 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"} v 0)
Nov 24 09:31:17 compute-1 ceph-mon[80009]: log_channel(audit) log [INF] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]: dispatch
Nov 24 09:31:17 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]: dispatch
Nov 24 09:31:17 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]: dispatch
Nov 24 09:31:17 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:31:17 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcb14001320 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:31:17 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e143 e143: 3 total, 3 up, 3 in
Nov 24 09:31:17 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 143 pg[9.1e( v 45'1130 (0'0,45'1130] local-lis/les=75/76 n=5 ec=54/39 lis/c=75/75 les/c/f=76/76/0 sis=143 pruub=10.135750771s) [0] r=-1 lpr=143 pi=[75,143)/1 crt=45'1130 mlcod 0'0 active pruub 294.338439941s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:31:17 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 143 pg[9.1e( v 45'1130 (0'0,45'1130] local-lis/les=75/76 n=5 ec=54/39 lis/c=75/75 les/c/f=76/76/0 sis=143 pruub=10.135714531s) [0] r=-1 lpr=143 pi=[75,143)/1 crt=45'1130 mlcod 0'0 unknown NOTIFY pruub 294.338439941s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 09:31:17 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:31:17 compute-1 sudo[91550]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ancxyeswnfrxzregrfwcuojpauczcevh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976677.3539-394-155966625886843/AnsiballZ_selinux.py'
Nov 24 09:31:17 compute-1 sudo[91550]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:31:17 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:31:17 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:31:17 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:31:17.960 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:31:18 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e144 e144: 3 total, 3 up, 3 in
Nov 24 09:31:18 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 144 pg[9.1e( v 45'1130 (0'0,45'1130] local-lis/les=75/76 n=5 ec=54/39 lis/c=75/75 les/c/f=76/76/0 sis=144) [0]/[1] r=0 lpr=144 pi=[75,144)/1 crt=45'1130 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:31:18 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 144 pg[9.1e( v 45'1130 (0'0,45'1130] local-lis/les=75/76 n=5 ec=54/39 lis/c=75/75 les/c/f=76/76/0 sis=144) [0]/[1] r=0 lpr=144 pi=[75,144)/1 crt=45'1130 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 24 09:31:18 compute-1 python3.9[91552]: ansible-ansible.posix.selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config update_kernel_param=False
Nov 24 09:31:18 compute-1 sudo[91550]: pam_unix(sudo:session): session closed for user root
Nov 24 09:31:18 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:31:18 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:31:18 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:31:18.265 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:31:18 compute-1 ceph-mon[80009]: pgmap v114: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 511 B/s wr, 0 op/s; 36 B/s, 0 objects/s recovering
Nov 24 09:31:18 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]': finished
Nov 24 09:31:18 compute-1 ceph-mon[80009]: osdmap e143: 3 total, 3 up, 3 in
Nov 24 09:31:18 compute-1 ceph-mon[80009]: mgrmap e32: compute-0.mauvni(active, since 92s), standbys: compute-2.rzcnzg, compute-1.qelqsg
Nov 24 09:31:18 compute-1 ceph-mon[80009]: osdmap e144: 3 total, 3 up, 3 in
Nov 24 09:31:19 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:31:19 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcafc003980 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:31:19 compute-1 sudo[91702]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yfwydjliirzausqtupbrcokkcidmudcx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976678.808648-427-19239402069509/AnsiballZ_command.py'
Nov 24 09:31:19 compute-1 sudo[91702]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:31:19 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e145 e145: 3 total, 3 up, 3 in
Nov 24 09:31:19 compute-1 sudo[91705]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 09:31:19 compute-1 sudo[91705]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:31:19 compute-1 sudo[91705]: pam_unix(sudo:session): session closed for user root
Nov 24 09:31:19 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:31:19 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcae4003c50 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:31:19 compute-1 python3.9[91704]: ansible-ansible.legacy.command Invoked with cmd=dd if=/dev/zero of=/swap count=1024 bs=1M creates=/swap _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None removes=None stdin=None
Nov 24 09:31:19 compute-1 sudo[91702]: pam_unix(sudo:session): session closed for user root
Nov 24 09:31:19 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"} v 0)
Nov 24 09:31:19 compute-1 ceph-mon[80009]: log_channel(audit) log [INF] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 24 09:31:19 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:31:19 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcadc003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:31:19 compute-1 sudo[91880]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mdiljfxqgtfujqhznwuhyjwesvkhwuba ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976679.5108519-451-149298633878888/AnsiballZ_file.py'
Nov 24 09:31:19 compute-1 sudo[91880]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:31:19 compute-1 python3.9[91882]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/swap recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:31:19 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 145 pg[9.1e( v 45'1130 (0'0,45'1130] local-lis/les=144/145 n=5 ec=54/39 lis/c=75/75 les/c/f=76/76/0 sis=144) [0]/[1] async=[0] r=0 lpr=144 pi=[75,144)/1 crt=45'1130 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:31:19 compute-1 sudo[91880]: pam_unix(sudo:session): session closed for user root
Nov 24 09:31:19 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:31:19 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:31:19 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:31:19.962 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:31:20 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e146 e146: 3 total, 3 up, 3 in
Nov 24 09:31:20 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 146 pg[9.1e( v 45'1130 (0'0,45'1130] local-lis/les=144/145 n=5 ec=54/39 lis/c=144/75 les/c/f=145/76/0 sis=146 pruub=15.777785301s) [0] async=[0] r=-1 lpr=146 pi=[75,146)/1 crt=45'1130 mlcod 45'1130 active pruub 302.440887451s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:31:20 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 146 pg[9.1f( v 45'1130 (0'0,45'1130] local-lis/les=101/102 n=5 ec=54/39 lis/c=101/101 les/c/f=102/102/0 sis=146 pruub=12.317090034s) [0] r=-1 lpr=146 pi=[101,146)/1 crt=45'1130 mlcod 0'0 active pruub 298.980194092s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:31:20 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 146 pg[9.1f( v 45'1130 (0'0,45'1130] local-lis/les=101/102 n=5 ec=54/39 lis/c=101/101 les/c/f=102/102/0 sis=146 pruub=12.317038536s) [0] r=-1 lpr=146 pi=[101,146)/1 crt=45'1130 mlcod 0'0 unknown NOTIFY pruub 298.980194092s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 09:31:20 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 146 pg[9.1e( v 45'1130 (0'0,45'1130] local-lis/les=144/145 n=5 ec=54/39 lis/c=144/75 les/c/f=145/76/0 sis=146 pruub=15.777721405s) [0] r=-1 lpr=146 pi=[75,146)/1 crt=45'1130 mlcod 0'0 unknown NOTIFY pruub 302.440887451s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 09:31:20 compute-1 ceph-mon[80009]: osdmap e145: 3 total, 3 up, 3 in
Nov 24 09:31:20 compute-1 ceph-mon[80009]: pgmap v118: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 177 B/s rd, 532 B/s wr, 0 op/s; 38 B/s, 0 objects/s recovering
Nov 24 09:31:20 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 24 09:31:20 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 24 09:31:20 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:31:20 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:31:20 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:31:20.269 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:31:20 compute-1 sudo[92033]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mmidmdpdmqrgffglgxlnunqhpxthsfjv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976680.3086424-475-245410968796928/AnsiballZ_mount.py'
Nov 24 09:31:20 compute-1 sudo[92033]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:31:20 compute-1 python3.9[92035]: ansible-ansible.posix.mount Invoked with dump=0 fstype=swap name=none opts=sw passno=0 src=/swap state=present path=none boot=True opts_no_log=False backup=False fstab=None
Nov 24 09:31:20 compute-1 sudo[92033]: pam_unix(sudo:session): session closed for user root
Nov 24 09:31:21 compute-1 sudo[92037]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 09:31:21 compute-1 sudo[92037]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:31:21 compute-1 sudo[92037]: pam_unix(sudo:session): session closed for user root
Nov 24 09:31:21 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:31:21 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcb14001320 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:31:21 compute-1 sudo[92085]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Nov 24 09:31:21 compute-1 sudo[92085]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:31:21 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e147 e147: 3 total, 3 up, 3 in
Nov 24 09:31:21 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 147 pg[9.1f( v 45'1130 (0'0,45'1130] local-lis/les=101/102 n=5 ec=54/39 lis/c=101/101 les/c/f=102/102/0 sis=147) [0]/[1] r=0 lpr=147 pi=[101,147)/1 crt=45'1130 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:31:21 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 147 pg[9.1f( v 45'1130 (0'0,45'1130] local-lis/les=101/102 n=5 ec=54/39 lis/c=101/101 les/c/f=102/102/0 sis=147) [0]/[1] r=0 lpr=147 pi=[101,147)/1 crt=45'1130 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 24 09:31:21 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 24 09:31:21 compute-1 ceph-mon[80009]: osdmap e146: 3 total, 3 up, 3 in
Nov 24 09:31:21 compute-1 ceph-mon[80009]: osdmap e147: 3 total, 3 up, 3 in
Nov 24 09:31:21 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:31:21 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcafc003980 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:31:21 compute-1 sudo[92085]: pam_unix(sudo:session): session closed for user root
Nov 24 09:31:21 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:31:21 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcae4003c50 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:31:21 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:31:21 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:31:21 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:31:21.964 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:31:22 compute-1 sudo[92266]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-djopbcrwzcgmkrjekberaufucbqpkdrn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976681.90125-559-263331577166173/AnsiballZ_file.py'
Nov 24 09:31:22 compute-1 sudo[92266]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:31:22 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e148 e148: 3 total, 3 up, 3 in
Nov 24 09:31:22 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 148 pg[9.1f( v 45'1130 (0'0,45'1130] local-lis/les=147/148 n=5 ec=54/39 lis/c=101/101 les/c/f=102/102/0 sis=147) [0]/[1] async=[0] r=0 lpr=147 pi=[101,147)/1 crt=45'1130 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:31:22 compute-1 ceph-mon[80009]: pgmap v121: 353 pgs: 1 remapped+peering, 352 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 564 B/s rd, 0 op/s; 0 B/s, 1 objects/s recovering
Nov 24 09:31:22 compute-1 ceph-mon[80009]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #19. Immutable memtables: 0.
Nov 24 09:31:22 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:31:22.226428) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 24 09:31:22 compute-1 ceph-mon[80009]: rocksdb: [db/flush_job.cc:856] [default] [JOB 7] Flushing memtable with next log file: 19
Nov 24 09:31:22 compute-1 ceph-mon[80009]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763976682226461, "job": 7, "event": "flush_started", "num_memtables": 1, "num_entries": 3118, "num_deletes": 251, "total_data_size": 10768040, "memory_usage": 11114304, "flush_reason": "Manual Compaction"}
Nov 24 09:31:22 compute-1 ceph-mon[80009]: rocksdb: [db/flush_job.cc:885] [default] [JOB 7] Level-0 flush table #20: started
Nov 24 09:31:22 compute-1 ceph-mon[80009]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763976682249513, "cf_name": "default", "job": 7, "event": "table_file_creation", "file_number": 20, "file_size": 6760016, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 7636, "largest_seqno": 10749, "table_properties": {"data_size": 6746169, "index_size": 8997, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 3781, "raw_key_size": 33429, "raw_average_key_size": 22, "raw_value_size": 6716526, "raw_average_value_size": 4474, "num_data_blocks": 390, "num_entries": 1501, "num_filter_entries": 1501, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763976580, "oldest_key_time": 1763976580, "file_creation_time": 1763976682, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "299c38d0-06ca-4074-b462-97cee3c14bc3", "db_session_id": "IKBI0BILOO7CZC90TSBP", "orig_file_number": 20, "seqno_to_time_mapping": "N/A"}}
Nov 24 09:31:22 compute-1 ceph-mon[80009]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 7] Flush lasted 23123 microseconds, and 10391 cpu microseconds.
Nov 24 09:31:22 compute-1 ceph-mon[80009]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 24 09:31:22 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:31:22.249549) [db/flush_job.cc:967] [default] [JOB 7] Level-0 flush table #20: 6760016 bytes OK
Nov 24 09:31:22 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:31:22.249568) [db/memtable_list.cc:519] [default] Level-0 commit table #20 started
Nov 24 09:31:22 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:31:22.251344) [db/memtable_list.cc:722] [default] Level-0 commit table #20: memtable #1 done
Nov 24 09:31:22 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:31:22.251361) EVENT_LOG_v1 {"time_micros": 1763976682251357, "job": 7, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 24 09:31:22 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:31:22.251380) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 24 09:31:22 compute-1 ceph-mon[80009]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 7] Try to delete WAL files size 10753031, prev total WAL file size 10753031, number of live WAL files 2.
Nov 24 09:31:22 compute-1 ceph-mon[80009]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000016.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 09:31:22 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:31:22.253336) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F7300323531' seq:72057594037927935, type:22 .. '7061786F7300353033' seq:0, type:0; will stop at (end)
Nov 24 09:31:22 compute-1 ceph-mon[80009]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 8] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 24 09:31:22 compute-1 ceph-mon[80009]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 7 Base level 0, inputs: [20(6601KB)], [18(10MB)]
Nov 24 09:31:22 compute-1 ceph-mon[80009]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763976682253364, "job": 8, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [20], "files_L6": [18], "score": -1, "input_data_size": 18193153, "oldest_snapshot_seqno": -1}
Nov 24 09:31:22 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:31:22 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:31:22 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:31:22.271 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:31:22 compute-1 python3.9[92268]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/ca-trust/source/anchors setype=cert_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 09:31:22 compute-1 ceph-mon[80009]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 8] Generated table #21: 4096 keys, 14280326 bytes, temperature: kUnknown
Nov 24 09:31:22 compute-1 ceph-mon[80009]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763976682319909, "cf_name": "default", "job": 8, "event": "table_file_creation", "file_number": 21, "file_size": 14280326, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 14247589, "index_size": 21427, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10245, "raw_key_size": 104614, "raw_average_key_size": 25, "raw_value_size": 14167170, "raw_average_value_size": 3458, "num_data_blocks": 918, "num_entries": 4096, "num_filter_entries": 4096, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763976422, "oldest_key_time": 0, "file_creation_time": 1763976682, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "299c38d0-06ca-4074-b462-97cee3c14bc3", "db_session_id": "IKBI0BILOO7CZC90TSBP", "orig_file_number": 21, "seqno_to_time_mapping": "N/A"}}
Nov 24 09:31:22 compute-1 ceph-mon[80009]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 24 09:31:22 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:31:22.320132) [db/compaction/compaction_job.cc:1663] [default] [JOB 8] Compacted 1@0 + 1@6 files to L6 => 14280326 bytes
Nov 24 09:31:22 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:31:22.321474) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 273.1 rd, 214.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(6.4, 10.9 +0.0 blob) out(13.6 +0.0 blob), read-write-amplify(4.8) write-amplify(2.1) OK, records in: 4630, records dropped: 534 output_compression: NoCompression
Nov 24 09:31:22 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:31:22.321492) EVENT_LOG_v1 {"time_micros": 1763976682321483, "job": 8, "event": "compaction_finished", "compaction_time_micros": 66624, "compaction_time_cpu_micros": 28213, "output_level": 6, "num_output_files": 1, "total_output_size": 14280326, "num_input_records": 4630, "num_output_records": 4096, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 24 09:31:22 compute-1 ceph-mon[80009]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000020.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 09:31:22 compute-1 ceph-mon[80009]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763976682322596, "job": 8, "event": "table_file_deletion", "file_number": 20}
Nov 24 09:31:22 compute-1 ceph-mon[80009]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000018.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 09:31:22 compute-1 ceph-mon[80009]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763976682324535, "job": 8, "event": "table_file_deletion", "file_number": 18}
Nov 24 09:31:22 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:31:22.253260) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 09:31:22 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:31:22.324592) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 09:31:22 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:31:22.324597) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 09:31:22 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:31:22.324599) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 09:31:22 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:31:22.324600) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 09:31:22 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:31:22.324602) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 09:31:22 compute-1 sudo[92266]: pam_unix(sudo:session): session closed for user root
Nov 24 09:31:22 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:31:22 compute-1 sudo[92418]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xbkujpczanycpbbxgzekgbwgfzwnrmbz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976682.6975079-583-181330048355733/AnsiballZ_stat.py'
Nov 24 09:31:22 compute-1 sudo[92418]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:31:23 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:31:23 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcadc003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:31:23 compute-1 python3.9[92420]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 09:31:23 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e149 e149: 3 total, 3 up, 3 in
Nov 24 09:31:23 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 149 pg[9.1f( v 45'1130 (0'0,45'1130] local-lis/les=147/148 n=5 ec=54/39 lis/c=147/101 les/c/f=148/102/0 sis=149 pruub=15.038232803s) [0] async=[0] r=-1 lpr=149 pi=[101,149)/1 crt=45'1130 mlcod 45'1130 active pruub 304.703369141s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:31:23 compute-1 ceph-osd[77497]: osd.1 pg_epoch: 149 pg[9.1f( v 45'1130 (0'0,45'1130] local-lis/les=147/148 n=5 ec=54/39 lis/c=147/101 les/c/f=148/102/0 sis=149 pruub=15.037920952s) [0] r=-1 lpr=149 pi=[101,149)/1 crt=45'1130 mlcod 0'0 unknown NOTIFY pruub 304.703369141s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 09:31:23 compute-1 sudo[92418]: pam_unix(sudo:session): session closed for user root
Nov 24 09:31:23 compute-1 ceph-mon[80009]: osdmap e148: 3 total, 3 up, 3 in
Nov 24 09:31:23 compute-1 ceph-mon[80009]: osdmap e149: 3 total, 3 up, 3 in
Nov 24 09:31:23 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:31:23 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcb14009b90 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:31:23 compute-1 sudo[92496]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zkbnctvjzhbrsfxxgkvxyufxfybpscdc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976682.6975079-583-181330048355733/AnsiballZ_file.py'
Nov 24 09:31:23 compute-1 sudo[92496]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:31:23 compute-1 python3.9[92498]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem _original_basename=tls-ca-bundle.pem recurse=False state=file path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:31:23 compute-1 sudo[92496]: pam_unix(sudo:session): session closed for user root
Nov 24 09:31:23 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:31:23 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcafc003980 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:31:23 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:31:23 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:31:23 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:31:23.966 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:31:24 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 e150: 3 total, 3 up, 3 in
Nov 24 09:31:24 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 24 09:31:24 compute-1 ceph-mon[80009]: pgmap v124: 353 pgs: 1 remapped+peering, 352 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s; 0 B/s, 1 objects/s recovering
Nov 24 09:31:24 compute-1 ceph-mon[80009]: osdmap e150: 3 total, 3 up, 3 in
Nov 24 09:31:24 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 24 09:31:24 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:31:24 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:31:24 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:31:24.273 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:31:24 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 24 09:31:24 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 09:31:24 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Nov 24 09:31:24 compute-1 ceph-mon[80009]: log_channel(audit) log [INF] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 09:31:24 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Nov 24 09:31:24 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Nov 24 09:31:24 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Nov 24 09:31:24 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 09:31:24 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Nov 24 09:31:24 compute-1 ceph-mon[80009]: log_channel(audit) log [INF] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 09:31:24 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 24 09:31:24 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 09:31:25 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:31:25 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcae4003c70 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:31:25 compute-1 sudo[92649]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pnxrdhrxuhqyldymqqlbfzcnqgbegtkd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976684.9058225-646-10996682677994/AnsiballZ_stat.py'
Nov 24 09:31:25 compute-1 sudo[92649]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:31:25 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:31:25 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:31:25 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 09:31:25 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 09:31:25 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:31:25 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:31:25 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 09:31:25 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 09:31:25 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 09:31:25 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:31:25 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcadc003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:31:25 compute-1 python3.9[92651]: ansible-ansible.builtin.stat Invoked with path=/etc/lvm/devices/system.devices follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 09:31:25 compute-1 sudo[92649]: pam_unix(sudo:session): session closed for user root
Nov 24 09:31:25 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:31:25 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcb14009b90 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:31:25 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:31:25 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:31:25 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:31:25.970 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:31:26 compute-1 ceph-mon[80009]: pgmap v126: 353 pgs: 1 remapped+peering, 352 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 491 B/s rd, 0 op/s; 0 B/s, 1 objects/s recovering
Nov 24 09:31:26 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:31:26 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:31:26 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:31:26.277 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:31:26 compute-1 sudo[92804]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-otrpqwolcfrdjwxaulngqlrlixskbyfa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976686.1392412-685-152613697400862/AnsiballZ_getent.py'
Nov 24 09:31:26 compute-1 sudo[92804]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:31:26 compute-1 python3.9[92806]: ansible-ansible.builtin.getent Invoked with database=passwd key=qemu fail_key=True service=None split=None
Nov 24 09:31:26 compute-1 sudo[92804]: pam_unix(sudo:session): session closed for user root
Nov 24 09:31:27 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:31:27 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcafc003980 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:31:27 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:31:27 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcae4003c90 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:31:27 compute-1 sudo[92957]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hioedborbcanvigcoqqzhjkpuaywzwtb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976687.178594-715-277931940949589/AnsiballZ_getent.py'
Nov 24 09:31:27 compute-1 sudo[92957]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:31:27 compute-1 python3.9[92959]: ansible-ansible.builtin.getent Invoked with database=passwd key=hugetlbfs fail_key=True service=None split=None
Nov 24 09:31:27 compute-1 sudo[92957]: pam_unix(sudo:session): session closed for user root
Nov 24 09:31:27 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:31:27 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcadc003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:31:27 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:31:27 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:31:27 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:31:27 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:31:27.973 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:31:28 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:31:28 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 09:31:28 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:31:28.280 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 09:31:28 compute-1 ceph-mon[80009]: pgmap v127: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s; 18 B/s, 0 objects/s recovering
Nov 24 09:31:28 compute-1 sudo[93111]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-faznnrwbaddudknhadjmsadcrbzuwkdc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976688.0494287-739-99231947023307/AnsiballZ_group.py'
Nov 24 09:31:28 compute-1 sudo[93111]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:31:28 compute-1 python3.9[93113]: ansible-ansible.builtin.group Invoked with gid=42477 name=hugetlbfs state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Nov 24 09:31:28 compute-1 sudo[93111]: pam_unix(sudo:session): session closed for user root
Nov 24 09:31:29 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:31:29 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcb14009b90 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:31:29 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:31:29 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcafc003980 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:31:29 compute-1 sudo[93263]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eecdppsgpkwkgomvbszvdihstcwmejom ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976689.0966973-766-17395729307905/AnsiballZ_file.py'
Nov 24 09:31:29 compute-1 sudo[93263]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:31:29 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 24 09:31:29 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 24 09:31:29 compute-1 python3.9[93265]: ansible-ansible.builtin.file Invoked with group=qemu mode=0755 owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None serole=None selevel=None attributes=None
Nov 24 09:31:29 compute-1 sudo[93263]: pam_unix(sudo:session): session closed for user root
Nov 24 09:31:29 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:31:29 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcae4003cb0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:31:29 compute-1 sudo[93290]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 24 09:31:29 compute-1 sudo[93290]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:31:29 compute-1 sudo[93290]: pam_unix(sudo:session): session closed for user root
Nov 24 09:31:29 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-haproxy-nfs-cephfs-compute-1-rsdpvy[85975]: [WARNING] 327/093129 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Nov 24 09:31:29 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:31:29 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:31:29 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:31:29.975 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:31:30 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:31:30 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:31:30 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:31:30.283 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:31:30 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 09:31:30 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:31:30 compute-1 ceph-mon[80009]: pgmap v128: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 143 B/s rd, 0 op/s; 15 B/s, 0 objects/s recovering
Nov 24 09:31:30 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:31:30 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:31:30 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:31:30 compute-1 sudo[93441]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-piefhrosrfsqlepabjpofjcbamaevqlw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976690.2464936-799-196672191730369/AnsiballZ_dnf.py'
Nov 24 09:31:30 compute-1 sudo[93441]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:31:30 compute-1 python3.9[93443]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 24 09:31:31 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:31:31 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcadc003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:31:31 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:31:31 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcb14009b90 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:31:31 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:31:31 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcafc003980 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:31:31 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:31:31 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 09:31:31 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:31:31.978 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 09:31:32 compute-1 sudo[93441]: pam_unix(sudo:session): session closed for user root
Nov 24 09:31:32 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:31:32 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:31:32 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:31:32.287 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:31:32 compute-1 ceph-mon[80009]: pgmap v129: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 250 B/s rd, 0 op/s; 13 B/s, 0 objects/s recovering
Nov 24 09:31:32 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:31:32 compute-1 sudo[93595]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pjfhxajuxniwqwyiggqwxhfutkvncrjl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976692.564457-823-236927427857120/AnsiballZ_file.py'
Nov 24 09:31:32 compute-1 sudo[93595]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:31:33 compute-1 python3.9[93597]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/modules-load.d setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 09:31:33 compute-1 sudo[93595]: pam_unix(sudo:session): session closed for user root
Nov 24 09:31:33 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:31:33 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcae4003cd0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:31:33 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:31:33 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcadc003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:31:33 compute-1 sudo[93747]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lnednespssbvnzlcdfmrwabwtbaywcyk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976693.368834-847-20023079015221/AnsiballZ_stat.py'
Nov 24 09:31:33 compute-1 sudo[93747]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:31:33 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:31:33 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcb14009b90 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:31:33 compute-1 python3.9[93749]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 09:31:33 compute-1 sudo[93747]: pam_unix(sudo:session): session closed for user root
Nov 24 09:31:33 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:31:33 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 09:31:33 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:31:33.982 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 09:31:34 compute-1 sudo[93826]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jatkjbmtnrvudusipntisufvwbzrgwmo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976693.368834-847-20023079015221/AnsiballZ_file.py'
Nov 24 09:31:34 compute-1 sudo[93826]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:31:34 compute-1 python3.9[93828]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/modules-load.d/99-edpm.conf _original_basename=edpm-modprobe.conf.j2 recurse=False state=file path=/etc/modules-load.d/99-edpm.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 09:31:34 compute-1 sudo[93826]: pam_unix(sudo:session): session closed for user root
Nov 24 09:31:34 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:31:34 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:31:34 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:31:34.291 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:31:34 compute-1 ceph-mon[80009]: pgmap v130: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 204 B/s rd, 0 op/s; 10 B/s, 0 objects/s recovering
Nov 24 09:31:35 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:31:35 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcafc003980 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:31:35 compute-1 sudo[93979]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yqzixyxywkgrqwxwdmsjvkwunkqzteig ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976694.7845042-886-2151156014732/AnsiballZ_stat.py'
Nov 24 09:31:35 compute-1 sudo[93979]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:31:35 compute-1 python3.9[93981]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 09:31:35 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:31:35 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcae4003cf0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:31:35 compute-1 sudo[93979]: pam_unix(sudo:session): session closed for user root
Nov 24 09:31:35 compute-1 sudo[94057]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wixkeuprudhfzkcnwraaixeclvtkbsjb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976694.7845042-886-2151156014732/AnsiballZ_file.py'
Nov 24 09:31:35 compute-1 sudo[94057]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:31:35 compute-1 python3.9[94059]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/sysctl.d/99-edpm.conf _original_basename=edpm-sysctl.conf.j2 recurse=False state=file path=/etc/sysctl.d/99-edpm.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 09:31:35 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:31:35 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcadc003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:31:35 compute-1 sudo[94057]: pam_unix(sudo:session): session closed for user root
Nov 24 09:31:35 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:31:35 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 09:31:35 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:31:35.986 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 09:31:36 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:31:36 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 09:31:36 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:31:36.294 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 09:31:36 compute-1 ceph-mon[80009]: pgmap v131: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 183 B/s rd, 0 op/s; 9 B/s, 0 objects/s recovering
Nov 24 09:31:36 compute-1 sudo[94210]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lddiwzsghfegmzofblhcjwsilsibdhoy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976696.482145-931-108728110156055/AnsiballZ_dnf.py'
Nov 24 09:31:36 compute-1 sudo[94210]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:31:37 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:31:37 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcb14009b90 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:31:37 compute-1 python3.9[94212]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 24 09:31:37 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:31:37 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcafc003980 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:31:37 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:31:37 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcae4003d10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:31:37 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:31:37 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:31:37 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 09:31:37 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:31:37.989 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 09:31:38 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:31:38 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:31:38 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:31:38.298 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:31:38 compute-1 sudo[94210]: pam_unix(sudo:session): session closed for user root
Nov 24 09:31:38 compute-1 ceph-mon[80009]: pgmap v132: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s; 9 B/s, 0 objects/s recovering
Nov 24 09:31:39 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:31:39 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcadc003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:31:39 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:31:39 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcb14009b90 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:31:39 compute-1 sudo[94291]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 09:31:39 compute-1 sudo[94291]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:31:39 compute-1 sudo[94291]: pam_unix(sudo:session): session closed for user root
Nov 24 09:31:39 compute-1 python3.9[94389]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 09:31:39 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:31:39 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcafc003980 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:31:39 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:31:39 : epoch 69242567 : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 09:31:39 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:31:39 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:31:39 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:31:39.992 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:31:40 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:31:40 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:31:40 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:31:40.301 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:31:40 compute-1 python3.9[94542]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile
Nov 24 09:31:40 compute-1 ceph-mon[80009]: pgmap v133: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Nov 24 09:31:41 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:31:41 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcae4003d30 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:31:41 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:31:41 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcadc003c10 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:31:41 compute-1 python3.9[94692]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 09:31:41 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:31:41 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcb14009b90 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:31:41 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:31:41 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 09:31:41 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:31:41.994 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 09:31:42 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:31:42 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 09:31:42 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:31:42.304 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 09:31:42 compute-1 sudo[94843]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xpsuzdjonqfccrswpzxdulnmphmpwxrq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976701.956022-1054-227969039242964/AnsiballZ_systemd.py'
Nov 24 09:31:42 compute-1 sudo[94843]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:31:42 compute-1 ceph-mon[80009]: pgmap v134: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Nov 24 09:31:42 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:31:42 : epoch 69242567 : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 09:31:42 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:31:42 : epoch 69242567 : compute-1 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 09:31:42 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:31:42 compute-1 python3.9[94845]: ansible-ansible.builtin.systemd Invoked with enabled=True name=tuned state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 24 09:31:42 compute-1 systemd[1]: Stopping Dynamic System Tuning Daemon...
Nov 24 09:31:42 compute-1 systemd[1]: tuned.service: Deactivated successfully.
Nov 24 09:31:42 compute-1 systemd[1]: Stopped Dynamic System Tuning Daemon.
Nov 24 09:31:42 compute-1 systemd[1]: Starting Dynamic System Tuning Daemon...
Nov 24 09:31:43 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:31:43 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcafc003980 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:31:43 compute-1 systemd[1]: Started Dynamic System Tuning Daemon.
Nov 24 09:31:43 compute-1 sudo[94843]: pam_unix(sudo:session): session closed for user root
Nov 24 09:31:43 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:31:43 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcae4003d50 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:31:43 compute-1 sshd-session[94884]: error: kex_exchange_identification: read: Connection reset by peer
Nov 24 09:31:43 compute-1 sshd-session[94884]: Connection reset by 69.164.217.74 port 41357
Nov 24 09:31:43 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:31:43 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcae4003d50 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:31:43 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:31:43 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:31:43 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:31:43.998 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:31:44 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:31:44 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:31:44 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:31:44.307 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:31:44 compute-1 python3.9[95011]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline
Nov 24 09:31:44 compute-1 ceph-mon[80009]: pgmap v135: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Nov 24 09:31:45 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:31:45 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcae4003d50 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:31:45 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:31:45 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcb14009b90 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:31:45 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 09:31:45 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:31:45 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:31:45 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcafc003980 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:31:45 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:31:45 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:31:45 : epoch 69242567 : compute-1 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Nov 24 09:31:46 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:31:46 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:31:46 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:31:46.002 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:31:46 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:31:46 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:31:46 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:31:46.311 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:31:46 compute-1 ceph-mon[80009]: pgmap v136: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Nov 24 09:31:47 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:31:47 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcae8002550 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:31:47 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:31:47 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcae4003ef0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:31:47 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:31:47 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcb14009b90 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:31:47 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:31:48 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:31:48 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 09:31:48 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:31:48.004 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 09:31:48 compute-1 sudo[95163]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kfgzygjwgsuebphxhtjesecxeguewpmq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976707.8251383-1225-228049409319332/AnsiballZ_systemd.py'
Nov 24 09:31:48 compute-1 sudo[95163]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:31:48 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:31:48 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 09:31:48 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:31:48.313 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 09:31:48 compute-1 python3.9[95165]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksm.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 24 09:31:48 compute-1 sudo[95163]: pam_unix(sudo:session): session closed for user root
Nov 24 09:31:48 compute-1 ceph-mon[80009]: pgmap v137: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 1023 B/s wr, 3 op/s
Nov 24 09:31:48 compute-1 sudo[95317]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-euzlsligjlbbwfsdlorvigddxahxxzhj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976708.5704575-1225-148534202275296/AnsiballZ_systemd.py'
Nov 24 09:31:48 compute-1 sudo[95317]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:31:49 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:31:49 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcafc003980 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:31:49 compute-1 python3.9[95319]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksmtuned.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 24 09:31:49 compute-1 sudo[95317]: pam_unix(sudo:session): session closed for user root
Nov 24 09:31:49 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:31:49 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcae8001ea0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:31:49 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:31:49 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcae4003ef0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:31:50 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:31:50 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 09:31:50 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:31:50.007 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 09:31:50 compute-1 sshd-session[88922]: Connection closed by 192.168.122.30 port 46418
Nov 24 09:31:50 compute-1 sshd-session[88919]: pam_unix(sshd:session): session closed for user zuul
Nov 24 09:31:50 compute-1 systemd[1]: session-38.scope: Deactivated successfully.
Nov 24 09:31:50 compute-1 systemd[1]: session-38.scope: Consumed 1min 404ms CPU time.
Nov 24 09:31:50 compute-1 systemd-logind[823]: Session 38 logged out. Waiting for processes to exit.
Nov 24 09:31:50 compute-1 systemd-logind[823]: Removed session 38.
Nov 24 09:31:50 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:31:50 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:31:50 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:31:50.316 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:31:50 compute-1 ceph-mon[80009]: pgmap v138: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 1023 B/s wr, 3 op/s
Nov 24 09:31:51 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:31:51 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcb14009b90 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:31:51 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:31:51 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcafc003980 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:31:51 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:31:51 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcae8001ea0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:31:51 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-haproxy-nfs-cephfs-compute-1-rsdpvy[85975]: [WARNING] 327/093151 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Nov 24 09:31:52 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:31:52 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:31:52 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:31:52.010 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:31:52 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:31:52 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 09:31:52 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:31:52.319 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 09:31:52 compute-1 ceph-mon[80009]: pgmap v139: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 1023 B/s wr, 3 op/s
Nov 24 09:31:52 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:31:53 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:31:53 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcae4003ef0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:31:53 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:31:53 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcb14009b90 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:31:53 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:31:53 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcafc003980 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:31:54 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:31:54 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:31:54 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:31:54.013 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:31:54 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:31:54 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:31:54 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:31:54.323 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:31:54 compute-1 ceph-mon[80009]: pgmap v140: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Nov 24 09:31:55 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:31:55 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcae8003040 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:31:55 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:31:55 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcae4003ef0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:31:55 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:31:55 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcb14009b90 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:31:55 compute-1 sshd-session[95349]: Accepted publickey for zuul from 192.168.122.30 port 58982 ssh2: ECDSA SHA256:MeSde0OmmlmFVnLWx/OKNxgeUUFhxUB3MA0eUyH5QEE
Nov 24 09:31:55 compute-1 systemd-logind[823]: New session 39 of user zuul.
Nov 24 09:31:55 compute-1 systemd[1]: Started Session 39 of User zuul.
Nov 24 09:31:55 compute-1 sshd-session[95349]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 24 09:31:56 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:31:56 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:31:56 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:31:56.017 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:31:56 compute-1 ceph-mon[80009]: pgmap v141: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Nov 24 09:31:56 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:31:56 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:31:56 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:31:56.326 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:31:56 compute-1 python3.9[95503]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 24 09:31:57 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:31:57 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcafc003980 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:31:57 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:31:57 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcae8003040 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:31:57 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:31:57 compute-1 sudo[95658]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mxelzszoqoqatzwdjhnjuzcosbdzuqiw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976717.5702598-69-18007936271116/AnsiballZ_getent.py'
Nov 24 09:31:57 compute-1 sudo[95658]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:31:58 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:31:58 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:31:58 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:31:58.018 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:31:58 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:31:58 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcae4003ef0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:31:58 compute-1 python3.9[95660]: ansible-ansible.builtin.getent Invoked with database=passwd key=openvswitch fail_key=True service=None split=None
Nov 24 09:31:58 compute-1 sudo[95658]: pam_unix(sudo:session): session closed for user root
Nov 24 09:31:58 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:31:58 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:31:58 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:31:58.328 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:31:58 compute-1 ceph-mon[80009]: pgmap v142: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 426 B/s wr, 1 op/s
Nov 24 09:31:58 compute-1 sudo[95812]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jsnyhnjipnyfslvugummzgnnaidnkwmq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976718.6205916-105-132492764729886/AnsiballZ_setup.py'
Nov 24 09:31:58 compute-1 sudo[95812]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:31:59 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:31:59 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcae4003ef0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:31:59 compute-1 python3.9[95814]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 24 09:31:59 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:31:59 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcb14009b90 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:31:59 compute-1 sudo[95819]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 09:31:59 compute-1 sudo[95819]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:31:59 compute-1 sudo[95819]: pam_unix(sudo:session): session closed for user root
Nov 24 09:31:59 compute-1 sudo[95812]: pam_unix(sudo:session): session closed for user root
Nov 24 09:31:59 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:31:59 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcae8003040 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:31:59 compute-1 sudo[95921]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jnmmxoksnilimxswddabroelltkfnfnc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976718.6205916-105-132492764729886/AnsiballZ_dnf.py'
Nov 24 09:31:59 compute-1 sudo[95921]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:32:00 compute-1 python3.9[95923]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openvswitch'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Nov 24 09:32:00 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:32:00 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:32:00 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:32:00.023 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:32:00 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:32:00 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:32:00 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:32:00.331 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:32:00 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 09:32:00 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:32:00 compute-1 ceph-mon[80009]: pgmap v143: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Nov 24 09:32:00 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:32:01 compute-1 anacron[29933]: Job `cron.weekly' started
Nov 24 09:32:01 compute-1 anacron[29933]: Job `cron.weekly' terminated
Nov 24 09:32:01 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:32:01 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcafc003980 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:32:01 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:32:01 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcae4003f10 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:32:01 compute-1 sudo[95921]: pam_unix(sudo:session): session closed for user root
Nov 24 09:32:01 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:32:01 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcb14009b90 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:32:02 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:32:02 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:32:02 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:32:02.025 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:32:02 compute-1 sudo[96078]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kpimtpbxeoifqlusnuezxktantjsdywh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976721.8466039-147-253220384642128/AnsiballZ_dnf.py'
Nov 24 09:32:02 compute-1 sudo[96078]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:32:02 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:32:02 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:32:02 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:32:02.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:32:02 compute-1 python3.9[96080]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 24 09:32:02 compute-1 ceph-mon[80009]: pgmap v144: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Nov 24 09:32:02 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:32:03 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:32:03 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcae8003d50 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:32:03 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:32:03 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcafc003980 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:32:03 compute-1 sudo[96078]: pam_unix(sudo:session): session closed for user root
Nov 24 09:32:03 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:32:03 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcae4003f30 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:32:04 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:32:04 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:32:04 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:32:04.028 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:32:04 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:32:04 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:32:04 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:32:04.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:32:04 compute-1 ceph-mon[80009]: pgmap v145: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:32:04 compute-1 sudo[96232]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ykzbcerhkkyhfsejnabnyeoxphswxytg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976723.9435039-171-229330796389815/AnsiballZ_systemd.py'
Nov 24 09:32:04 compute-1 sudo[96232]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:32:04 compute-1 python3.9[96234]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 24 09:32:04 compute-1 sudo[96232]: pam_unix(sudo:session): session closed for user root
Nov 24 09:32:05 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:32:05 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcb14009b90 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:32:05 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:32:05 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcae8003d50 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:32:05 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:32:05 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcafc003980 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:32:05 compute-1 python3.9[96387]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 24 09:32:06 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:32:06 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:32:06 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:32:06.032 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:32:06 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:32:06 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:32:06 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:32:06.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:32:06 compute-1 ceph-mon[80009]: pgmap v146: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:32:06 compute-1 sudo[96539]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-emklprokdbnlzuenzkokyzawohizlbky ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976726.2467792-225-174836239189128/AnsiballZ_sefcontext.py'
Nov 24 09:32:06 compute-1 sudo[96539]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:32:06 compute-1 python3.9[96541]: ansible-community.general.sefcontext Invoked with selevel=s0 setype=container_file_t state=present target=/var/lib/edpm-config(/.*)? ignore_selinux_state=False ftype=a reload=True substitute=None seuser=None
Nov 24 09:32:07 compute-1 sudo[96539]: pam_unix(sudo:session): session closed for user root
Nov 24 09:32:07 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:32:07 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcafc003980 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:32:07 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:32:07 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcb14009b90 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:32:07 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:32:07 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcae8003d50 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:32:07 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:32:08 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:32:08 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:32:08 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:32:08.035 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:32:08 compute-1 python3.9[96691]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 24 09:32:08 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:32:08 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:32:08 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:32:08.343 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:32:08 compute-1 ceph-mon[80009]: pgmap v147: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Nov 24 09:32:09 compute-1 sudo[96848]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-usmvoaghlhraougbpnvjcfubfznucfzs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976728.7415242-279-81805465114365/AnsiballZ_dnf.py'
Nov 24 09:32:09 compute-1 sudo[96848]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:32:09 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:32:09 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcafc003980 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:32:09 compute-1 python3.9[96850]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 24 09:32:09 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:32:09 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcae4003f90 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:32:09 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:32:09 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcae4003f90 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:32:10 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:32:10 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:32:10 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:32:10.037 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:32:10 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:32:10 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:32:10 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:32:10.346 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:32:10 compute-1 ceph-mon[80009]: pgmap v148: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:32:10 compute-1 sudo[96848]: pam_unix(sudo:session): session closed for user root
Nov 24 09:32:11 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:32:11 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcae8003d50 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:32:11 compute-1 sudo[97002]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nahpcjaagpshnufgwqbaczhodzuicklh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976730.871664-303-84634907770619/AnsiballZ_command.py'
Nov 24 09:32:11 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:32:11 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcafc003980 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:32:11 compute-1 sudo[97002]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:32:11 compute-1 python3.9[97004]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 09:32:11 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:32:11 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcb14009b90 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:32:12 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:32:12 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:32:12 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:32:12.040 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:32:12 compute-1 sudo[97002]: pam_unix(sudo:session): session closed for user root
Nov 24 09:32:12 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:32:12 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:32:12 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:32:12.350 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:32:12 compute-1 ceph-mon[80009]: pgmap v149: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Nov 24 09:32:12 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:32:13 compute-1 sudo[97290]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ugkxwkwflcyxjjtefhnttnsldvsguqwf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976732.5913613-327-247504058005994/AnsiballZ_file.py'
Nov 24 09:32:13 compute-1 sudo[97290]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:32:13 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:32:13 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcae4003fb0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:32:13 compute-1 python3.9[97292]: ansible-ansible.builtin.file Invoked with mode=0750 path=/var/lib/edpm-config selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Nov 24 09:32:13 compute-1 sudo[97290]: pam_unix(sudo:session): session closed for user root
Nov 24 09:32:13 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:32:13 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcae8003d50 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:32:13 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:32:13 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcadc001090 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:32:14 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:32:14 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:32:14 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:32:14.043 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:32:14 compute-1 python3.9[97444]: ansible-ansible.builtin.stat Invoked with path=/etc/cloud/cloud.cfg.d follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 09:32:14 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:32:14 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:32:14 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:32:14.353 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:32:14 compute-1 ceph-mon[80009]: pgmap v150: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:32:14 compute-1 sudo[97596]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fjyxwsvqxykqgfoaxahssbsryedbkfid ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976734.4123032-375-172113678589869/AnsiballZ_dnf.py'
Nov 24 09:32:14 compute-1 sudo[97596]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:32:14 compute-1 python3.9[97598]: ansible-ansible.legacy.dnf Invoked with name=['NetworkManager-ovs'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 24 09:32:15 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:32:15 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcb14009b90 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:32:15 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:32:15 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcae4003fd0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:32:15 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 09:32:15 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:32:15 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:32:15 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:32:15 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcae4003fd0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:32:16 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:32:16 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:32:16 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:32:16.047 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:32:16 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:32:16 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:32:16 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:32:16.358 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:32:16 compute-1 sudo[97596]: pam_unix(sudo:session): session closed for user root
Nov 24 09:32:16 compute-1 ceph-mon[80009]: pgmap v151: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:32:16 compute-1 sudo[97751]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bytglkgjmyjmcfhrupbqxlnwffjwcwan ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976736.7209191-402-207301640297709/AnsiballZ_dnf.py'
Nov 24 09:32:16 compute-1 sudo[97751]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:32:17 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:32:17 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcadc001120 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:32:17 compute-1 python3.9[97753]: ansible-ansible.legacy.dnf Invoked with name=['os-net-config'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 24 09:32:17 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:32:17 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcb14009b90 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:32:17 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:32:17 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcaf00014d0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:32:17 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:32:18 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:32:18 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:32:18 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:32:18.051 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:32:18 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:32:18 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:32:18 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:32:18.362 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:32:18 compute-1 sudo[97751]: pam_unix(sudo:session): session closed for user root
Nov 24 09:32:18 compute-1 ceph-mon[80009]: pgmap v152: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Nov 24 09:32:19 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:32:19 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcae4003fd0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:32:19 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:32:19 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcadc002050 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:32:19 compute-1 sudo[97905]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wlmyjaaarsnflnjwtphnxxgsfjzdbhto ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976739.0879533-438-12762359071024/AnsiballZ_stat.py'
Nov 24 09:32:19 compute-1 sudo[97905]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:32:19 compute-1 sudo[97908]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 09:32:19 compute-1 sudo[97908]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:32:19 compute-1 sudo[97908]: pam_unix(sudo:session): session closed for user root
Nov 24 09:32:19 compute-1 python3.9[97907]: ansible-ansible.builtin.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 09:32:19 compute-1 sudo[97905]: pam_unix(sudo:session): session closed for user root
Nov 24 09:32:19 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:32:19 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcb14009b90 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:32:20 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:32:20 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:32:20 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:32:20.054 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:32:20 compute-1 sudo[98085]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yveedzxuecaaysggxrgkfqmredrigpxx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976739.877171-462-145998175082381/AnsiballZ_slurp.py'
Nov 24 09:32:20 compute-1 sudo[98085]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:32:20 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:32:20 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:32:20 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:32:20.365 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:32:20 compute-1 python3.9[98087]: ansible-ansible.builtin.slurp Invoked with path=/var/lib/edpm-config/os-net-config.returncode src=/var/lib/edpm-config/os-net-config.returncode
Nov 24 09:32:20 compute-1 sudo[98085]: pam_unix(sudo:session): session closed for user root
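The ansible.builtin.slurp call at 09:32:20 reads /var/lib/edpm-config/os-net-config.returncode; slurp hands the file body back base64-encoded, so the caller decodes it before use. A sketch under that assumption, with the payload contents invented for illustration (the real file body is not shown in this log):

    import base64

    # Hypothetical slurp result; slurp returns the file body base64-encoded
    # under "content". The b"0\n" body here is illustrative only.
    result = {
        "source": "/var/lib/edpm-config/os-net-config.returncode",
        "encoding": "base64",
        "content": base64.b64encode(b"0\n").decode(),
    }
    returncode = int(base64.b64decode(result["content"]).decode().strip())
    print(returncode)  # 0 would mean the previous os-net-config run succeeded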
Nov 24 09:32:20 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-haproxy-nfs-cephfs-compute-1-rsdpvy[85975]: [WARNING] 327/093220 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Nov 24 09:32:20 compute-1 ceph-mon[80009]: pgmap v153: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Nov 24 09:32:21 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:32:21 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcaf0001670 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:32:21 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:32:21 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcae4003fd0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:32:21 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:32:21 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcadc002050 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:32:21 compute-1 sshd-session[95353]: Connection closed by 192.168.122.30 port 58982
Nov 24 09:32:21 compute-1 sshd-session[95349]: pam_unix(sshd:session): session closed for user zuul
Nov 24 09:32:21 compute-1 systemd[1]: session-39.scope: Deactivated successfully.
Nov 24 09:32:21 compute-1 systemd[1]: session-39.scope: Consumed 17.461s CPU time.
Nov 24 09:32:21 compute-1 systemd-logind[823]: Session 39 logged out. Waiting for processes to exit.
Nov 24 09:32:21 compute-1 systemd-logind[823]: Removed session 39.
Nov 24 09:32:22 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:32:22 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:32:22 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:32:22.057 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:32:22 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:32:22 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:32:22 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:32:22.369 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:32:22 compute-1 ceph-mon[80009]: pgmap v154: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:32:22 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:32:23 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:32:23 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcb14009b90 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:32:23 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:32:23 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcaf0001670 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:32:23 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:32:23 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcae4003fd0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:32:24 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:32:24 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:32:24 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:32:24.059 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:32:24 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:32:24 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:32:24 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:32:24.371 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:32:24 compute-1 ceph-mon[80009]: pgmap v155: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Nov 24 09:32:25 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:32:25 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcadc002050 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:32:25 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:32:25 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcb1400ac90 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:32:25 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:32:25 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcaf0001670 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:32:26 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:32:26 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:32:26 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:32:26.063 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:32:26 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:32:26 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:32:26 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:32:26.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:32:26 compute-1 ceph-mon[80009]: pgmap v156: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Nov 24 09:32:26 compute-1 sshd-session[98115]: Accepted publickey for zuul from 192.168.122.30 port 60552 ssh2: ECDSA SHA256:MeSde0OmmlmFVnLWx/OKNxgeUUFhxUB3MA0eUyH5QEE
Nov 24 09:32:27 compute-1 systemd-logind[823]: New session 40 of user zuul.
Nov 24 09:32:27 compute-1 systemd[1]: Started Session 40 of User zuul.
Nov 24 09:32:27 compute-1 sshd-session[98115]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 24 09:32:27 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:32:27 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcae4003ff0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:32:27 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:32:27 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcadc0036b0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:32:27 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:32:27 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcb1400ac90 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:32:27 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:32:28 compute-1 python3.9[98268]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 24 09:32:28 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:32:28 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:32:28 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:32:28.066 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:32:28 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:32:28 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:32:28 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:32:28.377 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:32:28 compute-1 ceph-mon[80009]: pgmap v157: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:32:29 compute-1 python3.9[98423]: ansible-ansible.builtin.setup Invoked with filter=['ansible_default_ipv4'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 24 09:32:29 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:32:29 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcaf0001670 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:32:29 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:32:29 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcae4004010 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:32:29 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:32:29 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcadc0036b0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:32:29 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:32:29 : epoch 69242567 : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 09:32:29 compute-1 sudo[98512]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 09:32:29 compute-1 sudo[98512]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:32:29 compute-1 sudo[98512]: pam_unix(sudo:session): session closed for user root
Nov 24 09:32:29 compute-1 sudo[98569]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Nov 24 09:32:29 compute-1 sudo[98569]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:32:30 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:32:30 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:32:30 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:32:30.069 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:32:30 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 09:32:30 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:32:30 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:32:30 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:32:30 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:32:30.380 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:32:30 compute-1 sudo[98569]: pam_unix(sudo:session): session closed for user root
Nov 24 09:32:30 compute-1 python3.9[98684]: ansible-ansible.legacy.command Invoked with _raw_params=hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 09:32:30 compute-1 ceph-mon[80009]: pgmap v158: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Nov 24 09:32:30 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:32:30 compute-1 sshd-session[98118]: Connection closed by 192.168.122.30 port 60552
Nov 24 09:32:30 compute-1 sshd-session[98115]: pam_unix(sshd:session): session closed for user zuul
Nov 24 09:32:30 compute-1 systemd[1]: session-40.scope: Deactivated successfully.
Nov 24 09:32:30 compute-1 systemd[1]: session-40.scope: Consumed 2.241s CPU time.
Nov 24 09:32:30 compute-1 systemd-logind[823]: Session 40 logged out. Waiting for processes to exit.
Nov 24 09:32:30 compute-1 systemd-logind[823]: Removed session 40.
Nov 24 09:32:31 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:32:31 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcadc0036b0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:32:31 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:32:31 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcaf0001670 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:32:31 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:32:31 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcae4004030 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:32:31 compute-1 ceph-mon[80009]: pgmap v159: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Nov 24 09:32:32 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:32:32 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:32:32 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:32:32.072 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:32:32 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:32:32 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:32:32 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:32:32.384 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:32:32 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:32:32 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:32:32 : epoch 69242567 : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 09:32:32 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:32:32 : epoch 69242567 : compute-1 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 09:32:33 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:32:33 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcadc0036b0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:32:33 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:32:33 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcadc0036b0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:32:33 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 24 09:32:33 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 24 09:32:33 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:32:33 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcae8003d50 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:32:34 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 24 09:32:34 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 09:32:34 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Nov 24 09:32:34 compute-1 ceph-mon[80009]: log_channel(audit) log [INF] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 09:32:34 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:32:34 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:32:34 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:32:34.073 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:32:34 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Nov 24 09:32:34 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Nov 24 09:32:34 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Nov 24 09:32:34 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 09:32:34 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Nov 24 09:32:34 compute-1 ceph-mon[80009]: log_channel(audit) log [INF] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 09:32:34 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 24 09:32:34 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 09:32:34 compute-1 ceph-mon[80009]: pgmap v160: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Nov 24 09:32:34 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:32:34 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:32:34 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 09:32:34 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 09:32:34 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:32:34 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:32:34 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 09:32:34 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 09:32:34 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 09:32:34 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:32:34 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 09:32:34 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:32:34.386 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 09:32:35 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:32:35 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcb08002920 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:32:35 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:32:35 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcb1400ac90 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:32:35 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:32:35 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcadc0043c0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:32:35 compute-1 sshd-session[98729]: Accepted publickey for zuul from 192.168.122.30 port 38660 ssh2: ECDSA SHA256:MeSde0OmmlmFVnLWx/OKNxgeUUFhxUB3MA0eUyH5QEE
Nov 24 09:32:35 compute-1 systemd-logind[823]: New session 41 of user zuul.
Nov 24 09:32:35 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:32:35 : epoch 69242567 : compute-1 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Nov 24 09:32:35 compute-1 systemd[1]: Started Session 41 of User zuul.
Nov 24 09:32:35 compute-1 sshd-session[98729]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 24 09:32:36 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:32:36 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:32:36 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:32:36.076 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:32:36 compute-1 ceph-mon[80009]: pgmap v161: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Nov 24 09:32:36 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:32:36 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:32:36 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:32:36.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:32:36 compute-1 python3.9[98883]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 24 09:32:37 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:32:37 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcae8003d50 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:32:37 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:32:37 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcb08002920 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:32:37 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:32:37 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcb1400ac90 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:32:37 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:32:38 compute-1 python3.9[99037]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 24 09:32:38 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:32:38 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:32:38 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:32:38.078 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:32:38 compute-1 ceph-mon[80009]: pgmap v162: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 1023 B/s wr, 3 op/s
Nov 24 09:32:38 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:32:38 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:32:38 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:32:38.393 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:32:38 compute-1 sshd-session[71216]: Received disconnect from 38.129.56.127 port 49808:11: disconnected by user
Nov 24 09:32:38 compute-1 sshd-session[71216]: Disconnected from user zuul 38.129.56.127 port 49808
Nov 24 09:32:38 compute-1 sshd-session[71213]: pam_unix(sshd:session): session closed for user zuul
Nov 24 09:32:38 compute-1 systemd[1]: session-19.scope: Deactivated successfully.
Nov 24 09:32:38 compute-1 systemd[1]: session-19.scope: Consumed 8.290s CPU time.
Nov 24 09:32:38 compute-1 systemd-logind[823]: Session 19 logged out. Waiting for processes to exit.
Nov 24 09:32:38 compute-1 systemd-logind[823]: Removed session 19.
Nov 24 09:32:38 compute-1 sudo[99192]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tjptnuhbrxirvqnozbbplpiljraehxoo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976758.4402573-81-128151117656402/AnsiballZ_setup.py'
Nov 24 09:32:38 compute-1 sudo[99192]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:32:38 compute-1 python3.9[99194]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 24 09:32:39 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 24 09:32:39 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 24 09:32:39 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:32:39 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcb1400ac90 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:32:39 compute-1 sudo[99202]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 24 09:32:39 compute-1 sudo[99202]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:32:39 compute-1 sudo[99202]: pam_unix(sudo:session): session closed for user root
Nov 24 09:32:39 compute-1 sudo[99192]: pam_unix(sudo:session): session closed for user root
Nov 24 09:32:39 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:32:39 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcae8003d50 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:32:39 compute-1 sudo[99251]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 09:32:39 compute-1 sudo[99251]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:32:39 compute-1 sudo[99251]: pam_unix(sudo:session): session closed for user root
Nov 24 09:32:39 compute-1 sudo[99326]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-srsrqglsjzzzryekblmnviaitgpzwmye ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976758.4402573-81-128151117656402/AnsiballZ_dnf.py'
Nov 24 09:32:39 compute-1 sudo[99326]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:32:39 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:32:39 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcb08002920 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:32:39 compute-1 python3.9[99328]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 24 09:32:40 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:32:40 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:32:40 compute-1 ceph-mon[80009]: pgmap v163: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 1023 B/s wr, 3 op/s
Nov 24 09:32:40 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:32:40 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:32:40 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:32:40.081 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:32:40 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:32:40 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:32:40 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:32:40.396 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:32:41 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:32:41 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcb1400ac90 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:32:41 compute-1 sudo[99326]: pam_unix(sudo:session): session closed for user root
Nov 24 09:32:41 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:32:41 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcadc0043c0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:32:41 compute-1 sudo[99480]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-efworcsgxhrfhhhhzbhtnowhpfojrsog ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976761.3546982-117-52436865833354/AnsiballZ_setup.py'
Nov 24 09:32:41 compute-1 sudo[99480]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:32:41 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:32:41 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcae8003d50 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:32:41 compute-1 python3.9[99482]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 24 09:32:42 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:32:42 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:32:42 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:32:42.083 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:32:42 compute-1 sudo[99480]: pam_unix(sudo:session): session closed for user root
Nov 24 09:32:42 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:32:42 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:32:42 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:32:42.399 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:32:42 compute-1 ceph-mon[80009]: pgmap v164: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 1023 B/s wr, 3 op/s
Nov 24 09:32:42 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-haproxy-nfs-cephfs-compute-1-rsdpvy[85975]: [WARNING] 327/093242 (4) : Server backend/nfs.cephfs.2 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
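The haproxy health-check WARNINGs in this window record backend/nfs.cephfs.2 going DOWN at 09:32:20 and back UP at 09:32:42. A small sketch for tallying such flaps from journal text, with the pattern assumed from these two samples:

    import re

    # Pattern assumed from the haproxy WARNING lines in this log.
    FLAP = re.compile(r'Server (?P<srv>\S+) is (?P<state>UP|DOWN)')

    history = {}
    for line in [
        'Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem',
        'Server backend/nfs.cephfs.2 is UP, reason: Layer4 check passed',
    ]:
        m = FLAP.search(line)
        if m:
            history.setdefault(m['srv'], []).append(m['state'])
    print(history)  # {'backend/nfs.cephfs.2': ['DOWN', 'UP']}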
Nov 24 09:32:42 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:32:43 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:32:43 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcb08002920 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:32:43 compute-1 sudo[99676]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ykqgsrxmlgeyxyttblimyzclwuibzxkj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976762.8099627-150-192247153621162/AnsiballZ_file.py'
Nov 24 09:32:43 compute-1 sudo[99676]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:32:43 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:32:43 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcb1400ac90 fd 49 proxy ignored for local
Nov 24 09:32:43 compute-1 kernel: ganesha.nfsd[91109]: segfault at 50 ip 00007fcbbf51a32e sp 00007fcb8b7fd210 error 4 in libntirpc.so.5.8[7fcbbf4ff000+2c000] likely on CPU 1 (core 0, socket 1)
Nov 24 09:32:43 compute-1 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Nov 24 09:32:43 compute-1 systemd[1]: Started Process Core Dump (PID 99679/UID 0).
Nov 24 09:32:43 compute-1 python3.9[99678]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:32:43 compute-1 sudo[99676]: pam_unix(sudo:session): session closed for user root
Nov 24 09:32:44 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:32:44 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:32:44 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:32:44.088 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:32:44 compute-1 sudo[99831]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zixsplhzfbicexiibpqixefoqotnvfig ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976763.8029537-174-232782223543955/AnsiballZ_command.py'
Nov 24 09:32:44 compute-1 sudo[99831]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:32:44 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:32:44 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:32:44 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:32:44.403 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:32:44 compute-1 ceph-mon[80009]: pgmap v165: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 426 B/s wr, 1 op/s
Nov 24 09:32:44 compute-1 python3.9[99833]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 09:32:44 compute-1 systemd-coredump[99680]: Process 86459 (ganesha.nfsd) of user 0 dumped core.
                                                   
                                                   Stack trace of thread 61:
                                                   #0  0x00007fcbbf51a32e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)
                                                   ELF object binary architecture: AMD x86-64
Nov 24 09:32:44 compute-1 sudo[99831]: pam_unix(sudo:session): session closed for user root
Nov 24 09:32:44 compute-1 systemd[1]: systemd-coredump@1-99679-0.service: Deactivated successfully.
Nov 24 09:32:44 compute-1 systemd[1]: systemd-coredump@1-99679-0.service: Consumed 1.148s CPU time.
Nov 24 09:32:44 compute-1 podman[99864]: 2025-11-24 09:32:44.594103006 +0000 UTC m=+0.029659727 container died f7b0c338b36b8bdf518e2bc42241679b81bdb5d1de06a8d4b736922ad905c10c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 24 09:32:44 compute-1 systemd[1]: var-lib-containers-storage-overlay-a5a0ea98e4966eef6e9359d2b74c6f9b539ca052e7e8e3709d750180e14075b4-merged.mount: Deactivated successfully.
Nov 24 09:32:44 compute-1 podman[99864]: 2025-11-24 09:32:44.639706034 +0000 UTC m=+0.075262705 container remove f7b0c338b36b8bdf518e2bc42241679b81bdb5d1de06a8d4b736922ad905c10c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_REF=squid, io.buildah.version=1.40.1)
Nov 24 09:32:44 compute-1 systemd[1]: ceph-84a084c3-61a7-5de7-8207-1f88efa59a64@nfs.cephfs.0.0.compute-1.vvoanr.service: Main process exited, code=exited, status=139/n/a
Nov 24 09:32:44 compute-1 systemd[1]: ceph-84a084c3-61a7-5de7-8207-1f88efa59a64@nfs.cephfs.0.0.compute-1.vvoanr.service: Failed with result 'exit-code'.
Nov 24 09:32:44 compute-1 systemd[1]: ceph-84a084c3-61a7-5de7-8207-1f88efa59a64@nfs.cephfs.0.0.compute-1.vvoanr.service: Consumed 1.794s CPU time.
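The kernel segfault line at 09:32:43 carries enough to locate the fault inside libntirpc.so.5.8: instruction pointer, mapping base, and mapping length. A sketch that extracts those fields and computes the offset into the mapped segment; the line shape is assumed from that single sample:

    import re

    # Shape assumed from the 09:32:43 kernel segfault line above.
    SEGV = re.compile(
        r'(?P<comm>[\w.]+)\[(?P<pid>\d+)\]: segfault at (?P<addr>[0-9a-f]+) '
        r'ip (?P<ip>[0-9a-f]+) sp (?P<sp>[0-9a-f]+) error (?P<err>\d+) '
        r'in (?P<obj>[^\[]+)\[(?P<base>[0-9a-f]+)\+(?P<len>[0-9a-f]+)\]'
    )

    line = ('ganesha.nfsd[91109]: segfault at 50 ip 00007fcbbf51a32e '
            'sp 00007fcb8b7fd210 error 4 in libntirpc.so.5.8'
            '[7fcbbf4ff000+2c000]')
    m = SEGV.search(line)
    if m:
        # Offset of the faulting instruction within the mapped segment; note
        # this need not equal the ELF-relative offset systemd-coredump reports.
        off = int(m['ip'], 16) - int(m['base'], 16)
        print(m['comm'], m['pid'], m['obj'], hex(off))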
Nov 24 09:32:45 compute-1 sudo[100043]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gftottflvhbdtxfhbymcdhokbahpvmgr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976764.9296312-198-124860250746272/AnsiballZ_stat.py'
Nov 24 09:32:45 compute-1 sudo[100043]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:32:45 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 09:32:45 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:32:45 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:32:45 compute-1 python3.9[100045]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 09:32:45 compute-1 sudo[100043]: pam_unix(sudo:session): session closed for user root
Nov 24 09:32:45 compute-1 sudo[100121]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-evhqyxilxyzxjgrvbxmvmngfoohpsdpd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976764.9296312-198-124860250746272/AnsiballZ_file.py'
Nov 24 09:32:45 compute-1 sudo[100121]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:32:45 compute-1 python3.9[100123]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/containers/networks/podman.json _original_basename=podman_network_config.j2 recurse=False state=file path=/etc/containers/networks/podman.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:32:45 compute-1 sudo[100121]: pam_unix(sudo:session): session closed for user root
Nov 24 09:32:46 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:32:46 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:32:46 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:32:46.090 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:32:46 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:32:46 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:32:46 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:32:46.406 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:32:46 compute-1 ceph-mon[80009]: pgmap v166: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 426 B/s wr, 1 op/s
Nov 24 09:32:46 compute-1 sudo[100274]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wyytrbuuuhdefuyhdgfbcrdyqbekxapd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976766.35643-234-94630927704717/AnsiballZ_stat.py'
Nov 24 09:32:46 compute-1 sudo[100274]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:32:46 compute-1 python3.9[100276]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 09:32:46 compute-1 sudo[100274]: pam_unix(sudo:session): session closed for user root
Nov 24 09:32:47 compute-1 sudo[100352]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-phlxotmqzykrmkjngfzesnowfudrzcjd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976766.35643-234-94630927704717/AnsiballZ_file.py'
Nov 24 09:32:47 compute-1 sudo[100352]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:32:47 compute-1 python3.9[100354]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf _original_basename=registries.conf.j2 recurse=False state=file path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 09:32:47 compute-1 sudo[100352]: pam_unix(sudo:session): session closed for user root
Nov 24 09:32:47 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:32:48 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:32:48 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:32:48 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:32:48.093 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:32:48 compute-1 sudo[100505]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wttjxijodtkjcccpvdweczkqszupsmsq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976767.7943382-273-82633549613879/AnsiballZ_ini_file.py'
Nov 24 09:32:48 compute-1 sudo[100505]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:32:48 compute-1 python3.9[100507]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 24 09:32:48 compute-1 sudo[100505]: pam_unix(sudo:session): session closed for user root
Nov 24 09:32:48 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:32:48 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:32:48 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:32:48.409 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:32:48 compute-1 ceph-mon[80009]: pgmap v167: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 426 B/s wr, 2 op/s
Nov 24 09:32:48 compute-1 sudo[100657]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qkkuswkhlijoulrqbnvlqcvtuapfdjfu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976768.4794052-273-155993958787434/AnsiballZ_ini_file.py'
Nov 24 09:32:48 compute-1 sudo[100657]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:32:48 compute-1 python3.9[100659]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 24 09:32:48 compute-1 sudo[100657]: pam_unix(sudo:session): session closed for user root
Nov 24 09:32:49 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-haproxy-nfs-cephfs-compute-1-rsdpvy[85975]: [WARNING] 327/093249 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Nov 24 09:32:49 compute-1 sudo[100809]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gwkheiyrdtvpiyayhdrfwxzywzzvngjn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976769.0705688-273-220565309957287/AnsiballZ_ini_file.py'
Nov 24 09:32:49 compute-1 sudo[100809]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:32:49 compute-1 python3.9[100811]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 24 09:32:49 compute-1 sudo[100809]: pam_unix(sudo:session): session closed for user root
Nov 24 09:32:50 compute-1 sudo[100962]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-euotwcbjpzsswaugmzaiecogvazejenv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976769.7684498-273-249284804417073/AnsiballZ_ini_file.py'
Nov 24 09:32:50 compute-1 sudo[100962]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:32:50 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:32:50 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:32:50 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:32:50.097 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:32:50 compute-1 python3.9[100964]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 24 09:32:50 compute-1 sudo[100962]: pam_unix(sudo:session): session closed for user root
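[editor's note] Taken together, the four community.general.ini_file invocations above (pids_limit, events_logger, runtime, network_backend) should leave /etc/containers/containers.conf looking roughly like the following, assuming the file did not exist beforehand (create=True); values are reproduced exactly as passed, including the quoting:

    [containers]
    pids_limit = 4096

    [engine]
    events_logger = "journald"
    runtime = "crun"

    [network]
    network_backend = "netavark"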
Nov 24 09:32:50 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:32:50 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:32:50 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:32:50.412 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:32:50 compute-1 ceph-mon[80009]: pgmap v168: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 B/s wr, 0 op/s
Nov 24 09:32:51 compute-1 sudo[101114]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kmolzrmhxokdzpdmvxzxsrsjcihzdaxo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976770.9472368-366-191340595413967/AnsiballZ_dnf.py'
Nov 24 09:32:51 compute-1 sudo[101114]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:32:51 compute-1 python3.9[101116]: ansible-ansible.legacy.dnf Invoked with name=['openssh-server'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 24 09:32:52 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:32:52 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:32:52 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:32:52.100 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:32:52 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:32:52 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:32:52 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:32:52.415 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:32:52 compute-1 ceph-mon[80009]: pgmap v169: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Nov 24 09:32:52 compute-1 sudo[101114]: pam_unix(sudo:session): session closed for user root
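[editor's note] The ansible.legacy.dnf call above simply ensures openssh-server is present. The module drives the dnf Python bindings directly rather than the CLI, but the effect is the same as an idempotent install; a sketch:

    import subprocess

    # Equivalent effect of the logged dnf module call (state=present).
    subprocess.run(["dnf", "-y", "install", "openssh-server"], check=True)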
Nov 24 09:32:52 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:32:53 compute-1 sudo[101268]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kskmvrncrskfsbtbfzhdtsyvqvyfpime ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976773.2886314-399-86538626588027/AnsiballZ_setup.py'
Nov 24 09:32:53 compute-1 sudo[101268]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:32:53 compute-1 python3.9[101270]: ansible-setup Invoked with gather_subset=['!all', '!min', 'distribution', 'distribution_major_version', 'distribution_version', 'os_family'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 24 09:32:53 compute-1 sudo[101268]: pam_unix(sudo:session): session closed for user root
Nov 24 09:32:54 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:32:54 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:32:54 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:32:54.101 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:32:54 compute-1 sudo[101423]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zrrsouhzsngxsbtzxgihsrcdlbiuegnj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976774.0932975-423-196397348967435/AnsiballZ_stat.py'
Nov 24 09:32:54 compute-1 sudo[101423]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:32:54 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:32:54 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:32:54 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:32:54.419 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:32:54 compute-1 ceph-mon[80009]: pgmap v170: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Nov 24 09:32:54 compute-1 python3.9[101425]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 09:32:54 compute-1 sudo[101423]: pam_unix(sudo:session): session closed for user root
Nov 24 09:32:54 compute-1 systemd[1]: ceph-84a084c3-61a7-5de7-8207-1f88efa59a64@nfs.cephfs.0.0.compute-1.vvoanr.service: Scheduled restart job, restart counter is at 2.
Nov 24 09:32:54 compute-1 systemd[1]: Stopped Ceph nfs.cephfs.0.0.compute-1.vvoanr for 84a084c3-61a7-5de7-8207-1f88efa59a64.
Nov 24 09:32:54 compute-1 systemd[1]: ceph-84a084c3-61a7-5de7-8207-1f88efa59a64@nfs.cephfs.0.0.compute-1.vvoanr.service: Consumed 1.794s CPU time.
Nov 24 09:32:55 compute-1 systemd[1]: Starting Ceph nfs.cephfs.0.0.compute-1.vvoanr for 84a084c3-61a7-5de7-8207-1f88efa59a64...
Nov 24 09:32:55 compute-1 podman[101568]: 2025-11-24 09:32:55.205348135 +0000 UTC m=+0.034626120 container create 89e9e06f9211bc7e046f65662fa13ddb2cb8e39af97781a83cf33fe42fb7a133 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid)
Nov 24 09:32:55 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b08c191ca12641a1e68bb6e456ff7064fde48d688012042f510575c5e2cb0c34/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Nov 24 09:32:55 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b08c191ca12641a1e68bb6e456ff7064fde48d688012042f510575c5e2cb0c34/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 09:32:55 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b08c191ca12641a1e68bb6e456ff7064fde48d688012042f510575c5e2cb0c34/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 09:32:55 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b08c191ca12641a1e68bb6e456ff7064fde48d688012042f510575c5e2cb0c34/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.0.0.compute-1.vvoanr-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 09:32:55 compute-1 podman[101568]: 2025-11-24 09:32:55.257027981 +0000 UTC m=+0.086305896 container init 89e9e06f9211bc7e046f65662fa13ddb2cb8e39af97781a83cf33fe42fb7a133 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 24 09:32:55 compute-1 podman[101568]: 2025-11-24 09:32:55.262027383 +0000 UTC m=+0.091305278 container start 89e9e06f9211bc7e046f65662fa13ddb2cb8e39af97781a83cf33fe42fb7a133 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid)
Nov 24 09:32:55 compute-1 bash[101568]: 89e9e06f9211bc7e046f65662fa13ddb2cb8e39af97781a83cf33fe42fb7a133
Nov 24 09:32:55 compute-1 podman[101568]: 2025-11-24 09:32:55.190605313 +0000 UTC m=+0.019883198 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 09:32:55 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:32:55 : epoch 69242647 : compute-1 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Nov 24 09:32:55 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:32:55 : epoch 69242647 : compute-1 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Nov 24 09:32:55 compute-1 systemd[1]: Started Ceph nfs.cephfs.0.0.compute-1.vvoanr for 84a084c3-61a7-5de7-8207-1f88efa59a64.
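[editor's note] cephadm runs the ganesha daemon as a podman container wrapped in the systemd unit restarted above. A quick way to confirm the container came back (sketch; the container name is taken from the podman "container start" event above, and running this requires root on compute-1):

    import subprocess

    name = "ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr"
    status = subprocess.run(
        ["podman", "ps", "--filter", f"name={name}", "--format", "{{.Status}}"],
        capture_output=True, text=True,
    ).stdout.strip()
    print(status or "container not running")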
Nov 24 09:32:55 compute-1 sudo[101641]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lybeyqyxzkgsuwsmbszvjnqsdhzuomaa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976775.0677843-450-79615099465427/AnsiballZ_stat.py'
Nov 24 09:32:55 compute-1 sudo[101641]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:32:55 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:32:55 : epoch 69242647 : compute-1 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Nov 24 09:32:55 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:32:55 : epoch 69242647 : compute-1 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Nov 24 09:32:55 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:32:55 : epoch 69242647 : compute-1 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Nov 24 09:32:55 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:32:55 : epoch 69242647 : compute-1 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Nov 24 09:32:55 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:32:55 : epoch 69242647 : compute-1 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Nov 24 09:32:55 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:32:55 : epoch 69242647 : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 09:32:55 compute-1 python3.9[101658]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 09:32:55 compute-1 sudo[101641]: pam_unix(sudo:session): session closed for user root
Nov 24 09:32:56 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:32:56 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:32:56 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:32:56.105 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:32:56 compute-1 ceph-mon[80009]: pgmap v171: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Nov 24 09:32:56 compute-1 sudo[101831]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oplcwjwmskzngtwbeuctabroudjqjxyx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976775.9859197-480-77571154889944/AnsiballZ_command.py'
Nov 24 09:32:56 compute-1 sudo[101831]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:32:56 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:32:56 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:32:56 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:32:56.422 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:32:56 compute-1 python3.9[101833]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-system-running _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 09:32:56 compute-1 sudo[101831]: pam_unix(sudo:session): session closed for user root
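[editor's note] systemctl is-system-running is the playbook's readiness probe: it prints the manager state (running, degraded, starting, ...) and exits non-zero for anything other than running. A minimal sketch of the same check:

    import subprocess

    # Same probe the ansible command task runs above; the state string is
    # printed on stdout even when the exit code is non-zero.
    result = subprocess.run(
        ["systemctl", "is-system-running"], capture_output=True, text=True,
    )
    print(f"state={result.stdout.strip()} rc={result.returncode}")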
Nov 24 09:32:57 compute-1 sudo[101984]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hxrerkddyqgcektboihpjthrxxcrdzin ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976776.9093637-510-229603778040171/AnsiballZ_service_facts.py'
Nov 24 09:32:57 compute-1 sudo[101984]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:32:57 compute-1 python3.9[101986]: ansible-service_facts Invoked
Nov 24 09:32:57 compute-1 network[102003]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 24 09:32:57 compute-1 network[102004]: 'network-scripts' will be removed from distribution in near future.
Nov 24 09:32:57 compute-1 network[102005]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 24 09:32:57 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:32:58 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:32:58 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:32:58 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:32:58.109 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:32:58 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:32:58 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:32:58 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:32:58.440 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:32:58 compute-1 ceph-mon[80009]: pgmap v172: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Nov 24 09:32:59 compute-1 sudo[102096]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 09:32:59 compute-1 sudo[102096]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:32:59 compute-1 sudo[102096]: pam_unix(sudo:session): session closed for user root
Nov 24 09:33:00 compute-1 sudo[101984]: pam_unix(sudo:session): session closed for user root
Nov 24 09:33:00 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:33:00 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:33:00 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:33:00.112 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:33:00 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 09:33:00 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:33:00 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:33:00 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:33:00 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:33:00.443 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:33:00 compute-1 ceph-mon[80009]: pgmap v173: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Nov 24 09:33:00 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
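[editor's note] The mon_command above is the active mgr (mgr.compute-0.mauvni) periodically polling the OSD blocklist. The same query can be issued by hand; a sketch, assuming an admin keyring and the ceph CLI are available on the host:

    import json
    import subprocess

    # Same command the mgr dispatches above, issued via the ceph CLI.
    out = subprocess.run(
        ["ceph", "osd", "blocklist", "ls", "--format", "json"],
        capture_output=True, text=True, check=True,
    ).stdout
    print(json.loads(out))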
Nov 24 09:33:01 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:33:01 : epoch 69242647 : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 09:33:01 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:33:01 : epoch 69242647 : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 09:33:02 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:33:02 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:33:02 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:33:02.116 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:33:02 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:33:02 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:33:02 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:33:02.447 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:33:02 compute-1 ceph-mon[80009]: pgmap v174: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Nov 24 09:33:02 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:33:03 compute-1 sudo[102316]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ffrtylmgmrpewdlnipxhifjygwvgehnu ; /bin/bash /home/zuul/.ansible/tmp/ansible-tmp-1763976782.9330647-555-185222383777291/AnsiballZ_timesync_provider.sh /home/zuul/.ansible/tmp/ansible-tmp-1763976782.9330647-555-185222383777291/args'
Nov 24 09:33:03 compute-1 sudo[102316]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:33:03 compute-1 sudo[102316]: pam_unix(sudo:session): session closed for user root
Nov 24 09:33:04 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:33:04 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:33:04 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:33:04.119 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:33:04 compute-1 sudo[102484]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qiqatujmjcahynzuqdzmexuqpsfwssfg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976783.8952737-588-161596920749779/AnsiballZ_dnf.py'
Nov 24 09:33:04 compute-1 sudo[102484]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:33:04 compute-1 python3.9[102486]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 24 09:33:04 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:33:04 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:33:04 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:33:04.451 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:33:04 compute-1 ceph-mon[80009]: pgmap v175: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 597 B/s wr, 1 op/s
Nov 24 09:33:05 compute-1 sudo[102484]: pam_unix(sudo:session): session closed for user root
Nov 24 09:33:06 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:33:06 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:33:06 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:33:06.122 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:33:06 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:33:06 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:33:06 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:33:06.453 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:33:06 compute-1 ceph-mon[80009]: pgmap v176: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 597 B/s wr, 1 op/s
Nov 24 09:33:06 compute-1 sudo[102638]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zbaxbydbsvhaesjvscmbbonpdkeumrko ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976786.3351753-627-230805835853407/AnsiballZ_package_facts.py'
Nov 24 09:33:06 compute-1 sudo[102638]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:33:07 compute-1 python3.9[102640]: ansible-package_facts Invoked with manager=['auto'] strategy=first
Nov 24 09:33:07 compute-1 sudo[102638]: pam_unix(sudo:session): session closed for user root
Nov 24 09:33:07 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:33:07 : epoch 69242647 : compute-1 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Nov 24 09:33:07 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:33:07 : epoch 69242647 : compute-1 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Nov 24 09:33:07 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:33:07 : epoch 69242647 : compute-1 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Nov 24 09:33:07 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:33:07 : epoch 69242647 : compute-1 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Nov 24 09:33:07 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:33:07 : epoch 69242647 : compute-1 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Nov 24 09:33:07 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:33:07 : epoch 69242647 : compute-1 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Nov 24 09:33:07 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:33:07 : epoch 69242647 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Nov 24 09:33:07 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:33:07 : epoch 69242647 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 24 09:33:07 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:33:07 : epoch 69242647 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 24 09:33:07 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:33:07 : epoch 69242647 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 24 09:33:07 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:33:07 : epoch 69242647 : compute-1 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Nov 24 09:33:07 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:33:07 : epoch 69242647 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 24 09:33:07 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:33:07 : epoch 69242647 : compute-1 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Nov 24 09:33:07 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:33:07 : epoch 69242647 : compute-1 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Nov 24 09:33:07 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:33:07 : epoch 69242647 : compute-1 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Nov 24 09:33:07 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:33:07 : epoch 69242647 : compute-1 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Nov 24 09:33:07 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:33:07 : epoch 69242647 : compute-1 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Nov 24 09:33:07 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:33:07 : epoch 69242647 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Nov 24 09:33:07 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:33:07 : epoch 69242647 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Nov 24 09:33:07 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:33:07 : epoch 69242647 : compute-1 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Nov 24 09:33:07 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:33:07 : epoch 69242647 : compute-1 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Nov 24 09:33:07 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:33:07 : epoch 69242647 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Nov 24 09:33:07 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:33:07 : epoch 69242647 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Nov 24 09:33:07 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:33:07 : epoch 69242647 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Nov 24 09:33:07 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:33:07 : epoch 69242647 : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Nov 24 09:33:07 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:33:07 : epoch 69242647 : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Nov 24 09:33:07 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:33:07 : epoch 69242647 : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
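[editor's note] Reading the ganesha startup as a whole: the server entered its NFSv4 grace period at 09:32:55 with a 90-second ceiling, reloaded client reclaim state from the backend, and lifted grace early at 09:33:07 because no clients had state to reclaim (clid count 0). The DBUS CRIT messages reflect the missing /run/dbus/system_bus_socket inside the container, and haproxy's earlier Layer4 DOWN for backend/nfs.cephfs.0 clears once the daemon is listening again (see the UP transition below).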
Nov 24 09:33:07 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:33:07 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7feaa4000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:33:07 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:33:08 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:33:08 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:33:08 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:33:08.126 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:33:08 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:33:08 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:33:08 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:33:08.457 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:33:08 compute-1 ceph-mon[80009]: pgmap v177: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 1023 B/s wr, 3 op/s
Nov 24 09:33:08 compute-1 sudo[102806]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-erdkffoefifgvrltlebnrnmbxqgqaduu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976788.1143818-658-15488881758522/AnsiballZ_stat.py'
Nov 24 09:33:08 compute-1 sudo[102806]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:33:08 compute-1 python3.9[102808]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 09:33:08 compute-1 sudo[102806]: pam_unix(sudo:session): session closed for user root
Nov 24 09:33:09 compute-1 sudo[102884]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-awvcrbpspzmfnelfkywjllhjtyxnoyvh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976788.1143818-658-15488881758522/AnsiballZ_file.py'
Nov 24 09:33:09 compute-1 sudo[102884]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:33:09 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:33:09 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea98001c00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:33:09 compute-1 python3.9[102886]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/chrony.conf _original_basename=chrony.conf.j2 recurse=False state=file path=/etc/chrony.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:33:09 compute-1 sudo[102884]: pam_unix(sudo:session): session closed for user root
Nov 24 09:33:09 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:33:09 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea80000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:33:09 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:33:09 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7feaa4000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:33:09 compute-1 sudo[103037]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pmzyzvcghrurdvvsvpfsruaibpubirws ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976789.6579504-694-272868232482622/AnsiballZ_stat.py'
Nov 24 09:33:09 compute-1 sudo[103037]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:33:10 compute-1 python3.9[103039]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/chronyd follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 09:33:10 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:33:10 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:33:10 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:33:10.129 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:33:10 compute-1 sudo[103037]: pam_unix(sudo:session): session closed for user root
Nov 24 09:33:10 compute-1 sudo[103115]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bnfgowekmbaranygptxswcbecizfwtza ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976789.6579504-694-272868232482622/AnsiballZ_file.py'
Nov 24 09:33:10 compute-1 sudo[103115]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:33:10 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:33:10 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:33:10 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:33:10.460 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:33:10 compute-1 python3.9[103117]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/sysconfig/chronyd _original_basename=chronyd.sysconfig.j2 recurse=False state=file path=/etc/sysconfig/chronyd force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:33:10 compute-1 sudo[103115]: pam_unix(sudo:session): session closed for user root
Nov 24 09:33:10 compute-1 ceph-mon[80009]: pgmap v178: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 938 B/s wr, 2 op/s
Nov 24 09:33:11 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:33:11 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea84000fa0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:33:11 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-haproxy-nfs-cephfs-compute-1-rsdpvy[85975]: [WARNING] 327/093311 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Nov 24 09:33:11 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:33:11 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea84000fa0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:33:11 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:33:11 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea800016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:33:12 compute-1 sudo[103268]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rgfmbobsopeqpymmeviykhmyqhzvovti ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976791.7407231-748-50509658828513/AnsiballZ_lineinfile.py'
Nov 24 09:33:12 compute-1 sudo[103268]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:33:12 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:33:12 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:33:12 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:33:12.132 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:33:12 compute-1 python3.9[103270]: ansible-lineinfile Invoked with backup=True create=True dest=/etc/sysconfig/network line=PEERNTP=no mode=0644 regexp=^PEERNTP= state=present path=/etc/sysconfig/network encoding=utf-8 backrefs=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:33:12 compute-1 sudo[103268]: pam_unix(sudo:session): session closed for user root
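[editor's note] The lineinfile task above pins PEERNTP=no in /etc/sysconfig/network so that DHCP-supplied NTP servers do not override the chrony configuration being deployed. Its core behavior, reduced to the parameters actually logged (regexp=^PEERNTP=, line=PEERNTP=no, create=True; backup and SELinux handling omitted):

    import re
    from pathlib import Path

    # Reduced sketch of the logged lineinfile invocation.
    path = Path("/etc/sysconfig/network")
    pattern = re.compile(r"^PEERNTP=")
    wanted = "PEERNTP=no"
    lines = path.read_text().splitlines() if path.exists() else []
    for i, existing in enumerate(lines):
        if pattern.match(existing):
            lines[i] = wanted  # simplified: replace the first match
            break              # (lineinfile replaces the last by default)
    else:
        lines.append(wanted)   # no match anywhere: append (create=True)
    path.write_text("\n".join(lines) + "\n")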
Nov 24 09:33:12 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:33:12 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:33:12 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:33:12.464 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:33:12 compute-1 ceph-mon[80009]: pgmap v179: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 938 B/s wr, 3 op/s
Nov 24 09:33:12 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:33:13 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:33:13 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7feaa4000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:33:13 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:33:13 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7feaa4000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:33:13 compute-1 sudo[103421]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jatrqdolhyflrawrelpelfwrirzuiygp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976793.4122512-793-177727288393697/AnsiballZ_setup.py'
Nov 24 09:33:13 compute-1 sudo[103421]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:33:13 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:33:13 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea84001f60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:33:13 compute-1 python3.9[103423]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 24 09:33:14 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:33:14 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:33:14 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:33:14.135 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:33:14 compute-1 sudo[103421]: pam_unix(sudo:session): session closed for user root
Nov 24 09:33:14 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:33:14 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:33:14 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:33:14.466 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:33:14 compute-1 ceph-mon[80009]: pgmap v180: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 426 B/s wr, 1 op/s
Nov 24 09:33:14 compute-1 sudo[103506]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kqoijtdfanngklrbdhgfvnfadwtrwdrj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976793.4122512-793-177727288393697/AnsiballZ_systemd.py'
Nov 24 09:33:14 compute-1 sudo[103506]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:33:14 compute-1 python3.9[103508]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 24 09:33:14 compute-1 sudo[103506]: pam_unix(sudo:session): session closed for user root
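[editor's note] The systemd task above (enabled=True, state=started) is the usual `systemctl enable --now chronyd` pattern, and it closes out the time-sync sequence: install chrony, lay down /etc/chrony.conf and /etc/sysconfig/chronyd, disable DHCP-supplied NTP via PEERNTP=no, then enable and start the daemon.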
Nov 24 09:33:15 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:33:15 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea800016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:33:15 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:33:15 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea800016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:33:15 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 09:33:15 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:33:15 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:33:15 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:33:15 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7feaa4000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:33:16 compute-1 sshd-session[98733]: Connection closed by 192.168.122.30 port 38660
Nov 24 09:33:16 compute-1 sshd-session[98729]: pam_unix(sshd:session): session closed for user zuul
Nov 24 09:33:16 compute-1 systemd-logind[823]: Session 41 logged out. Waiting for processes to exit.
Nov 24 09:33:16 compute-1 systemd[1]: session-41.scope: Deactivated successfully.
Nov 24 09:33:16 compute-1 systemd[1]: session-41.scope: Consumed 22.420s CPU time.
Nov 24 09:33:16 compute-1 systemd-logind[823]: Removed session 41.
Nov 24 09:33:16 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:33:16 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 09:33:16 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:33:16.136 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 09:33:16 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:33:16 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:33:16 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:33:16.469 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:33:16 compute-1 ceph-mon[80009]: pgmap v181: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 426 B/s wr, 1 op/s
Nov 24 09:33:17 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:33:17 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea84001f60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:33:17 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:33:17 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea800016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:33:17 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:33:17 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea98002520 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:33:17 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:33:18 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:33:18 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 09:33:18 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:33:18.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 09:33:18 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:33:18 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:33:18 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:33:18.473 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:33:18 compute-1 ceph-mon[80009]: pgmap v182: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 426 B/s wr, 2 op/s
Nov 24 09:33:19 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:33:19 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea800016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:33:19 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:33:19 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea84001f60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:33:19 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:33:19 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7feaa40095a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:33:19 compute-1 sudo[103537]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 09:33:19 compute-1 sudo[103537]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:33:19 compute-1 sudo[103537]: pam_unix(sudo:session): session closed for user root
Nov 24 09:33:20 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:33:20 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:33:20 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:33:20.143 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:33:20 compute-1 systemd[83435]: Created slice User Background Tasks Slice.
Nov 24 09:33:20 compute-1 systemd[83435]: Starting Cleanup of User's Temporary Files and Directories...
Nov 24 09:33:20 compute-1 systemd[83435]: Finished Cleanup of User's Temporary Files and Directories.
Nov 24 09:33:20 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:33:20 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:33:20 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:33:20.476 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:33:20 compute-1 ceph-mon[80009]: pgmap v183: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Nov 24 09:33:21 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:33:21 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea98002520 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:33:21 compute-1 sshd-session[103564]: Accepted publickey for zuul from 192.168.122.30 port 44354 ssh2: ECDSA SHA256:MeSde0OmmlmFVnLWx/OKNxgeUUFhxUB3MA0eUyH5QEE
Nov 24 09:33:21 compute-1 systemd-logind[823]: New session 42 of user zuul.
Nov 24 09:33:21 compute-1 systemd[1]: Started Session 42 of User zuul.
Nov 24 09:33:21 compute-1 sshd-session[103564]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 24 09:33:21 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:33:21 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea800016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:33:21 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:33:21 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea84003340 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:33:21 compute-1 sudo[103717]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ulyluajbxnxkemxcarptriauslxegkek ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976801.3919878-27-88030508886881/AnsiballZ_file.py'
Nov 24 09:33:21 compute-1 sudo[103717]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:33:22 compute-1 python3.9[103720]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:33:22 compute-1 sudo[103717]: pam_unix(sudo:session): session closed for user root
Nov 24 09:33:22 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:33:22 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 09:33:22 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:33:22.146 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 09:33:22 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:33:22 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:33:22 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:33:22.479 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:33:22 compute-1 sudo[103870]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mwjiahiknnrzvuxjkfyujmqfwkycvcrr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976802.2627285-63-235421741175245/AnsiballZ_stat.py'
Nov 24 09:33:22 compute-1 sudo[103870]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:33:22 compute-1 ceph-mon[80009]: pgmap v184: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Nov 24 09:33:22 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:33:22 compute-1 python3.9[103872]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/ceph-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 09:33:22 compute-1 sudo[103870]: pam_unix(sudo:session): session closed for user root
Nov 24 09:33:23 compute-1 sudo[103948]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-udabesyruscvbqxhenrbqspafoidlrko ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976802.2627285-63-235421741175245/AnsiballZ_file.py'
Nov 24 09:33:23 compute-1 sudo[103948]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:33:23 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:33:23 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea84003340 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:33:23 compute-1 python3.9[103950]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/ceph-networks.yaml _original_basename=firewall.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/ceph-networks.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:33:23 compute-1 sudo[103948]: pam_unix(sudo:session): session closed for user root
Nov 24 09:33:23 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:33:23 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea98002520 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:33:23 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:33:23 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea800036e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:33:23 compute-1 sshd-session[103567]: Connection closed by 192.168.122.30 port 44354
Nov 24 09:33:23 compute-1 sshd-session[103564]: pam_unix(sshd:session): session closed for user zuul
Nov 24 09:33:23 compute-1 systemd[1]: session-42.scope: Deactivated successfully.
Nov 24 09:33:23 compute-1 systemd[1]: session-42.scope: Consumed 1.376s CPU time.
Nov 24 09:33:23 compute-1 systemd-logind[823]: Session 42 logged out. Waiting for processes to exit.
Nov 24 09:33:23 compute-1 systemd-logind[823]: Removed session 42.
Nov 24 09:33:24 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:33:24 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:33:24 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:33:24.149 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:33:24 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:33:24 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 09:33:24 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:33:24.483 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 09:33:24 compute-1 ceph-mon[80009]: pgmap v185: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:33:25 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:33:25 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea84003340 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:33:25 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:33:25 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea84003340 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:33:25 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:33:25 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea98002520 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:33:26 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:33:26 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 09:33:26 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:33:26.151 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 09:33:26 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:33:26 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:33:26 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:33:26.487 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:33:26 compute-1 ceph-mon[80009]: pgmap v186: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:33:27 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:33:27 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea800036e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:33:27 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:33:27 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea84003340 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:33:27 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:33:27 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7feaa400a060 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:33:27 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:33:28 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:33:28 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:33:28 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:33:28.154 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:33:28 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:33:28 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:33:28 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:33:28.491 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:33:28 compute-1 ceph-mon[80009]: pgmap v187: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Nov 24 09:33:29 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:33:29 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea98002520 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:33:29 compute-1 sshd-session[103979]: Accepted publickey for zuul from 192.168.122.30 port 53002 ssh2: ECDSA SHA256:MeSde0OmmlmFVnLWx/OKNxgeUUFhxUB3MA0eUyH5QEE
Nov 24 09:33:29 compute-1 systemd-logind[823]: New session 43 of user zuul.
Nov 24 09:33:29 compute-1 systemd[1]: Started Session 43 of User zuul.
Nov 24 09:33:29 compute-1 sshd-session[103979]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 24 09:33:29 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:33:29 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea800036e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:33:29 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:33:29 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea84003340 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:33:30 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:33:30 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:33:30 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:33:30.156 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:33:30 compute-1 python3.9[104133]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 24 09:33:30 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 09:33:30 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:33:30 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:33:30 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:33:30 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:33:30.494 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:33:30 compute-1 ceph-mon[80009]: pgmap v188: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:33:30 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:33:31 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:33:31 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7feaa400aa30 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:33:31 compute-1 sudo[104287]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ponozxvwhykbpokmiftvykkgaihiemrb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976810.9218059-60-18269873487389/AnsiballZ_file.py'
Nov 24 09:33:31 compute-1 sudo[104287]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:33:31 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:33:31 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea98002520 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:33:31 compute-1 python3.9[104289]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:33:31 compute-1 sudo[104287]: pam_unix(sudo:session): session closed for user root
Nov 24 09:33:31 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:33:31 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea80003880 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:33:32 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:33:32 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:33:32 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:33:32.159 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:33:32 compute-1 sudo[104463]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hgzxiwhfbprbvgmrufsbrmschxxabhjm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976811.813535-84-169899845237736/AnsiballZ_stat.py'
Nov 24 09:33:32 compute-1 sudo[104463]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:33:32 compute-1 python3.9[104465]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 09:33:32 compute-1 sudo[104463]: pam_unix(sudo:session): session closed for user root
Nov 24 09:33:32 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:33:32 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:33:32 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:33:32.497 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:33:32 compute-1 ceph-mon[80009]: pgmap v189: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Nov 24 09:33:32 compute-1 sudo[104541]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xdjsnexstpztglqbmywejzsushjixvox ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976811.813535-84-169899845237736/AnsiballZ_file.py'
Nov 24 09:33:32 compute-1 sudo[104541]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:33:32 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:33:32 compute-1 python3.9[104543]: ansible-ansible.legacy.file Invoked with group=zuul mode=0660 owner=zuul dest=/root/.config/containers/auth.json _original_basename=.8adpmnha recurse=False state=file path=/root/.config/containers/auth.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:33:32 compute-1 sudo[104541]: pam_unix(sudo:session): session closed for user root
Nov 24 09:33:33 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:33:33 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea84003340 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:33:33 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:33:33 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7feaa400aa30 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:33:33 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:33:33 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea98002520 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:33:34 compute-1 sudo[104694]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-obcqfobzcqodmbttmqwgtfubhorqchxf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976813.7865946-144-152075715236449/AnsiballZ_stat.py'
Nov 24 09:33:34 compute-1 sudo[104694]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:33:34 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:33:34 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:33:34 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:33:34.162 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:33:34 compute-1 python3.9[104696]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 09:33:34 compute-1 sudo[104694]: pam_unix(sudo:session): session closed for user root
Nov 24 09:33:34 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:33:34 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:33:34 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:33:34.502 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:33:34 compute-1 sudo[104772]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cyxdjhrmljmzlspkjqsioovasbnmhwev ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976813.7865946-144-152075715236449/AnsiballZ_file.py'
Nov 24 09:33:34 compute-1 sudo[104772]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:33:34 compute-1 python3.9[104774]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/sysconfig/podman_drop_in _original_basename=.s94i7n75 recurse=False state=file path=/etc/sysconfig/podman_drop_in force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:33:34 compute-1 sudo[104772]: pam_unix(sudo:session): session closed for user root
Nov 24 09:33:34 compute-1 ceph-mon[80009]: pgmap v190: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:33:35 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:33:35 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea80003880 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:33:35 compute-1 sudo[104924]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nogizwmatggjdaoioxyysyvvbdshmatm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976815.0536165-183-185765879607221/AnsiballZ_file.py'
Nov 24 09:33:35 compute-1 sudo[104924]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:33:35 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:33:35 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea84003340 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:33:35 compute-1 python3.9[104926]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 24 09:33:35 compute-1 sudo[104924]: pam_unix(sudo:session): session closed for user root
Nov 24 09:33:35 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:33:35 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea84003340 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:33:36 compute-1 sudo[105077]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gyidamrgprhnbnyoersguwaessoqlzxv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976815.845649-207-49962302113522/AnsiballZ_stat.py'
Nov 24 09:33:36 compute-1 sudo[105077]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:33:36 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:33:36 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:33:36 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:33:36.166 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:33:36 compute-1 python3.9[105079]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 09:33:36 compute-1 sudo[105077]: pam_unix(sudo:session): session closed for user root
Nov 24 09:33:36 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:33:36 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:33:36 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:33:36.505 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:33:36 compute-1 sudo[105155]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nndnzzgqxaqlyyainvdusichgomsmoed ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976815.845649-207-49962302113522/AnsiballZ_file.py'
Nov 24 09:33:36 compute-1 sudo[105155]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:33:36 compute-1 python3.9[105157]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 09:33:36 compute-1 sudo[105155]: pam_unix(sudo:session): session closed for user root
Nov 24 09:33:36 compute-1 ceph-mon[80009]: pgmap v191: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:33:37 compute-1 sudo[105307]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xfbaodigcakmfbibiiiumnlwjisswrej ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976816.9361582-207-228133790996257/AnsiballZ_stat.py'
Nov 24 09:33:37 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:33:37 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea84003340 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:33:37 compute-1 sudo[105307]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:33:37 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:33:37 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea80003880 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:33:37 compute-1 python3.9[105309]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 09:33:37 compute-1 sudo[105307]: pam_unix(sudo:session): session closed for user root
Nov 24 09:33:37 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:33:37 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7feaa400aa30 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:33:37 compute-1 sudo[105385]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tamhdzathzxrmfqcwgahinzmsndiwgyu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976816.9361582-207-228133790996257/AnsiballZ_file.py'
Nov 24 09:33:37 compute-1 sudo[105385]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:33:37 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:33:37 compute-1 python3.9[105387]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 09:33:38 compute-1 sudo[105385]: pam_unix(sudo:session): session closed for user root
Nov 24 09:33:38 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:33:38 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:33:38 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:33:38.168 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:33:38 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:33:38 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:33:38 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:33:38.508 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:33:38 compute-1 sudo[105538]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-thtraricbttycauwdhfgcaicbcrokudk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976818.2785437-276-175371268635330/AnsiballZ_file.py'
Nov 24 09:33:38 compute-1 sudo[105538]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:33:38 compute-1 python3.9[105540]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:33:38 compute-1 sudo[105538]: pam_unix(sudo:session): session closed for user root
Nov 24 09:33:38 compute-1 ceph-mon[80009]: pgmap v192: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Nov 24 09:33:39 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:33:39 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea98002520 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:33:39 compute-1 sudo[105692]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zvkicpvfrcbamiqkdqhbbgwkctdapgrd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976819.0668871-300-98026000068893/AnsiballZ_stat.py'
Nov 24 09:33:39 compute-1 sudo[105692]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:33:39 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:33:39 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea84003340 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:33:39 compute-1 sudo[105695]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 09:33:39 compute-1 sudo[105695]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:33:39 compute-1 sudo[105695]: pam_unix(sudo:session): session closed for user root
Nov 24 09:33:39 compute-1 sudo[105720]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Nov 24 09:33:39 compute-1 sudo[105720]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:33:39 compute-1 python3.9[105694]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 09:33:39 compute-1 sudo[105692]: pam_unix(sudo:session): session closed for user root
Nov 24 09:33:39 compute-1 sudo[105834]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gxwtqmhrcztrvmyrzovvarggapjqvbsm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976819.0668871-300-98026000068893/AnsiballZ_file.py'
Nov 24 09:33:39 compute-1 sudo[105834]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:33:39 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:33:39 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea98002520 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:33:39 compute-1 sudo[105842]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 09:33:39 compute-1 sudo[105842]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:33:39 compute-1 sudo[105842]: pam_unix(sudo:session): session closed for user root
Nov 24 09:33:39 compute-1 python3.9[105838]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:33:39 compute-1 sudo[105720]: pam_unix(sudo:session): session closed for user root
Nov 24 09:33:39 compute-1 sudo[105834]: pam_unix(sudo:session): session closed for user root
Nov 24 09:33:40 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 24 09:33:40 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 09:33:40 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Nov 24 09:33:40 compute-1 ceph-mon[80009]: log_channel(audit) log [INF] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 09:33:40 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Nov 24 09:33:40 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Nov 24 09:33:40 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Nov 24 09:33:40 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 09:33:40 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Nov 24 09:33:40 compute-1 ceph-mon[80009]: log_channel(audit) log [INF] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 09:33:40 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 24 09:33:40 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 09:33:40 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:33:40 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:33:40 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:33:40.172 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:33:40 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:33:40 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 09:33:40 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:33:40.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 09:33:40 compute-1 sudo[106029]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ybrtbcudwffuwlbsjcrwtsmnrivlknkb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976820.3423405-336-133727246784960/AnsiballZ_stat.py'
Nov 24 09:33:40 compute-1 sudo[106029]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:33:40 compute-1 python3.9[106031]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 09:33:40 compute-1 ceph-mon[80009]: pgmap v193: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:33:40 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 09:33:40 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 09:33:40 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:33:40 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:33:40 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 09:33:40 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 09:33:40 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 09:33:40 compute-1 sudo[106029]: pam_unix(sudo:session): session closed for user root
Nov 24 09:33:41 compute-1 sudo[106107]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-igorymcidusootvmyakgtbpchaefvwsu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976820.3423405-336-133727246784960/AnsiballZ_file.py'
Nov 24 09:33:41 compute-1 sudo[106107]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:33:41 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:33:41 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea68000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:33:41 compute-1 python3.9[106109]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:33:41 compute-1 sudo[106107]: pam_unix(sudo:session): session closed for user root
Nov 24 09:33:41 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:33:41 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea70000d00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:33:41 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:33:41 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea84003340 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:33:42 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:33:42 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:33:42 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:33:42.173 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:33:42 compute-1 sudo[106260]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ztvrxqubketkncpxuotgxziuyzmjfvlk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976821.6255856-372-260667279326409/AnsiballZ_systemd.py'
Nov 24 09:33:42 compute-1 sudo[106260]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:33:42 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:33:42 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:33:42 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:33:42.520 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:33:42 compute-1 python3.9[106262]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 24 09:33:42 compute-1 systemd[1]: Reloading.
Nov 24 09:33:42 compute-1 systemd-rc-local-generator[106291]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 09:33:42 compute-1 systemd-sysv-generator[106294]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 09:33:42 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:33:42 compute-1 ceph-mon[80009]: pgmap v194: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Nov 24 09:33:43 compute-1 sudo[106260]: pam_unix(sudo:session): session closed for user root
Nov 24 09:33:43 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:33:43 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea98002520 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:33:43 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:33:43 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea680016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:33:43 compute-1 sudo[106450]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iujnhfayuujpkazaextqweprykvtfsza ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976823.341891-396-102584814279533/AnsiballZ_stat.py'
Nov 24 09:33:43 compute-1 sudo[106450]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:33:43 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:33:43 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea70001820 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:33:43 compute-1 python3.9[106452]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 09:33:43 compute-1 sudo[106450]: pam_unix(sudo:session): session closed for user root
Nov 24 09:33:44 compute-1 sudo[106529]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bubpnjxrehrkxwdvpzvfxasdyqixbvnw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976823.341891-396-102584814279533/AnsiballZ_file.py'
Nov 24 09:33:44 compute-1 sudo[106529]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:33:44 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:33:44 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:33:44 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:33:44.176 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:33:44 compute-1 python3.9[106531]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:33:44 compute-1 sudo[106529]: pam_unix(sudo:session): session closed for user root
Nov 24 09:33:44 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:33:44 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:33:44 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:33:44.524 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:33:44 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 24 09:33:44 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 24 09:33:44 compute-1 sudo[106631]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 24 09:33:44 compute-1 sudo[106631]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:33:44 compute-1 sudo[106631]: pam_unix(sudo:session): session closed for user root
Nov 24 09:33:44 compute-1 ceph-mon[80009]: pgmap v195: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:33:44 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:33:44 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:33:44 compute-1 sudo[106706]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pficyyyfyjcsrsmqpyaffcbzikmiyzfx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976824.6192193-432-192984356505520/AnsiballZ_stat.py'
Nov 24 09:33:44 compute-1 sudo[106706]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:33:45 compute-1 python3.9[106708]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 09:33:45 compute-1 sudo[106706]: pam_unix(sudo:session): session closed for user root
Nov 24 09:33:45 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:33:45 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea84003340 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:33:45 compute-1 sudo[106784]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ddqnizbjyfnkqrerjikodkvloplzalcd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976824.6192193-432-192984356505520/AnsiballZ_file.py'
Nov 24 09:33:45 compute-1 sudo[106784]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:33:45 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:33:45 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea98002520 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:33:45 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 09:33:45 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:33:45 compute-1 python3.9[106786]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:33:45 compute-1 sudo[106784]: pam_unix(sudo:session): session closed for user root
Nov 24 09:33:45 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:33:45 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea98002520 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:33:45 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:33:46 compute-1 sudo[106937]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yavdkmmzowwxefruheoxqsihrwmcikab ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976825.8782609-468-239795560823085/AnsiballZ_systemd.py'
Nov 24 09:33:46 compute-1 sudo[106937]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:33:46 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:33:46 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:33:46 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:33:46.179 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:33:46 compute-1 python3.9[106939]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 24 09:33:46 compute-1 systemd[1]: Reloading.
Nov 24 09:33:46 compute-1 systemd-rc-local-generator[106965]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 09:33:46 compute-1 systemd-sysv-generator[106968]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 09:33:46 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:33:46 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:33:46 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:33:46.527 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:33:46 compute-1 systemd[1]: Starting Create netns directory...
Nov 24 09:33:46 compute-1 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Nov 24 09:33:46 compute-1 systemd[1]: netns-placeholder.service: Deactivated successfully.
Nov 24 09:33:46 compute-1 systemd[1]: Finished Create netns directory.
Nov 24 09:33:46 compute-1 sudo[106937]: pam_unix(sudo:session): session closed for user root
Nov 24 09:33:46 compute-1 ceph-mon[80009]: pgmap v196: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:33:47 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:33:47 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea680016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:33:47 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:33:47 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea84003340 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:33:47 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:33:47 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea70002140 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:33:47 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:33:47 compute-1 python3.9[107131]: ansible-ansible.builtin.service_facts Invoked
Nov 24 09:33:47 compute-1 network[107149]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 24 09:33:47 compute-1 network[107150]: 'network-scripts' will be removed from distribution in near future.
Nov 24 09:33:47 compute-1 network[107151]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 24 09:33:48 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:33:48 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:33:48 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:33:48.182 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:33:48 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:33:48 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:33:48 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:33:48.529 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:33:48 compute-1 ceph-mon[80009]: pgmap v197: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Nov 24 09:33:49 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:33:49 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea98002520 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:33:49 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:33:49 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea680016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:33:49 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:33:49 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea84004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:33:50 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:33:50 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:33:50 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:33:50.184 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:33:50 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:33:50 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:33:50 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:33:50.533 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:33:50 compute-1 ceph-mon[80009]: pgmap v198: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:33:51 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:33:51 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea70002140 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:33:51 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:33:51 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea98002520 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:33:51 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:33:51 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea68002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:33:52 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:33:52 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:33:52 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:33:52.188 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:33:52 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:33:52 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:33:52 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:33:52.537 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:33:52 compute-1 sudo[107413]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ldtndlkxqqgndavpsarcnfwqpjjrwero ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976832.2556283-546-112092295462364/AnsiballZ_stat.py'
Nov 24 09:33:52 compute-1 sudo[107413]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:33:52 compute-1 python3.9[107415]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 09:33:52 compute-1 sudo[107413]: pam_unix(sudo:session): session closed for user root
Nov 24 09:33:52 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:33:52 compute-1 ceph-mon[80009]: pgmap v199: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Nov 24 09:33:53 compute-1 sudo[107491]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hsugjaelxjqzfewjzcgpzntgthcdgzid ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976832.2556283-546-112092295462364/AnsiballZ_file.py'
Nov 24 09:33:53 compute-1 sudo[107491]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:33:53 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:33:53 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea84004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:33:53 compute-1 python3.9[107493]: ansible-ansible.legacy.file Invoked with mode=0600 dest=/etc/ssh/sshd_config _original_basename=sshd_config_block.j2 recurse=False state=file path=/etc/ssh/sshd_config force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:33:53 compute-1 sudo[107491]: pam_unix(sudo:session): session closed for user root
Nov 24 09:33:53 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:33:53 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea70002140 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:33:53 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:33:53 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea70002140 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:33:53 compute-1 sudo[107644]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dyksrynolwvhfymaodrsbndveaogdjbj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976833.6347442-585-241120095111054/AnsiballZ_file.py'
Nov 24 09:33:53 compute-1 sudo[107644]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:33:54 compute-1 python3.9[107646]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:33:54 compute-1 sudo[107644]: pam_unix(sudo:session): session closed for user root
Nov 24 09:33:54 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:33:54 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:33:54 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:33:54.191 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:33:54 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:33:54 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:33:54 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:33:54.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:33:54 compute-1 sudo[107796]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fqllqkklkqsntrazszlkxftimlfggwkv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976834.307681-609-120631409849953/AnsiballZ_stat.py'
Nov 24 09:33:54 compute-1 sudo[107796]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:33:54 compute-1 python3.9[107798]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/sshd-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 09:33:54 compute-1 sudo[107796]: pam_unix(sudo:session): session closed for user root
Nov 24 09:33:54 compute-1 ceph-mon[80009]: pgmap v200: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:33:54 compute-1 sudo[107874]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fpthmucfgrivmxdwpslvzqfquqimkocw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976834.307681-609-120631409849953/AnsiballZ_file.py'
Nov 24 09:33:54 compute-1 sudo[107874]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:33:55 compute-1 python3.9[107876]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/var/lib/edpm-config/firewall/sshd-networks.yaml _original_basename=firewall.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/sshd-networks.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:33:55 compute-1 sudo[107874]: pam_unix(sudo:session): session closed for user root
Nov 24 09:33:55 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:33:55 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea70002140 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:33:55 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:33:55 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea84004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:33:55 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:33:55 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea98002520 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:33:56 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:33:56 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:33:56 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:33:56.193 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:33:56 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:33:56 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:33:56 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:33:56.544 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:33:57 compute-1 sudo[108027]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jwdhvmhjgrqyzbphjsjbjzkontmrcrfs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976836.6156049-654-222807591648626/AnsiballZ_timezone.py'
Nov 24 09:33:57 compute-1 sudo[108027]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:33:57 compute-1 ceph-mon[80009]: pgmap v201: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:33:57 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:33:57 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea70002140 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:33:57 compute-1 python3.9[108029]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Nov 24 09:33:57 compute-1 systemd[1]: Starting Time & Date Service...
Nov 24 09:33:57 compute-1 systemd[1]: Started Time & Date Service.
Nov 24 09:33:57 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:33:57 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea70002140 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:33:57 compute-1 sudo[108027]: pam_unix(sudo:session): session closed for user root
Nov 24 09:33:57 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:33:57 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea84004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:33:57 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:33:58 compute-1 sudo[108184]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zbfzqyvqrrkpppqyetxdtdwrhimebmgq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976837.8248067-681-192418555010977/AnsiballZ_file.py'
Nov 24 09:33:58 compute-1 sudo[108184]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:33:58 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:33:58 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:33:58 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:33:58.195 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:33:58 compute-1 python3.9[108186]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:33:58 compute-1 sudo[108184]: pam_unix(sudo:session): session closed for user root
Nov 24 09:33:58 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:33:58 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:33:58 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:33:58.547 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:33:58 compute-1 sudo[108336]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kexuxyoyzftmiauerpffjyseofpuxqmj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976838.6086893-705-255054090687213/AnsiballZ_stat.py'
Nov 24 09:33:58 compute-1 sudo[108336]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:33:59 compute-1 python3.9[108338]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 09:33:59 compute-1 sudo[108336]: pam_unix(sudo:session): session closed for user root
Nov 24 09:33:59 compute-1 ceph-mon[80009]: pgmap v202: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Nov 24 09:33:59 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:33:59 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea98002520 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:33:59 compute-1 sudo[108414]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vxrzjdtsrkivghsazqmuyamzgqlfyvgs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976838.6086893-705-255054090687213/AnsiballZ_file.py'
Nov 24 09:33:59 compute-1 sudo[108414]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:33:59 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:33:59 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea70002140 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:33:59 compute-1 python3.9[108416]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:33:59 compute-1 sudo[108414]: pam_unix(sudo:session): session closed for user root
Nov 24 09:33:59 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:33:59 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea68003430 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:34:00 compute-1 sudo[108488]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 09:34:00 compute-1 sudo[108488]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:34:00 compute-1 sudo[108488]: pam_unix(sudo:session): session closed for user root
Nov 24 09:34:00 compute-1 ceph-mon[80009]: pgmap v203: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:34:00 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:34:00 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:34:00 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:34:00.199 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:34:00 compute-1 sudo[108592]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lgtrofzzejrzkdiermkfzyqdiwwkxtgt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976839.930571-742-6975605956486/AnsiballZ_stat.py'
Nov 24 09:34:00 compute-1 sudo[108592]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:34:00 compute-1 python3.9[108594]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 09:34:00 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 09:34:00 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:34:00 compute-1 sudo[108592]: pam_unix(sudo:session): session closed for user root
Nov 24 09:34:00 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:34:00 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:34:00 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:34:00.550 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:34:00 compute-1 sudo[108670]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jxzsfckotxtljcwhmknvbqkxoqukjoyt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976839.930571-742-6975605956486/AnsiballZ_file.py'
Nov 24 09:34:00 compute-1 sudo[108670]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:34:00 compute-1 python3.9[108672]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.0kt2xv6y recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:34:00 compute-1 sudo[108670]: pam_unix(sudo:session): session closed for user root
Nov 24 09:34:01 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:34:01 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea84004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:34:01 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:34:01 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:34:01 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea98002520 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:34:01 compute-1 sudo[108822]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fjorzgneuwycpwqwyscltuhuivtlfjvr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976841.3215072-777-73566663145971/AnsiballZ_stat.py'
Nov 24 09:34:01 compute-1 sudo[108822]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:34:01 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:34:01 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea70003c70 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:34:01 compute-1 python3.9[108824]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 09:34:01 compute-1 sudo[108822]: pam_unix(sudo:session): session closed for user root
Nov 24 09:34:02 compute-1 sudo[108901]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mbleuciroeuoxxvhvkeavwmmlszrnjkn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976841.3215072-777-73566663145971/AnsiballZ_file.py'
Nov 24 09:34:02 compute-1 sudo[108901]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:34:02 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:34:02 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:34:02 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:34:02.200 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:34:02 compute-1 ceph-mon[80009]: pgmap v204: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Nov 24 09:34:02 compute-1 python3.9[108903]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:34:02 compute-1 sudo[108901]: pam_unix(sudo:session): session closed for user root
Nov 24 09:34:02 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:34:02 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:34:02 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:34:02.555 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:34:02 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:34:02 compute-1 sudo[109053]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jxywdulwyvupayrwfrcrwezqggwasutc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976842.6374476-817-181310084109489/AnsiballZ_command.py'
Nov 24 09:34:03 compute-1 sudo[109053]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:34:03 compute-1 python3.9[109055]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 09:34:03 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:34:03 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea68003430 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:34:03 compute-1 sudo[109053]: pam_unix(sudo:session): session closed for user root
Nov 24 09:34:03 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:34:03 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea84004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:34:03 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:34:03 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea98002520 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:34:03 compute-1 sudo[109207]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qvfeioxhoaqlvytifdvbqkugtsrqudrn ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1763976843.5048265-840-200146187830589/AnsiballZ_edpm_nftables_from_files.py'
Nov 24 09:34:03 compute-1 sudo[109207]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:34:04 compute-1 python3[109209]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Nov 24 09:34:04 compute-1 sudo[109207]: pam_unix(sudo:session): session closed for user root
Nov 24 09:34:04 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:34:04 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:34:04 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:34:04.203 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:34:04 compute-1 ceph-mon[80009]: pgmap v205: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:34:04 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:34:04 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:34:04 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:34:04.558 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:34:04 compute-1 sudo[109359]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mzalshxnbfcxzrygebtmguovhqaupgjk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976844.3594718-864-144788975182756/AnsiballZ_stat.py'
Nov 24 09:34:04 compute-1 sudo[109359]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:34:04 compute-1 python3.9[109361]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 09:34:04 compute-1 sudo[109359]: pam_unix(sudo:session): session closed for user root
Nov 24 09:34:05 compute-1 sudo[109437]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-egumwdmncilbyyuqanvbtfqvjzebnwcm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976844.3594718-864-144788975182756/AnsiballZ_file.py'
Nov 24 09:34:05 compute-1 sudo[109437]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:34:05 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:34:05 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea70003c70 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:34:05 compute-1 python3.9[109439]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:34:05 compute-1 sudo[109437]: pam_unix(sudo:session): session closed for user root
Nov 24 09:34:05 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:34:05 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea68004140 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:34:05 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:34:05 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea84004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:34:06 compute-1 sudo[109590]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pbplndgqlucergzvftxmnhjimbnefvlt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976845.6986334-900-46183436811291/AnsiballZ_stat.py'
Nov 24 09:34:06 compute-1 sudo[109590]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:34:06 compute-1 python3.9[109592]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 09:34:06 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:34:06 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:34:06 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:34:06.205 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:34:06 compute-1 sudo[109590]: pam_unix(sudo:session): session closed for user root
Nov 24 09:34:06 compute-1 sudo[109668]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pnsdhczyfelctisnpkmicnrxknhelhrj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976845.6986334-900-46183436811291/AnsiballZ_file.py'
Nov 24 09:34:06 compute-1 sudo[109668]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:34:06 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:34:06 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:34:06 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:34:06.560 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:34:06 compute-1 ceph-mon[80009]: pgmap v206: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:34:06 compute-1 python3.9[109670]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-update-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-update-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:34:06 compute-1 sudo[109668]: pam_unix(sudo:session): session closed for user root
Nov 24 09:34:07 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:34:07 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea98002520 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:34:07 compute-1 sudo[109820]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sopkantnnnxzowbftlkmdfwyymeeeqwd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976847.040379-936-252478796738698/AnsiballZ_stat.py'
Nov 24 09:34:07 compute-1 sudo[109820]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:34:07 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:34:07 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea70003c70 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:34:07 compute-1 python3.9[109822]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 09:34:07 compute-1 sudo[109820]: pam_unix(sudo:session): session closed for user root
Nov 24 09:34:07 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:34:07 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea68004140 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:34:07 compute-1 sudo[109898]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tggsixkvlqdjukgojwqpuucytcfaunom ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976847.040379-936-252478796738698/AnsiballZ_file.py'
Nov 24 09:34:07 compute-1 sudo[109898]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:34:07 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:34:08 compute-1 python3.9[109900]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:34:08 compute-1 sudo[109898]: pam_unix(sudo:session): session closed for user root
Nov 24 09:34:08 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:34:08 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:34:08 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:34:08.208 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:34:08 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:34:08 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:34:08 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:34:08.563 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:34:08 compute-1 sudo[110051]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-znkipyivjmlcjomuqarkgezomtgzyiad ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976848.3108616-972-252521360308287/AnsiballZ_stat.py'
Nov 24 09:34:08 compute-1 sudo[110051]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:34:08 compute-1 ceph-mon[80009]: pgmap v207: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Nov 24 09:34:08 compute-1 python3.9[110053]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 09:34:08 compute-1 sudo[110051]: pam_unix(sudo:session): session closed for user root
Nov 24 09:34:08 compute-1 sudo[110129]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jiktvujmwslypwuufkxcqqodgfbkadmj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976848.3108616-972-252521360308287/AnsiballZ_file.py'
Nov 24 09:34:08 compute-1 sudo[110129]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:34:09 compute-1 python3.9[110131]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:34:09 compute-1 sudo[110129]: pam_unix(sudo:session): session closed for user root
Nov 24 09:34:09 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:34:09 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea84004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:34:09 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:34:09 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea98002520 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:34:09 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:34:09 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea80001090 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:34:09 compute-1 sudo[110283]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ydshevefhuwjkapstezkaxikbpbwkuga ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976849.5457332-1008-90476290475352/AnsiballZ_stat.py'
Nov 24 09:34:09 compute-1 sudo[110283]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:34:10 compute-1 python3.9[110285]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 09:34:10 compute-1 sudo[110283]: pam_unix(sudo:session): session closed for user root
Nov 24 09:34:10 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:34:10 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 09:34:10 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:34:10.211 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 09:34:10 compute-1 sudo[110361]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aqmdcnsapqycuafkcymlmtzukiqsuyun ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976849.5457332-1008-90476290475352/AnsiballZ_file.py'
Nov 24 09:34:10 compute-1 sudo[110361]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:34:10 compute-1 python3.9[110363]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-rules.nft _original_basename=ruleset.j2 recurse=False state=file path=/etc/nftables/edpm-rules.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:34:10 compute-1 sudo[110361]: pam_unix(sudo:session): session closed for user root
Nov 24 09:34:10 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:34:10 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:34:10 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:34:10.566 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:34:10 compute-1 ceph-mon[80009]: pgmap v208: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:34:11 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:34:11 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea68004140 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:34:11 compute-1 sudo[110514]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-glcumsswovibtvxpatcfzrpdrobmshsn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976850.9702604-1047-226376439588031/AnsiballZ_command.py'
Nov 24 09:34:11 compute-1 sudo[110514]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:34:11 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:34:11 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea84004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:34:11 compute-1 python3.9[110516]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 09:34:11 compute-1 sudo[110514]: pam_unix(sudo:session): session closed for user root
Nov 24 09:34:11 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:34:11 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7feaa4002010 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:34:12 compute-1 sudo[110670]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-owkuygfpoqgoqdmrwnlunpaxcytxwsoh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976851.7752542-1071-198093717504386/AnsiballZ_blockinfile.py'
Nov 24 09:34:12 compute-1 sudo[110670]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:34:12 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:34:12 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:34:12 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:34:12.214 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:34:12 compute-1 python3.9[110672]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                             include "/etc/nftables/edpm-chains.nft"
                                             include "/etc/nftables/edpm-rules.nft"
                                             include "/etc/nftables/edpm-jumps.nft"
                                              path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:34:12 compute-1 sudo[110670]: pam_unix(sudo:session): session closed for user root
Nov 24 09:34:12 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:34:12 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:34:12 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:34:12.570 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:34:12 compute-1 ceph-mon[80009]: pgmap v209: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Nov 24 09:34:12 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:34:13 compute-1 sudo[110822]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uczubaeymqzvoeazkbpbjtmklaszxfgg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976852.7249508-1098-256924327521044/AnsiballZ_file.py'
Nov 24 09:34:13 compute-1 sudo[110822]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:34:13 compute-1 python3.9[110824]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages1G state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:34:13 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:34:13 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea80001090 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:34:13 compute-1 sudo[110822]: pam_unix(sudo:session): session closed for user root
Nov 24 09:34:13 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:34:13 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea68004140 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:34:13 compute-1 sudo[110974]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tizglxbolcezzvovzidtlwnlgznbvddn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976853.352062-1098-197986980718278/AnsiballZ_file.py'
Nov 24 09:34:13 compute-1 sudo[110974]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:34:13 compute-1 python3.9[110976]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages2M state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:34:13 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:34:13 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea84004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:34:13 compute-1 sudo[110974]: pam_unix(sudo:session): session closed for user root
Nov 24 09:34:14 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:34:14 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:34:14 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:34:14.216 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:34:14 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:34:14 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:34:14 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:34:14.573 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:34:14 compute-1 sudo[111127]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mfpkqnumrismzdlxcwbcqokmmglnplzk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976854.3171918-1143-178265682043896/AnsiballZ_mount.py'
Nov 24 09:34:14 compute-1 sudo[111127]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:34:14 compute-1 ceph-mon[80009]: pgmap v210: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:34:14 compute-1 python3.9[111129]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=1G path=/dev/hugepages1G src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Nov 24 09:34:14 compute-1 sudo[111127]: pam_unix(sudo:session): session closed for user root
Nov 24 09:34:15 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:34:15 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7feaa4002010 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:34:15 compute-1 sudo[111279]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xnciiiefuuoiweeenjcabqyiepniihia ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976855.086704-1143-139912577875703/AnsiballZ_mount.py'
Nov 24 09:34:15 compute-1 sudo[111279]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:34:15 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:34:15 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea80001fc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:34:15 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 09:34:15 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:34:15 compute-1 python3.9[111281]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=2M path=/dev/hugepages2M src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Nov 24 09:34:15 compute-1 sudo[111279]: pam_unix(sudo:session): session closed for user root
Nov 24 09:34:15 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:34:15 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea68004140 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:34:15 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:34:16 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:34:16 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:34:16 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:34:16.220 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:34:16 compute-1 sshd-session[103982]: Connection closed by 192.168.122.30 port 53002
Nov 24 09:34:16 compute-1 sshd-session[103979]: pam_unix(sshd:session): session closed for user zuul
Nov 24 09:34:16 compute-1 systemd[1]: session-43.scope: Deactivated successfully.
Nov 24 09:34:16 compute-1 systemd[1]: session-43.scope: Consumed 27.714s CPU time.
Nov 24 09:34:16 compute-1 systemd-logind[823]: Session 43 logged out. Waiting for processes to exit.
Nov 24 09:34:16 compute-1 systemd-logind[823]: Removed session 43.
Nov 24 09:34:16 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:34:16 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:34:16 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:34:16.577 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:34:16 compute-1 ceph-mon[80009]: pgmap v211: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:34:17 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:34:17 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea84004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:34:17 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:34:17 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7feaa4002010 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:34:17 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:34:17 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea80001fc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:34:17 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:34:18 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:34:18 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:34:18 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:34:18.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:34:18 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:34:18 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:34:18 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:34:18.580 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:34:18 compute-1 ceph-mon[80009]: pgmap v212: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Nov 24 09:34:19 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:34:19 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea68004140 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:34:19 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:34:19 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea84004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:34:19 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:34:19 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7feaa40089d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:34:20 compute-1 sudo[111309]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 09:34:20 compute-1 sudo[111309]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:34:20 compute-1 sudo[111309]: pam_unix(sudo:session): session closed for user root
Nov 24 09:34:20 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:34:20 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:34:20 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:34:20.225 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:34:20 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:34:20 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:34:20 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:34:20.584 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:34:20 compute-1 ceph-mon[80009]: pgmap v213: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:34:21 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:34:21 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea80001fc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:34:21 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:34:21 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea68004140 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:34:21 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:34:21 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea84004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:34:21 compute-1 sshd-session[111334]: Accepted publickey for zuul from 192.168.122.30 port 60146 ssh2: ECDSA SHA256:MeSde0OmmlmFVnLWx/OKNxgeUUFhxUB3MA0eUyH5QEE
Nov 24 09:34:21 compute-1 systemd-logind[823]: New session 44 of user zuul.
Nov 24 09:34:21 compute-1 systemd[1]: Started Session 44 of User zuul.
Nov 24 09:34:21 compute-1 sshd-session[111334]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 24 09:34:22 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:34:22 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 09:34:22 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:34:22.228 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 09:34:22 compute-1 sudo[111488]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tpdamcqfhajmjcebtrmjlneyqdnmnlaw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976862.0425456-19-150920580069908/AnsiballZ_tempfile.py'
Nov 24 09:34:22 compute-1 sudo[111488]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:34:22 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:34:22 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:34:22 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:34:22.588 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:34:22 compute-1 python3.9[111490]: ansible-ansible.builtin.tempfile Invoked with state=file prefix=ansible. suffix= path=None
Nov 24 09:34:22 compute-1 sudo[111488]: pam_unix(sudo:session): session closed for user root
Nov 24 09:34:22 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:34:22 compute-1 ceph-mon[80009]: pgmap v214: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Nov 24 09:34:23 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:34:23 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea84004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:34:23 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:34:23 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea800032f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:34:23 compute-1 sudo[111640]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-scteqiybzsmdlmuksravapuyxuwzhwic ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976862.9946196-55-235627928731682/AnsiballZ_stat.py'
Nov 24 09:34:23 compute-1 sudo[111640]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:34:23 compute-1 python3.9[111642]: ansible-ansible.builtin.stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 09:34:23 compute-1 sudo[111640]: pam_unix(sudo:session): session closed for user root
Nov 24 09:34:23 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:34:23 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea68004140 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:34:24 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:34:24 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:34:24 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:34:24.232 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:34:24 compute-1 sudo[111795]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pfwawunmivyrbxozcnyebfsrmcnascse ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976863.8341587-79-121993762161122/AnsiballZ_slurp.py'
Nov 24 09:34:24 compute-1 sudo[111795]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:34:24 compute-1 python3.9[111797]: ansible-ansible.builtin.slurp Invoked with src=/etc/ssh/ssh_known_hosts
Nov 24 09:34:24 compute-1 sudo[111795]: pam_unix(sudo:session): session closed for user root
Nov 24 09:34:24 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:34:24 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:34:24 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:34:24.591 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:34:24 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-haproxy-nfs-cephfs-compute-1-rsdpvy[85975]: [WARNING] 327/093424 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Nov 24 09:34:24 compute-1 ceph-mon[80009]: pgmap v215: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:34:24 compute-1 sudo[111947]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cveoqdckfdwhsiezkcowxlremaivdvgh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976864.7154546-103-99059411233140/AnsiballZ_stat.py'
Nov 24 09:34:24 compute-1 sudo[111947]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:34:25 compute-1 python3.9[111949]: ansible-ansible.legacy.stat Invoked with path=/tmp/ansible.ihbsxqqe follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 09:34:25 compute-1 sudo[111947]: pam_unix(sudo:session): session closed for user root
Nov 24 09:34:25 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:34:25 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7feaa40089d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:34:25 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:34:25 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea84004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:34:25 compute-1 sudo[112072]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-axywublpxdtehrhfmwtnkxnwhxpztdeq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976864.7154546-103-99059411233140/AnsiballZ_copy.py'
Nov 24 09:34:25 compute-1 sudo[112072]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:34:25 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:34:25 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea800032f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:34:25 compute-1 python3.9[112074]: ansible-ansible.legacy.copy Invoked with dest=/tmp/ansible.ihbsxqqe mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1763976864.7154546-103-99059411233140/.source.ihbsxqqe _original_basename=.hatlwqqf follow=False checksum=f51461b6f6171622d95e6dfd4bfc1927ea303d6e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:34:25 compute-1 sudo[112072]: pam_unix(sudo:session): session closed for user root
Nov 24 09:34:26 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:34:26 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 09:34:26 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:34:26.235 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 09:34:26 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:34:26 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:34:26 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:34:26.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:34:26 compute-1 sudo[112225]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hdgslhtbwulyzgwipjrutmibxcfmpesd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976866.1749935-148-120204754099771/AnsiballZ_setup.py'
Nov 24 09:34:26 compute-1 sudo[112225]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:34:26 compute-1 ceph-mon[80009]: pgmap v216: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:34:27 compute-1 python3.9[112227]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'ssh_host_key_rsa_public', 'ssh_host_key_ed25519_public', 'ssh_host_key_ecdsa_public'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 24 09:34:27 compute-1 sudo[112225]: pam_unix(sudo:session): session closed for user root
Nov 24 09:34:27 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:34:27 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea68004140 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:34:27 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:34:27 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7feaa40089d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:34:27 compute-1 systemd[1]: systemd-timedated.service: Deactivated successfully.
Nov 24 09:34:27 compute-1 sudo[112379]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xgytdmnylceawbgwagzfqdxbgamleqac ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976867.3583584-173-190577421452852/AnsiballZ_blockinfile.py'
Nov 24 09:34:27 compute-1 sudo[112379]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:34:27 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:34:27 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea84004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:34:27 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:34:27 compute-1 python3.9[112381]: ansible-ansible.builtin.blockinfile Invoked with block=compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDnPh2FYKCqB5Rxe2d73LAea+vmvipLFksP43GM8QFNtdkL9UXsBFKIlbvhCArQ0+q5/EXcOy13rEWVabeuzYdek35bvnCWnqrlaoEFqEV7Y7SDrutMHxHvnLthse/1jj4AvtjvQXG0bKruDgtz2CBksRaKWTEHPZHLOYOwWLGogWVazacOPagjlMQ9UdpYvwfqgKnjMpl6sHCvQC7C0kTNvrYrrhUZqReUWyggx/XcC/YJvSYvMW1wNRhYmypPzEXu8QXt0ywHvCucILZcZqBE1/lKAUCLqDEkB/xpMnKiZ/EmDtyv8AP7H231WeEoaU4BziaD2jSd/H6lr2JJwpKBlrGkti8gQpJHtDytAtbVtrLD5fW+1GkobqN/2GXjNnvzuLB36OhT4nysfJ6BPP3sgaaZ2RJSzP5hI3jfFVn/NYjbaRIoo+tOB50PJeIPj6c5uMX+Qcb2V6EOUwogIRhtwN7A1XHh8dQPCUVYCUmNIq1K7NZ3Hxf+BqhVsSj6SK0=
                                             compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAINu5/fR7YXhb91kwrOd7U+mnimdcm+o61ru6zTYmFIZO
                                             compute-0.ctlplane.example.com,192.168.122.100,compute-0* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBJFgzeIWa1Ve+dIxs7Pjz8TnBGpgkm/KAIeb7PoVU+QfPqP68TrTBJjwgq/5DOilENFVsFmr+3WdERS0uMWfxXo=
                                             compute-1.ctlplane.example.com,192.168.122.101,compute-1* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCyBn9mTS8EhHsIKYO0tLgGtKOo5KK33vyjqFzXOs43ZcW8GNKmSQ7DXnq80OCGGkDE9aL5uVEQ82MaYpYE8rZVZGrTF1heqhLe2ModNgcaUA+dBOzScRYEm5JAsj6ajcAc7fiPseazHiC80XQlEo+bwF6XHf/i9t7MHMqQCKdM+qnsEd6JeYe+Zy6X7Web4mN4mbvDaHxjBAdxuR0g0bKoYRjFeeNQyQQ/2Fpsa/i/ZqFVU59TrQ1vm9wLk9wJQd7mBQsdxizekzHGMkE5Ub8VdN43iscVyKKhZWeUOyEK2HASt+n/fHjIsFD65a4GLiHFuJ8DJ4CrWFrwt1RIXLkNFOImjH5kiMO55d/Qogf5F33Mkto3ntPQP/tShtBEDIzc9JCE7vYLFjk/bMSUcK9/u41E8suBkZBHnzXC8+eB6XCoYYNxA+cowaSg5+YCSxL6yON9u34LV+i3jZosNYNivLHjOmOsyGEs/Az6NLkHYzxYCHY042etu9Py2/lONrk=
                                             compute-1.ctlplane.example.com,192.168.122.101,compute-1* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDX1cMQF3siye3qNUS07EBS+iX+poG1/aIqFR51WsltV
                                             compute-1.ctlplane.example.com,192.168.122.101,compute-1* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBFy78zaPxoZwc0f5pE0EdJcb6EwSlQGeMhelmYFBlrBeD2fH3vCrxrTbbmmM9DSQFtIo8sNV7/s7CV9dvbvMOzQ=
                                             compute-2.ctlplane.example.com,192.168.122.102,compute-2* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCYj9G0Ft/Psyl/13EAEebfB7qR7surocLwWTVKKcclTBPrKIFnHkxuGFUee1a6DQGup+ENEdhJN2MOXFv/jskxJUsoILDHuvx17jHKFvMSR7ycfe+1umEqgfKCHGxlLXobZjj7t2PzAveNkTk+zeX8pqLH1q86LI01fH0n3jdSksqEXvxbiDLMspPTM3alGxNI4pztPvN3i+0qfCPD5SL9dhFsP4C8IVTBWAM4g7Qd6LyKhx+MVoEVecLL6jsM8z+zArVsZKFcZOKFpl0MTeWdpNR0b4u0ILO59y38D/dVoM45NRDpIi7HyoS7TsD0XpP+3zP8hGo4M35QU+a9YRmdCaUChLmqjfUprjnQrusAuQfP406rQ3JlgWs3YAwF0IPhvHv57pPWm3xGwKPFpO0Jguw5cQdZZvYk4tS9JvlCz5+Yyfm3+9T+k1KLfcZ+zlvOYKz+BXNiPfk1bF9ML7/KEIyJjGf32o5nEp0H1sH24wrSIroXa+woila4KBTffe8=
                                             compute-2.ctlplane.example.com,192.168.122.102,compute-2* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFQe/vdPzZywzEntIohbfJ9grfNBp30Atbg8qy8BeQ3c
                                             compute-2.ctlplane.example.com,192.168.122.102,compute-2* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPhaUxRkg9RrudtznCKCcwWhf1hoSfCyCfTHlGI62beVEpMD4en9bzfcuYnvB/Qm3vgzgUVMpS53KCL9bmqBfT8=
                                              create=True mode=0644 path=/tmp/ansible.ihbsxqqe state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:34:27 compute-1 sudo[112379]: pam_unix(sudo:session): session closed for user root
Nov 24 09:34:28 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:34:28 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:34:28 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:34:28.238 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:34:28 compute-1 sudo[112532]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-amuskjivzlooiuovekdkwdmauwwjrzzm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976868.1639037-197-81037167057033/AnsiballZ_command.py'
Nov 24 09:34:28 compute-1 sudo[112532]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:34:28 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:34:28 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:34:28 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:34:28.597 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:34:28 compute-1 python3.9[112534]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible.ihbsxqqe' > /etc/ssh/ssh_known_hosts _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 09:34:28 compute-1 sudo[112532]: pam_unix(sudo:session): session closed for user root
Nov 24 09:34:28 compute-1 ceph-mon[80009]: pgmap v217: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Nov 24 09:34:29 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:34:29 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea80004060 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:34:29 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:34:29 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea80004060 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:34:29 compute-1 sudo[112686]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vtzmhvebmmgzzbkrpozjzdsguwllznvz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976869.0431252-221-94513645681124/AnsiballZ_file.py'
Nov 24 09:34:29 compute-1 sudo[112686]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:34:29 compute-1 python3.9[112688]: ansible-ansible.builtin.file Invoked with path=/tmp/ansible.ihbsxqqe state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:34:29 compute-1 sudo[112686]: pam_unix(sudo:session): session closed for user root
Nov 24 09:34:29 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:34:29 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea68004140 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:34:30 compute-1 sshd-session[111338]: Connection closed by 192.168.122.30 port 60146
Nov 24 09:34:30 compute-1 sshd-session[111334]: pam_unix(sshd:session): session closed for user zuul
Nov 24 09:34:30 compute-1 systemd[1]: session-44.scope: Deactivated successfully.
Nov 24 09:34:30 compute-1 systemd[1]: session-44.scope: Consumed 4.865s CPU time.
Nov 24 09:34:30 compute-1 systemd-logind[823]: Session 44 logged out. Waiting for processes to exit.
Nov 24 09:34:30 compute-1 systemd-logind[823]: Removed session 44.
Nov 24 09:34:30 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:34:30 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 09:34:30 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:34:30.240 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 09:34:30 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 09:34:30 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:34:30 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:34:30 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:34:30 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:34:30.601 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:34:30 compute-1 ceph-mon[80009]: pgmap v218: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Nov 24 09:34:30 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:34:31 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:34:31 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea84004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:34:31 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:34:31 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea84004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:34:31 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:34:31 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea84004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:34:32 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:34:32 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:34:32 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:34:32.243 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:34:32 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:34:32 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:34:32 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:34:32.605 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:34:32 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:34:33 compute-1 ceph-mon[80009]: pgmap v219: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 85 B/s wr, 0 op/s
Nov 24 09:34:33 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:34:33 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea68004140 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:34:33 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:34:33 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea80004060 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:34:33 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:34:33 : epoch 69242647 : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 09:34:33 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:34:33 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7feaa40089d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:34:34 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:34:34 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:34:34 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:34:34.246 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:34:34 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:34:34 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:34:34 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:34:34.608 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:34:35 compute-1 ceph-mon[80009]: pgmap v220: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Nov 24 09:34:35 compute-1 sshd-session[112716]: Accepted publickey for zuul from 192.168.122.30 port 33006 ssh2: ECDSA SHA256:MeSde0OmmlmFVnLWx/OKNxgeUUFhxUB3MA0eUyH5QEE
Nov 24 09:34:35 compute-1 systemd-logind[823]: New session 45 of user zuul.
Nov 24 09:34:35 compute-1 systemd[1]: Started Session 45 of User zuul.
Nov 24 09:34:35 compute-1 sshd-session[112716]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 24 09:34:35 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:34:35 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea84004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:34:35 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:34:35 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea68004140 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:34:35 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:34:35 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea80004060 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:34:36 compute-1 python3.9[112869]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 24 09:34:36 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:34:36 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:34:36 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:34:36.250 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:34:36 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:34:36 : epoch 69242647 : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 09:34:36 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:34:36 : epoch 69242647 : compute-1 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 09:34:36 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:34:36 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:34:36 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:34:36.611 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:34:37 compute-1 ceph-mon[80009]: pgmap v221: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Nov 24 09:34:37 compute-1 sudo[113024]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-htgbnlvucvkxicvozrvrbxzgfqgztdrt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976876.6255662-57-68908032217495/AnsiballZ_systemd.py'
Nov 24 09:34:37 compute-1 sudo[113024]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:34:37 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:34:37 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea80004060 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:34:37 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:34:37 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea84004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:34:37 compute-1 python3.9[113026]: ansible-ansible.builtin.systemd Invoked with enabled=True name=sshd daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Nov 24 09:34:37 compute-1 sudo[113024]: pam_unix(sudo:session): session closed for user root
Nov 24 09:34:37 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:34:37 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea68004140 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:34:37 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:34:38 compute-1 sudo[113179]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rfsjwnagpsnsbuqeeximskxwhqqvtpet ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976877.7692842-81-102016383252081/AnsiballZ_systemd.py'
Nov 24 09:34:38 compute-1 sudo[113179]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:34:38 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:34:38 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:34:38 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:34:38.252 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:34:38 compute-1 python3.9[113181]: ansible-ansible.builtin.systemd Invoked with name=sshd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 24 09:34:38 compute-1 sudo[113179]: pam_unix(sudo:session): session closed for user root
Nov 24 09:34:38 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:34:38 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 09:34:38 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:34:38.614 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 09:34:39 compute-1 ceph-mon[80009]: pgmap v222: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Nov 24 09:34:39 compute-1 sudo[113332]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wqrpwshkxembpichstqourahizbepsqb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976878.7027655-108-35736042496563/AnsiballZ_command.py'
Nov 24 09:34:39 compute-1 sudo[113332]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:34:39 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:34:39 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea68004140 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:34:39 compute-1 python3.9[113334]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 09:34:39 compute-1 sudo[113332]: pam_unix(sudo:session): session closed for user root
Nov 24 09:34:39 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:34:39 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea68004140 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:34:39 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:34:39 : epoch 69242647 : compute-1 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Nov 24 09:34:39 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:34:39 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea84004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:34:39 compute-1 sudo[113486]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-smlyjzflvwqrdwhpkazyxtwdqfkcrgkz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976879.5612547-132-238533235452729/AnsiballZ_stat.py'
Nov 24 09:34:39 compute-1 sudo[113486]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:34:40 compute-1 python3.9[113488]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 09:34:40 compute-1 sudo[113486]: pam_unix(sudo:session): session closed for user root
Nov 24 09:34:40 compute-1 sudo[113489]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 09:34:40 compute-1 sudo[113489]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:34:40 compute-1 sudo[113489]: pam_unix(sudo:session): session closed for user root
Nov 24 09:34:40 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:34:40 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.002000047s ======
Nov 24 09:34:40 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:34:40.253 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000047s
Nov 24 09:34:40 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:34:40 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:34:40 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:34:40.617 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:34:40 compute-1 sudo[113663]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cuzopqxcnpjgmiqurjltxummxjrlomfp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976880.4131715-159-102759627229957/AnsiballZ_file.py'
Nov 24 09:34:40 compute-1 sudo[113663]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:34:41 compute-1 python3.9[113665]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:34:41 compute-1 sudo[113663]: pam_unix(sudo:session): session closed for user root
Nov 24 09:34:41 compute-1 ceph-mon[80009]: pgmap v223: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Nov 24 09:34:41 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:34:41 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea68004140 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:34:41 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:34:41 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea80004060 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:34:41 compute-1 sshd-session[112719]: Connection closed by 192.168.122.30 port 33006
Nov 24 09:34:41 compute-1 sshd-session[112716]: pam_unix(sshd:session): session closed for user zuul
Nov 24 09:34:41 compute-1 systemd[1]: session-45.scope: Deactivated successfully.
Nov 24 09:34:41 compute-1 systemd[1]: session-45.scope: Consumed 3.556s CPU time.
Nov 24 09:34:41 compute-1 systemd-logind[823]: Session 45 logged out. Waiting for processes to exit.
Nov 24 09:34:41 compute-1 systemd-logind[823]: Removed session 45.
Nov 24 09:34:41 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:34:41 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea80004060 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:34:42 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:34:42 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:34:42 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:34:42.257 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:34:42 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:34:42 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 09:34:42 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:34:42.621 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 09:34:42 compute-1 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 24 09:34:42 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:34:43 compute-1 ceph-mon[80009]: pgmap v224: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 1023 B/s wr, 3 op/s
Nov 24 09:34:43 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:34:43 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea80004060 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:34:43 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:34:43 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea70001d50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:34:43 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:34:43 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea98001080 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:34:44 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:34:44 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:34:44 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:34:44.260 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:34:44 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:34:44 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:34:44 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:34:44.625 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:34:44 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-haproxy-nfs-cephfs-compute-1-rsdpvy[85975]: [WARNING] 327/093444 (4) : Server backend/nfs.cephfs.2 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Nov 24 09:34:45 compute-1 sudo[113695]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 09:34:45 compute-1 sudo[113695]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:34:45 compute-1 sudo[113695]: pam_unix(sudo:session): session closed for user root
Nov 24 09:34:45 compute-1 sudo[113720]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Nov 24 09:34:45 compute-1 sudo[113720]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:34:45 compute-1 ceph-mon[80009]: pgmap v225: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 938 B/s wr, 2 op/s
Nov 24 09:34:45 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:34:45 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7feaa40089d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:34:45 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 09:34:45 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:34:45 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:34:45 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea80004060 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:34:45 compute-1 sudo[113720]: pam_unix(sudo:session): session closed for user root
Nov 24 09:34:45 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 24 09:34:45 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 09:34:45 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Nov 24 09:34:45 compute-1 ceph-mon[80009]: log_channel(audit) log [INF] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 09:34:45 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Nov 24 09:34:45 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Nov 24 09:34:45 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Nov 24 09:34:45 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 09:34:45 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Nov 24 09:34:45 compute-1 ceph-mon[80009]: log_channel(audit) log [INF] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 09:34:45 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 24 09:34:45 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 09:34:45 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:34:45 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea70001d50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:34:46 compute-1 ceph-mon[80009]: pgmap v226: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 938 B/s wr, 2 op/s
Nov 24 09:34:46 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:34:46 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 09:34:46 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 09:34:46 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:34:46 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:34:46 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 09:34:46 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 09:34:46 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 09:34:46 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:34:46 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 09:34:46 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:34:46.262 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 09:34:46 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:34:46 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:34:46 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:34:46.629 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:34:47 compute-1 sshd-session[113777]: Accepted publickey for zuul from 192.168.122.30 port 41202 ssh2: ECDSA SHA256:MeSde0OmmlmFVnLWx/OKNxgeUUFhxUB3MA0eUyH5QEE
Nov 24 09:34:47 compute-1 systemd-logind[823]: New session 46 of user zuul.
Nov 24 09:34:47 compute-1 systemd[1]: Started Session 46 of User zuul.
Nov 24 09:34:47 compute-1 sshd-session[113777]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 24 09:34:47 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:34:47 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea70001d50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:34:47 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:34:47 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7feaa40089d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:34:47 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:34:47 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea98002350 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:34:47 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:34:48 compute-1 python3.9[113930]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 24 09:34:48 compute-1 ceph-mon[80009]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #22. Immutable memtables: 0.
Nov 24 09:34:48 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:34:48.212263) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 24 09:34:48 compute-1 ceph-mon[80009]: rocksdb: [db/flush_job.cc:856] [default] [JOB 9] Flushing memtable with next log file: 22
Nov 24 09:34:48 compute-1 ceph-mon[80009]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763976888212315, "job": 9, "event": "flush_started", "num_memtables": 1, "num_entries": 2206, "num_deletes": 252, "total_data_size": 6015776, "memory_usage": 6075992, "flush_reason": "Manual Compaction"}
Nov 24 09:34:48 compute-1 ceph-mon[80009]: rocksdb: [db/flush_job.cc:885] [default] [JOB 9] Level-0 flush table #23: started
Nov 24 09:34:48 compute-1 ceph-mon[80009]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763976888230898, "cf_name": "default", "job": 9, "event": "table_file_creation", "file_number": 23, "file_size": 2326024, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 10754, "largest_seqno": 12955, "table_properties": {"data_size": 2319512, "index_size": 3391, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2053, "raw_key_size": 16211, "raw_average_key_size": 20, "raw_value_size": 2305312, "raw_average_value_size": 2870, "num_data_blocks": 151, "num_entries": 803, "num_filter_entries": 803, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763976683, "oldest_key_time": 1763976683, "file_creation_time": 1763976888, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "299c38d0-06ca-4074-b462-97cee3c14bc3", "db_session_id": "IKBI0BILOO7CZC90TSBP", "orig_file_number": 23, "seqno_to_time_mapping": "N/A"}}
Nov 24 09:34:48 compute-1 ceph-mon[80009]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 9] Flush lasted 18681 microseconds, and 5198 cpu microseconds.
Nov 24 09:34:48 compute-1 ceph-mon[80009]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 24 09:34:48 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:34:48.230954) [db/flush_job.cc:967] [default] [JOB 9] Level-0 flush table #23: 2326024 bytes OK
Nov 24 09:34:48 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:34:48.230975) [db/memtable_list.cc:519] [default] Level-0 commit table #23 started
Nov 24 09:34:48 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:34:48.233473) [db/memtable_list.cc:722] [default] Level-0 commit table #23: memtable #1 done
Nov 24 09:34:48 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:34:48.233502) EVENT_LOG_v1 {"time_micros": 1763976888233496, "job": 9, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 24 09:34:48 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:34:48.233522) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 24 09:34:48 compute-1 ceph-mon[80009]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 9] Try to delete WAL files size 6005986, prev total WAL file size 6005986, number of live WAL files 2.
Nov 24 09:34:48 compute-1 ceph-mon[80009]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000019.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 09:34:48 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:34:48.235184) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740030' seq:72057594037927935, type:22 .. '6D67727374617400323533' seq:0, type:0; will stop at (end)
Nov 24 09:34:48 compute-1 ceph-mon[80009]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 10] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 24 09:34:48 compute-1 ceph-mon[80009]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 9 Base level 0, inputs: [23(2271KB)], [21(13MB)]
Nov 24 09:34:48 compute-1 ceph-mon[80009]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763976888235251, "job": 10, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [23], "files_L6": [21], "score": -1, "input_data_size": 16606350, "oldest_snapshot_seqno": -1}
Nov 24 09:34:48 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:34:48 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:34:48 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:34:48.266 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:34:48 compute-1 ceph-mon[80009]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 10] Generated table #24: 4474 keys, 14776348 bytes, temperature: kUnknown
Nov 24 09:34:48 compute-1 ceph-mon[80009]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763976888361228, "cf_name": "default", "job": 10, "event": "table_file_creation", "file_number": 24, "file_size": 14776348, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 14742093, "index_size": 21985, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 11205, "raw_key_size": 112722, "raw_average_key_size": 25, "raw_value_size": 14656155, "raw_average_value_size": 3275, "num_data_blocks": 942, "num_entries": 4474, "num_filter_entries": 4474, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763976422, "oldest_key_time": 0, "file_creation_time": 1763976888, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "299c38d0-06ca-4074-b462-97cee3c14bc3", "db_session_id": "IKBI0BILOO7CZC90TSBP", "orig_file_number": 24, "seqno_to_time_mapping": "N/A"}}
Nov 24 09:34:48 compute-1 ceph-mon[80009]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 24 09:34:48 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:34:48.361509) [db/compaction/compaction_job.cc:1663] [default] [JOB 10] Compacted 1@0 + 1@6 files to L6 => 14776348 bytes
Nov 24 09:34:48 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:34:48.362637) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 131.7 rd, 117.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.2, 13.6 +0.0 blob) out(14.1 +0.0 blob), read-write-amplify(13.5) write-amplify(6.4) OK, records in: 4899, records dropped: 425 output_compression: NoCompression
Nov 24 09:34:48 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:34:48.362658) EVENT_LOG_v1 {"time_micros": 1763976888362649, "job": 10, "event": "compaction_finished", "compaction_time_micros": 126073, "compaction_time_cpu_micros": 28584, "output_level": 6, "num_output_files": 1, "total_output_size": 14776348, "num_input_records": 4899, "num_output_records": 4474, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 24 09:34:48 compute-1 ceph-mon[80009]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000023.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 09:34:48 compute-1 ceph-mon[80009]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763976888363455, "job": 10, "event": "table_file_deletion", "file_number": 23}
Nov 24 09:34:48 compute-1 ceph-mon[80009]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000021.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 09:34:48 compute-1 ceph-mon[80009]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763976888366066, "job": 10, "event": "table_file_deletion", "file_number": 21}
Nov 24 09:34:48 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:34:48.235084) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 09:34:48 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:34:48.366114) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 09:34:48 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:34:48.366119) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 09:34:48 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:34:48.366120) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 09:34:48 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:34:48.366121) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 09:34:48 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:34:48.366123) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 09:34:48 compute-1 ceph-mon[80009]: pgmap v227: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 938 B/s wr, 2 op/s
Nov 24 09:34:48 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:34:48 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:34:48 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:34:48.632 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:34:48 compute-1 sudo[114085]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fbourxcosgpjcdaikufibonryblmvnva ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976888.71823-63-210539308761598/AnsiballZ_setup.py'
Nov 24 09:34:48 compute-1 sudo[114085]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:34:49 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:34:49 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea80004060 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:34:49 compute-1 python3.9[114087]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 24 09:34:49 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:34:49 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea70001d50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:34:49 compute-1 sudo[114085]: pam_unix(sudo:session): session closed for user root
Nov 24 09:34:49 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:34:49 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7feaa40089d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:34:49 compute-1 sudo[114170]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xtreoqetvmkwjxvfglbsuqqdzuccucnm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976888.71823-63-210539308761598/AnsiballZ_dnf.py'
Nov 24 09:34:49 compute-1 sudo[114170]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:34:50 compute-1 python3.9[114172]: ansible-ansible.legacy.dnf Invoked with name=['yum-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Nov 24 09:34:50 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:34:50 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:34:50 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:34:50.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:34:50 compute-1 ceph-mon[80009]: pgmap v228: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Nov 24 09:34:50 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:34:50 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 09:34:50 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:34:50.635 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 09:34:50 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 24 09:34:50 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 24 09:34:50 compute-1 sudo[114174]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 24 09:34:50 compute-1 sudo[114174]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:34:50 compute-1 sudo[114174]: pam_unix(sudo:session): session closed for user root
Nov 24 09:34:51 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:34:51 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea98002350 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:34:51 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:34:51 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea80004060 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:34:51 compute-1 sudo[114170]: pam_unix(sudo:session): session closed for user root
Nov 24 09:34:51 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:34:51 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:34:51 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:34:51 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea70002ed0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:34:52 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:34:52 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:34:52 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:34:52.270 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:34:52 compute-1 python3.9[114349]: ansible-ansible.legacy.command Invoked with _raw_params=needs-restarting -r _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 09:34:52 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:34:52 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:34:52 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:34:52.639 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:34:52 compute-1 ceph-mon[80009]: pgmap v229: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 426 B/s wr, 1 op/s
Nov 24 09:34:52 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:34:53 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:34:53 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7feaa40089d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:34:53 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:34:53 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea98002350 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:34:53 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:34:53 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea80004060 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:34:53 compute-1 python3.9[114500]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/reboot_required/'] patterns=[] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Nov 24 09:34:54 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:34:54 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:34:54 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:34:54.274 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:34:54 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:34:54 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:34:54 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:34:54.642 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:34:54 compute-1 python3.9[114651]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 09:34:55 compute-1 ceph-mon[80009]: pgmap v230: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Nov 24 09:34:55 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:34:55 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea70002ed0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:34:55 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:34:55 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7feaa40089d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:34:55 compute-1 python3.9[114801]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/config follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 09:34:55 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:34:55 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea98002350 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:34:56 compute-1 sshd-session[113780]: Connection closed by 192.168.122.30 port 41202
Nov 24 09:34:56 compute-1 sshd-session[113777]: pam_unix(sshd:session): session closed for user zuul
Nov 24 09:34:56 compute-1 systemd[1]: session-46.scope: Deactivated successfully.
Nov 24 09:34:56 compute-1 systemd[1]: session-46.scope: Consumed 5.690s CPU time.
Nov 24 09:34:56 compute-1 systemd-logind[823]: Session 46 logged out. Waiting for processes to exit.
Nov 24 09:34:56 compute-1 systemd-logind[823]: Removed session 46.
Nov 24 09:34:56 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:34:56 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:34:56 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:34:56.280 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:34:56 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:34:56 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:34:56 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:34:56.645 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:34:57 compute-1 ceph-mon[80009]: pgmap v231: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Nov 24 09:34:57 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:34:57 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea80004060 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:34:57 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:34:57 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea70002ed0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:34:57 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:34:57 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:34:57 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea70002ed0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:34:58 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:34:58 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:34:58 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:34:58.283 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:34:58 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:34:58 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:34:58 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:34:58.650 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:34:59 compute-1 ceph-mon[80009]: pgmap v232: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Nov 24 09:34:59 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:34:59 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea98003840 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:34:59 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:34:59 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea80004060 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:34:59 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:34:59 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea70002ed0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:35:00 compute-1 ceph-mon[80009]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #25. Immutable memtables: 0.
Nov 24 09:35:00 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:35:00.210086) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 24 09:35:00 compute-1 ceph-mon[80009]: rocksdb: [db/flush_job.cc:856] [default] [JOB 11] Flushing memtable with next log file: 25
Nov 24 09:35:00 compute-1 ceph-mon[80009]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763976900210144, "job": 11, "event": "flush_started", "num_memtables": 1, "num_entries": 379, "num_deletes": 251, "total_data_size": 444978, "memory_usage": 452760, "flush_reason": "Manual Compaction"}
Nov 24 09:35:00 compute-1 ceph-mon[80009]: rocksdb: [db/flush_job.cc:885] [default] [JOB 11] Level-0 flush table #26: started
Nov 24 09:35:00 compute-1 ceph-mon[80009]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763976900234050, "cf_name": "default", "job": 11, "event": "table_file_creation", "file_number": 26, "file_size": 294369, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 12960, "largest_seqno": 13334, "table_properties": {"data_size": 292118, "index_size": 415, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 773, "raw_key_size": 5388, "raw_average_key_size": 17, "raw_value_size": 287616, "raw_average_value_size": 955, "num_data_blocks": 18, "num_entries": 301, "num_filter_entries": 301, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763976888, "oldest_key_time": 1763976888, "file_creation_time": 1763976900, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "299c38d0-06ca-4074-b462-97cee3c14bc3", "db_session_id": "IKBI0BILOO7CZC90TSBP", "orig_file_number": 26, "seqno_to_time_mapping": "N/A"}}
Nov 24 09:35:00 compute-1 ceph-mon[80009]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 11] Flush lasted 24030 microseconds, and 1895 cpu microseconds.
Nov 24 09:35:00 compute-1 ceph-mon[80009]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 24 09:35:00 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:35:00.234119) [db/flush_job.cc:967] [default] [JOB 11] Level-0 flush table #26: 294369 bytes OK
Nov 24 09:35:00 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:35:00.234141) [db/memtable_list.cc:519] [default] Level-0 commit table #26 started
Nov 24 09:35:00 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:35:00.235860) [db/memtable_list.cc:722] [default] Level-0 commit table #26: memtable #1 done
Nov 24 09:35:00 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:35:00.235878) EVENT_LOG_v1 {"time_micros": 1763976900235873, "job": 11, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 24 09:35:00 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:35:00.235898) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 24 09:35:00 compute-1 ceph-mon[80009]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 11] Try to delete WAL files size 442470, prev total WAL file size 442470, number of live WAL files 2.
Nov 24 09:35:00 compute-1 ceph-mon[80009]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000022.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 09:35:00 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:35:00.236391) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F7300353032' seq:72057594037927935, type:22 .. '7061786F7300373534' seq:0, type:0; will stop at (end)
Nov 24 09:35:00 compute-1 ceph-mon[80009]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 12] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 24 09:35:00 compute-1 ceph-mon[80009]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 11 Base level 0, inputs: [26(287KB)], [24(14MB)]
Nov 24 09:35:00 compute-1 ceph-mon[80009]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763976900236567, "job": 12, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [26], "files_L6": [24], "score": -1, "input_data_size": 15070717, "oldest_snapshot_seqno": -1}
Nov 24 09:35:00 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:35:00 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:35:00 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:35:00.287 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:35:00 compute-1 sudo[114830]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 09:35:00 compute-1 sudo[114830]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:35:00 compute-1 sudo[114830]: pam_unix(sudo:session): session closed for user root
Nov 24 09:35:00 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 09:35:00 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:35:00 compute-1 ceph-mon[80009]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 12] Generated table #27: 4262 keys, 12940962 bytes, temperature: kUnknown
Nov 24 09:35:00 compute-1 ceph-mon[80009]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763976900396578, "cf_name": "default", "job": 12, "event": "table_file_creation", "file_number": 27, "file_size": 12940962, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12909894, "index_size": 19310, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10693, "raw_key_size": 109225, "raw_average_key_size": 25, "raw_value_size": 12829393, "raw_average_value_size": 3010, "num_data_blocks": 815, "num_entries": 4262, "num_filter_entries": 4262, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763976422, "oldest_key_time": 0, "file_creation_time": 1763976900, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "299c38d0-06ca-4074-b462-97cee3c14bc3", "db_session_id": "IKBI0BILOO7CZC90TSBP", "orig_file_number": 27, "seqno_to_time_mapping": "N/A"}}
Nov 24 09:35:00 compute-1 ceph-mon[80009]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 24 09:35:00 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:35:00.396817) [db/compaction/compaction_job.cc:1663] [default] [JOB 12] Compacted 1@0 + 1@6 files to L6 => 12940962 bytes
Nov 24 09:35:00 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:35:00.398553) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 94.2 rd, 80.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.3, 14.1 +0.0 blob) out(12.3 +0.0 blob), read-write-amplify(95.2) write-amplify(44.0) OK, records in: 4775, records dropped: 513 output_compression: NoCompression
Nov 24 09:35:00 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:35:00.398572) EVENT_LOG_v1 {"time_micros": 1763976900398563, "job": 12, "event": "compaction_finished", "compaction_time_micros": 160046, "compaction_time_cpu_micros": 31457, "output_level": 6, "num_output_files": 1, "total_output_size": 12940962, "num_input_records": 4775, "num_output_records": 4262, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 24 09:35:00 compute-1 ceph-mon[80009]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000026.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 09:35:00 compute-1 ceph-mon[80009]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763976900398746, "job": 12, "event": "table_file_deletion", "file_number": 26}
Nov 24 09:35:00 compute-1 ceph-mon[80009]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000024.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 09:35:00 compute-1 ceph-mon[80009]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763976900401031, "job": 12, "event": "table_file_deletion", "file_number": 24}
Nov 24 09:35:00 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:35:00.236310) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 09:35:00 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:35:00.401142) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 09:35:00 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:35:00.401148) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 09:35:00 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:35:00.401150) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 09:35:00 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:35:00.401151) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 09:35:00 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:35:00.401153) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 09:35:00 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:35:00 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:35:00 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:35:00.653 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:35:01 compute-1 ceph-mon[80009]: pgmap v233: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:35:01 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:35:01 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:35:01 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7feaa40089d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:35:01 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:35:01 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea98003840 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:35:01 compute-1 sshd-session[114855]: Accepted publickey for zuul from 192.168.122.30 port 55520 ssh2: ECDSA SHA256:MeSde0OmmlmFVnLWx/OKNxgeUUFhxUB3MA0eUyH5QEE
Nov 24 09:35:01 compute-1 systemd-logind[823]: New session 47 of user zuul.
Nov 24 09:35:01 compute-1 systemd[1]: Started Session 47 of User zuul.
Nov 24 09:35:01 compute-1 sshd-session[114855]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 24 09:35:01 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:35:01 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea80004060 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:35:02 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:35:02 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:35:02 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:35:02.289 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:35:02 compute-1 python3.9[115009]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 24 09:35:02 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:35:02 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:35:02 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:35:02.656 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:35:02 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:35:03 compute-1 ceph-mon[80009]: pgmap v234: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Nov 24 09:35:03 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:35:03 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea70003fd0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:35:03 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:35:03 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7feaa40089d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:35:03 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:35:03 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea98003840 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:35:04 compute-1 sudo[115164]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ylhnhkrlddmbmdzopszmufhmrxazviwb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976903.7611165-115-228534478476340/AnsiballZ_file.py'
Nov 24 09:35:04 compute-1 sudo[115164]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:35:04 compute-1 ceph-mon[80009]: pgmap v235: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:35:04 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:35:04 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:35:04 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:35:04.293 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:35:04 compute-1 python3.9[115166]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 09:35:04 compute-1 sudo[115164]: pam_unix(sudo:session): session closed for user root
Nov 24 09:35:04 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:35:04 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:35:04 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:35:04.660 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:35:04 compute-1 sudo[115316]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rxgetehfookwftdbxnoywinicarvfqop ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976904.471075-115-151052013647803/AnsiballZ_file.py'
Nov 24 09:35:04 compute-1 sudo[115316]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:35:04 compute-1 python3.9[115318]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 09:35:04 compute-1 sudo[115316]: pam_unix(sudo:session): session closed for user root
Nov 24 09:35:05 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:35:05 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea80004060 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:35:05 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:35:05 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea70003fd0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:35:05 compute-1 sudo[115468]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sdwxtqudhkoflerekwnnwhbasosrjgns ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976905.1023533-154-117669381383635/AnsiballZ_stat.py'
Nov 24 09:35:05 compute-1 sudo[115468]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:35:05 compute-1 python3.9[115470]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 09:35:05 compute-1 sudo[115468]: pam_unix(sudo:session): session closed for user root
Nov 24 09:35:05 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:35:05 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7feaa40089d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:35:06 compute-1 sudo[115592]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uezbwyixuyyhmpjkmoavxekxobibnkop ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976905.1023533-154-117669381383635/AnsiballZ_copy.py'
Nov 24 09:35:06 compute-1 sudo[115592]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:35:06 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:35:06 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:35:06 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:35:06.296 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:35:06 compute-1 python3.9[115594]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763976905.1023533-154-117669381383635/.source.crt _original_basename=compute-1.ctlplane.example.com-tls.crt follow=False checksum=1aed28cbd157b82f7069a716a80af3c0e21ff713 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:35:06 compute-1 sudo[115592]: pam_unix(sudo:session): session closed for user root
Nov 24 09:35:06 compute-1 ceph-mon[80009]: pgmap v236: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:35:06 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:35:06 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:35:06 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:35:06.663 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:35:06 compute-1 sudo[115744]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pxciarwtuulluzpztqohrfdposhbbhqy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976906.622611-154-42050091528181/AnsiballZ_stat.py'
Nov 24 09:35:06 compute-1 sudo[115744]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:35:07 compute-1 python3.9[115746]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 09:35:07 compute-1 sudo[115744]: pam_unix(sudo:session): session closed for user root
Nov 24 09:35:07 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:35:07 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea980048e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:35:07 compute-1 sudo[115867]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rorixpoiutwegylcleulmafiphksznsd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976906.622611-154-42050091528181/AnsiballZ_copy.py'
Nov 24 09:35:07 compute-1 sudo[115867]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:35:07 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:35:07 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea80004060 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:35:07 compute-1 python3.9[115869]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763976906.622611-154-42050091528181/.source.crt _original_basename=compute-1.ctlplane.example.com-ca.crt follow=False checksum=3228e523f8b01d6a11882d8cc1d2d959030dab43 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:35:07 compute-1 sudo[115867]: pam_unix(sudo:session): session closed for user root
Nov 24 09:35:07 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:35:07 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:35:07 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea70003fd0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:35:08 compute-1 sudo[116020]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ftttaykhyoanudssvuxocerpqzwsocdm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976907.7451613-154-264373962335238/AnsiballZ_stat.py'
Nov 24 09:35:08 compute-1 sudo[116020]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:35:08 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-haproxy-nfs-cephfs-compute-1-rsdpvy[85975]: [WARNING] 327/093508 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Nov 24 09:35:08 compute-1 python3.9[116022]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 09:35:08 compute-1 sudo[116020]: pam_unix(sudo:session): session closed for user root
Nov 24 09:35:08 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:35:08 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:35:08 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:35:08.299 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:35:08 compute-1 ceph-mon[80009]: pgmap v237: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Nov 24 09:35:08 compute-1 sudo[116143]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qpfxdopdftpytsxxchkzvbmsmtkrndjc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976907.7451613-154-264373962335238/AnsiballZ_copy.py'
Nov 24 09:35:08 compute-1 sudo[116143]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:35:08 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:35:08 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:35:08 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:35:08.666 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:35:08 compute-1 python3.9[116145]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763976907.7451613-154-264373962335238/.source.key _original_basename=compute-1.ctlplane.example.com-tls.key follow=False checksum=d8cbf8f331cdf03d2c25f53533d79c8d0bfed30c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:35:08 compute-1 sudo[116143]: pam_unix(sudo:session): session closed for user root
Nov 24 09:35:09 compute-1 sudo[116295]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ictprljxxoobulzmwubasxuhxemktghe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976908.9839988-282-102389913107973/AnsiballZ_file.py'
Nov 24 09:35:09 compute-1 sudo[116295]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:35:09 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:35:09 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7feaa400add0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:35:09 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:35:09 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea980048e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:35:09 compute-1 python3.9[116297]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/neutron-metadata/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 09:35:09 compute-1 sudo[116295]: pam_unix(sudo:session): session closed for user root
Nov 24 09:35:09 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:35:09 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea80004060 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:35:09 compute-1 sudo[116448]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zoghjthnhmdbcwppvylcdaybtgmwinss ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976909.6203673-282-191906296473103/AnsiballZ_file.py'
Nov 24 09:35:09 compute-1 sudo[116448]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:35:10 compute-1 python3.9[116450]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/neutron-metadata/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 09:35:10 compute-1 sudo[116448]: pam_unix(sudo:session): session closed for user root
Nov 24 09:35:10 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:35:10 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:35:10 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:35:10.302 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:35:10 compute-1 ceph-mon[80009]: pgmap v238: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:35:10 compute-1 sudo[116600]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ekczrfkpdtuhnuzhdvovzvpjxymnguyz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976910.290428-325-48718569917240/AnsiballZ_stat.py'
Nov 24 09:35:10 compute-1 sudo[116600]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:35:10 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:35:10 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 09:35:10 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:35:10.669 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 09:35:10 compute-1 python3.9[116602]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 09:35:10 compute-1 sudo[116600]: pam_unix(sudo:session): session closed for user root
Nov 24 09:35:11 compute-1 sudo[116723]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-whlwziztrvapheaaaxxtgjvoyiozqzvm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976910.290428-325-48718569917240/AnsiballZ_copy.py'
Nov 24 09:35:11 compute-1 sudo[116723]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:35:11 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:35:11 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea80004060 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:35:11 compute-1 python3.9[116725]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763976910.290428-325-48718569917240/.source.crt _original_basename=compute-1.ctlplane.example.com-tls.crt follow=False checksum=51a6c591c203944590268a477cdb8f6d7c46652a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:35:11 compute-1 sudo[116723]: pam_unix(sudo:session): session closed for user root
Nov 24 09:35:11 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:35:11 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7feaa400add0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:35:11 compute-1 sudo[116875]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wprzvkrtgbcoifufbjsxdfbrqnpkduhd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976911.4913163-325-62791605727008/AnsiballZ_stat.py'
Nov 24 09:35:11 compute-1 sudo[116875]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:35:11 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:35:11 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea980048e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:35:11 compute-1 python3.9[116877]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 09:35:11 compute-1 sudo[116875]: pam_unix(sudo:session): session closed for user root
Nov 24 09:35:12 compute-1 sudo[116999]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hekpedmrkbyamkiieautqxwdbbjemrzp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976911.4913163-325-62791605727008/AnsiballZ_copy.py'
Nov 24 09:35:12 compute-1 sudo[116999]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:35:12 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:35:12 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:35:12 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:35:12.305 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:35:12 compute-1 python3.9[117001]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763976911.4913163-325-62791605727008/.source.crt _original_basename=compute-1.ctlplane.example.com-ca.crt follow=False checksum=a9a797b79c320330a0fbef3d6d785446f2b400de backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:35:12 compute-1 sudo[116999]: pam_unix(sudo:session): session closed for user root
Nov 24 09:35:12 compute-1 ceph-mon[80009]: pgmap v239: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Nov 24 09:35:12 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:35:12 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:35:12 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:35:12.672 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:35:12 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:35:12 compute-1 sudo[117151]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ctjoatqhdrdhxadweawfwpltirmnvuhs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976912.6152792-325-3391753012308/AnsiballZ_stat.py'
Nov 24 09:35:12 compute-1 sudo[117151]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:35:13 compute-1 python3.9[117153]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 09:35:13 compute-1 sudo[117151]: pam_unix(sudo:session): session closed for user root
Nov 24 09:35:13 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:35:13 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea70003fd0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:35:13 compute-1 sudo[117275]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pxifbgfnxibkiplenbazhfjytougrevk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976912.6152792-325-3391753012308/AnsiballZ_copy.py'
Nov 24 09:35:13 compute-1 sudo[117275]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:35:13 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:35:13 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea80004060 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:35:13 compute-1 python3.9[117277]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763976912.6152792-325-3391753012308/.source.key _original_basename=compute-1.ctlplane.example.com-tls.key follow=False checksum=293490359390b00694df182b7f282079077f474f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:35:13 compute-1 sudo[117275]: pam_unix(sudo:session): session closed for user root
Nov 24 09:35:13 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:35:13 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea80004060 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:35:13 compute-1 sudo[117429]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xxojzfshnsjprpgoqzzgcpxxaogcujqz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976913.7617862-449-183448253357019/AnsiballZ_file.py'
Nov 24 09:35:13 compute-1 sudo[117429]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:35:14 compute-1 python3.9[117431]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 09:35:14 compute-1 sudo[117429]: pam_unix(sudo:session): session closed for user root
Nov 24 09:35:14 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:35:14 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:35:14 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:35:14.308 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:35:14 compute-1 ceph-mon[80009]: pgmap v240: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Nov 24 09:35:14 compute-1 sudo[117581]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-urtnpzzkugdhtdeatjwknbnnygdtakmf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976914.3383837-449-73012407054851/AnsiballZ_file.py'
Nov 24 09:35:14 compute-1 sudo[117581]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:35:14 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:35:14 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:35:14 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:35:14.676 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:35:14 compute-1 python3.9[117583]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 09:35:14 compute-1 sudo[117581]: pam_unix(sudo:session): session closed for user root
Nov 24 09:35:15 compute-1 sudo[117733]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vtexfumfydoqwjxyrurregcdcgjlnrdi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976914.9394274-494-56946476035865/AnsiballZ_stat.py'
Nov 24 09:35:15 compute-1 sudo[117733]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:35:15 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:35:15 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea68003040 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:35:15 compute-1 python3.9[117735]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 09:35:15 compute-1 sudo[117733]: pam_unix(sudo:session): session closed for user root
Nov 24 09:35:15 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 09:35:15 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:35:15 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:35:15 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea840023f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:35:15 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:35:15 compute-1 sudo[117856]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sizjpvrxjrjcxkwhuruptdwlorofvctq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976914.9394274-494-56946476035865/AnsiballZ_copy.py'
Nov 24 09:35:15 compute-1 sudo[117856]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:35:15 compute-1 python3.9[117858]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763976914.9394274-494-56946476035865/.source.crt _original_basename=compute-1.ctlplane.example.com-tls.crt follow=False checksum=bc8679c076a79311d7c86b9b1a6f9b2a996ee747 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:35:15 compute-1 sudo[117856]: pam_unix(sudo:session): session closed for user root
Nov 24 09:35:15 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:35:15 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea980048e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:35:16 compute-1 sudo[118009]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wovxrgwmthnbpesrwljqxtujwmwqbnaa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976915.983199-494-144141505727691/AnsiballZ_stat.py'
Nov 24 09:35:16 compute-1 sudo[118009]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:35:16 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:35:16 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:35:16 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:35:16.311 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:35:16 compute-1 python3.9[118011]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 09:35:16 compute-1 sudo[118009]: pam_unix(sudo:session): session closed for user root
Nov 24 09:35:16 compute-1 ceph-mon[80009]: pgmap v241: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Nov 24 09:35:16 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:35:16 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:35:16 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:35:16.679 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:35:16 compute-1 sudo[118132]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-utoymzxtiaidxjitekswinhvmoikaruh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976915.983199-494-144141505727691/AnsiballZ_copy.py'
Nov 24 09:35:16 compute-1 sudo[118132]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:35:16 compute-1 python3.9[118134]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763976915.983199-494-144141505727691/.source.crt _original_basename=compute-1.ctlplane.example.com-ca.crt follow=False checksum=a9a797b79c320330a0fbef3d6d785446f2b400de backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:35:16 compute-1 sudo[118132]: pam_unix(sudo:session): session closed for user root
Nov 24 09:35:17 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:35:17 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea80004060 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:35:17 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:35:17 : epoch 69242647 : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 09:35:17 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:35:17 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea80004060 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:35:17 compute-1 sudo[118284]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yjxxfglucpgjymtxalzhpfuqdecvybwc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976917.1200743-494-130064663828823/AnsiballZ_stat.py'
Nov 24 09:35:17 compute-1 sudo[118284]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:35:17 compute-1 python3.9[118286]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 09:35:17 compute-1 sudo[118284]: pam_unix(sudo:session): session closed for user root
Nov 24 09:35:17 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:35:17 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:35:17 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea840023f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:35:17 compute-1 sudo[118408]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-exqysxmtwkyjyyhomedfhuiaornqmgbu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976917.1200743-494-130064663828823/AnsiballZ_copy.py'
Nov 24 09:35:17 compute-1 sudo[118408]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:35:18 compute-1 python3.9[118410]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763976917.1200743-494-130064663828823/.source.key _original_basename=compute-1.ctlplane.example.com-tls.key follow=False checksum=14c307e0f7068641dd695e1233929e25344f95a3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:35:18 compute-1 sudo[118408]: pam_unix(sudo:session): session closed for user root
Nov 24 09:35:18 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:35:18 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:35:18 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:35:18.313 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:35:18 compute-1 ceph-mon[80009]: pgmap v242: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 85 B/s wr, 0 op/s
Nov 24 09:35:18 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:35:18 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:35:18 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:35:18.683 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:35:19 compute-1 sudo[118560]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ntzojmohkhnofwbbhempjzetunsgzmkx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976918.9753926-644-39751870873185/AnsiballZ_file.py'
Nov 24 09:35:19 compute-1 sudo[118560]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:35:19 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:35:19 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea840023f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:35:19 compute-1 python3.9[118562]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 09:35:19 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:35:19 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea68003040 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:35:19 compute-1 sudo[118560]: pam_unix(sudo:session): session closed for user root
Nov 24 09:35:19 compute-1 sudo[118713]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qffpwnrqrxffuvcgmjyzndwmtqwpkoqm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976919.5814614-676-96888622235117/AnsiballZ_stat.py'
Nov 24 09:35:19 compute-1 sudo[118713]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:35:19 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:35:19 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea80004060 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:35:20 compute-1 python3.9[118715]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 09:35:20 compute-1 sudo[118713]: pam_unix(sudo:session): session closed for user root
Nov 24 09:35:20 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:35:20 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 09:35:20 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:35:20.316 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 09:35:20 compute-1 sudo[118810]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 09:35:20 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:35:20 : epoch 69242647 : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 09:35:20 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:35:20 : epoch 69242647 : compute-1 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 09:35:20 compute-1 sudo[118810]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:35:20 compute-1 sudo[118810]: pam_unix(sudo:session): session closed for user root
Nov 24 09:35:20 compute-1 sudo[118860]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mncrwthbgjpluvobmgoznwgminvonhrq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976919.5814614-676-96888622235117/AnsiballZ_copy.py'
Nov 24 09:35:20 compute-1 sudo[118860]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:35:20 compute-1 python3.9[118863]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763976919.5814614-676-96888622235117/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=544ccad07cd49583316075cf420b5b550bb4de77 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:35:20 compute-1 ceph-mon[80009]: pgmap v243: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Nov 24 09:35:20 compute-1 sudo[118860]: pam_unix(sudo:session): session closed for user root
Nov 24 09:35:20 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:35:20 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:35:20 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:35:20.686 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:35:21 compute-1 sudo[119013]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-renvhcvvcylstarqgoxrnlziaiiyogsp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976920.8296788-730-22562368533110/AnsiballZ_file.py'
Nov 24 09:35:21 compute-1 sudo[119013]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:35:21 compute-1 python3.9[119015]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 09:35:21 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:35:21 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea840023f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:35:21 compute-1 sudo[119013]: pam_unix(sudo:session): session closed for user root
Nov 24 09:35:21 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:35:21 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea980048e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:35:21 compute-1 sudo[119165]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cyzqtpeadlxyndsicjbvmhayvcbkdhnu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976921.445923-752-38312562008629/AnsiballZ_stat.py'
Nov 24 09:35:21 compute-1 sudo[119165]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:35:21 compute-1 python3.9[119167]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 09:35:21 compute-1 sudo[119165]: pam_unix(sudo:session): session closed for user root
Nov 24 09:35:21 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:35:21 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea68003040 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:35:22 compute-1 sudo[119289]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uivbdcmpiwtcqjnnwtthzqaythmldlxx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976921.445923-752-38312562008629/AnsiballZ_copy.py'
Nov 24 09:35:22 compute-1 sudo[119289]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:35:22 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:35:22 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 09:35:22 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:35:22.319 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 09:35:22 compute-1 python3.9[119291]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763976921.445923-752-38312562008629/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=544ccad07cd49583316075cf420b5b550bb4de77 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:35:22 compute-1 sudo[119289]: pam_unix(sudo:session): session closed for user root
Nov 24 09:35:22 compute-1 ceph-mon[80009]: pgmap v244: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Nov 24 09:35:22 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:35:22 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:35:22 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:35:22.688 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:35:22 compute-1 sudo[119441]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-onsvtnzydaviuvnveksbnxrbjirqnjcd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976922.5500848-798-264743010017913/AnsiballZ_file.py'
Nov 24 09:35:22 compute-1 sudo[119441]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:35:22 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:35:22 compute-1 python3.9[119443]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/neutron-metadata setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 09:35:22 compute-1 sudo[119441]: pam_unix(sudo:session): session closed for user root
Nov 24 09:35:23 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:35:23 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea80004060 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:35:23 compute-1 sudo[119593]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ysgjhwvdoznikelqpblpqrpfzdsigulj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976923.1143703-818-250430191272835/AnsiballZ_stat.py'
Nov 24 09:35:23 compute-1 sudo[119593]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:35:23 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:35:23 : epoch 69242647 : compute-1 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Nov 24 09:35:23 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:35:23 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea840023f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:35:23 compute-1 python3.9[119595]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 09:35:23 compute-1 sudo[119593]: pam_unix(sudo:session): session closed for user root
Nov 24 09:35:23 compute-1 sudo[119717]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lolceyujtqzeddyotildxupesrnyaujs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976923.1143703-818-250430191272835/AnsiballZ_copy.py'
Nov 24 09:35:23 compute-1 sudo[119717]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:35:23 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:35:23 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea980048e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:35:24 compute-1 python3.9[119719]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763976923.1143703-818-250430191272835/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=544ccad07cd49583316075cf420b5b550bb4de77 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:35:24 compute-1 sudo[119717]: pam_unix(sudo:session): session closed for user root
Nov 24 09:35:24 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:35:24 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:35:24 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:35:24.322 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:35:24 compute-1 sudo[119869]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lxgxgcfosmwgiuiiedrtckjcwcwpxylb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976924.2722068-862-25314097891493/AnsiballZ_file.py'
Nov 24 09:35:24 compute-1 sudo[119869]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:35:24 compute-1 ceph-mon[80009]: pgmap v245: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 938 B/s wr, 3 op/s
Nov 24 09:35:24 compute-1 python3.9[119871]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/bootstrap setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 09:35:24 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:35:24 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:35:24 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:35:24.691 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:35:24 compute-1 sudo[119869]: pam_unix(sudo:session): session closed for user root
Nov 24 09:35:25 compute-1 sudo[120021]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dmxfxgqnhthqeegipzeibcichcjqgpwu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976924.847026-884-129749894765204/AnsiballZ_stat.py'
Nov 24 09:35:25 compute-1 sudo[120021]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:35:25 compute-1 python3.9[120023]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 09:35:25 compute-1 sudo[120021]: pam_unix(sudo:session): session closed for user root
Nov 24 09:35:25 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:35:25 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea68001fc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:35:25 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:35:25 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea80004060 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:35:25 compute-1 sudo[120144]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kjmltxisblxkyvwnzchdvkyspmqwxnrw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976924.847026-884-129749894765204/AnsiballZ_copy.py'
Nov 24 09:35:25 compute-1 sudo[120144]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:35:25 compute-1 python3.9[120146]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763976924.847026-884-129749894765204/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=544ccad07cd49583316075cf420b5b550bb4de77 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:35:25 compute-1 sudo[120144]: pam_unix(sudo:session): session closed for user root
Nov 24 09:35:25 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:35:25 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea840023f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:35:26 compute-1 sudo[120297]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gwtsglfheewuzkjwxnvpuhmehtsjzmid ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976925.992685-926-230580872279618/AnsiballZ_file.py'
Nov 24 09:35:26 compute-1 sudo[120297]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:35:26 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:35:26 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 09:35:26 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:35:26.325 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 09:35:26 compute-1 python3.9[120299]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/repo-setup setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 09:35:26 compute-1 sudo[120297]: pam_unix(sudo:session): session closed for user root
Nov 24 09:35:26 compute-1 ceph-mon[80009]: pgmap v246: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 938 B/s wr, 3 op/s
Nov 24 09:35:26 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:35:26 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:35:26 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:35:26.694 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:35:26 compute-1 sudo[120449]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fkwxaymkibhlaoxwcpvevuliebwmhbhq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976926.6229327-949-69149988085739/AnsiballZ_stat.py'
Nov 24 09:35:26 compute-1 sudo[120449]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:35:27 compute-1 python3.9[120451]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 09:35:27 compute-1 sudo[120449]: pam_unix(sudo:session): session closed for user root
Nov 24 09:35:27 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:35:27 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea980048e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:35:27 compute-1 sudo[120572]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wjxozpsznvorsrmbxehhrxwurodaelui ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976926.6229327-949-69149988085739/AnsiballZ_copy.py'
Nov 24 09:35:27 compute-1 sudo[120572]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:35:27 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:35:27 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea68001fc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:35:27 compute-1 python3.9[120574]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763976926.6229327-949-69149988085739/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=544ccad07cd49583316075cf420b5b550bb4de77 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:35:27 compute-1 sudo[120572]: pam_unix(sudo:session): session closed for user root
Nov 24 09:35:27 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:35:27 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:35:27 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea80004060 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:35:28 compute-1 sudo[120726]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ubnwxpvxkfsetxhkxpgqgrfhfbeudkxy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976927.7799718-992-151993254468789/AnsiballZ_file.py'
Nov 24 09:35:28 compute-1 sudo[120726]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:35:28 compute-1 python3.9[120728]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 09:35:28 compute-1 sudo[120726]: pam_unix(sudo:session): session closed for user root
Nov 24 09:35:28 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:35:28 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:35:28 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:35:28.328 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:35:28 compute-1 sudo[120878]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dadaoulcoszxtiagcctccylgmocauzrb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976928.386976-1017-34809651986324/AnsiballZ_stat.py'
Nov 24 09:35:28 compute-1 sudo[120878]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:35:28 compute-1 ceph-mon[80009]: pgmap v247: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 1023 B/s wr, 3 op/s
Nov 24 09:35:28 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:35:28 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:35:28 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:35:28.697 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:35:28 compute-1 python3.9[120880]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 09:35:28 compute-1 sudo[120878]: pam_unix(sudo:session): session closed for user root
Nov 24 09:35:29 compute-1 sudo[121001]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-khwzxdbnzfmuqnqaoqqvpeffqgomaaou ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976928.386976-1017-34809651986324/AnsiballZ_copy.py'
Nov 24 09:35:29 compute-1 sudo[121001]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:35:29 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:35:29 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea84003ed0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:35:29 compute-1 python3.9[121003]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763976928.386976-1017-34809651986324/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=544ccad07cd49583316075cf420b5b550bb4de77 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:35:29 compute-1 sudo[121001]: pam_unix(sudo:session): session closed for user root
Nov 24 09:35:29 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:35:29 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea980048e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:35:29 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:35:29 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea68001fc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:35:30 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-haproxy-nfs-cephfs-compute-1-rsdpvy[85975]: [WARNING] 327/093530 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Nov 24 09:35:30 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:35:30 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:35:30 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:35:30.331 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:35:30 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 09:35:30 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:35:30 compute-1 ceph-mon[80009]: pgmap v248: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 938 B/s wr, 2 op/s
Nov 24 09:35:30 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:35:30 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:35:30 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:35:30 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:35:30.700 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:35:31 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:35:31 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea80004060 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:35:31 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:35:31 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea84003ed0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:35:31 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:35:31 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea980048e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:35:32 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:35:32 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:35:32 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:35:32.333 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:35:32 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:35:32 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:35:32 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:35:32.704 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:35:32 compute-1 ceph-mon[80009]: pgmap v249: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 938 B/s wr, 2 op/s
Nov 24 09:35:32 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:35:33 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:35:33 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea68001fc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:35:33 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:35:33 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea80004060 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:35:33 compute-1 sshd-session[114858]: Connection closed by 192.168.122.30 port 55520
Nov 24 09:35:33 compute-1 sshd-session[114855]: pam_unix(sshd:session): session closed for user zuul
Nov 24 09:35:33 compute-1 systemd[1]: session-47.scope: Deactivated successfully.
Nov 24 09:35:33 compute-1 systemd[1]: session-47.scope: Consumed 21.621s CPU time.
Nov 24 09:35:33 compute-1 systemd-logind[823]: Session 47 logged out. Waiting for processes to exit.
Nov 24 09:35:33 compute-1 systemd-logind[823]: Removed session 47.
Nov 24 09:35:33 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:35:33 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea84004070 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:35:34 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:35:34 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:35:34 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:35:34.335 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:35:34 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:35:34 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:35:34 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:35:34.708 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:35:34 compute-1 ceph-mon[80009]: pgmap v250: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Nov 24 09:35:35 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:35:35 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea980048e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:35:35 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:35:35 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea68001fc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:35:35 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:35:35 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea80004060 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:35:36 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:35:36 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:35:36 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:35:36.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:35:36 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:35:36 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 09:35:36 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:35:36.711 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 09:35:36 compute-1 ceph-mon[80009]: pgmap v251: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Nov 24 09:35:37 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:35:37 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea84004070 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:35:37 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:35:37 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea980048e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:35:37 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:35:37 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:35:37 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea68001fc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:35:38 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:35:38 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:35:38 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:35:38.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:35:38 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:35:38 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:35:38 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:35:38.715 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:35:38 compute-1 ceph-mon[80009]: pgmap v252: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 85 B/s wr, 0 op/s
Nov 24 09:35:38 compute-1 sshd-session[121033]: Accepted publickey for zuul from 192.168.122.30 port 34980 ssh2: ECDSA SHA256:MeSde0OmmlmFVnLWx/OKNxgeUUFhxUB3MA0eUyH5QEE
Nov 24 09:35:38 compute-1 systemd-logind[823]: New session 48 of user zuul.
Nov 24 09:35:38 compute-1 systemd[1]: Started Session 48 of User zuul.
Nov 24 09:35:38 compute-1 sshd-session[121033]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 24 09:35:39 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:35:39 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea80004060 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:35:39 compute-1 sudo[121186]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rrecxypwxrpahcqirqocnrotuqfdgnlh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976938.9905605-27-194750871039208/AnsiballZ_file.py'
Nov 24 09:35:39 compute-1 sudo[121186]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:35:39 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:35:39 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea84004070 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:35:39 compute-1 python3.9[121188]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:35:39 compute-1 sudo[121186]: pam_unix(sudo:session): session closed for user root
Nov 24 09:35:39 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:35:39 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea980048e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:35:40 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:35:40 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 09:35:40 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:35:40.343 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 09:35:40 compute-1 sudo[121339]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bveoomnaffgpxuigwbspapnujffivcwy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976939.957228-63-105236783014095/AnsiballZ_stat.py'
Nov 24 09:35:40 compute-1 sudo[121339]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:35:40 compute-1 sudo[121342]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 09:35:40 compute-1 sudo[121342]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:35:40 compute-1 sudo[121342]: pam_unix(sudo:session): session closed for user root
Nov 24 09:35:40 compute-1 python3.9[121341]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/ceph/ceph.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 09:35:40 compute-1 sudo[121339]: pam_unix(sudo:session): session closed for user root
Nov 24 09:35:40 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:35:40 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 09:35:40 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:35:40.717 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 09:35:40 compute-1 ceph-mon[80009]: pgmap v253: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Nov 24 09:35:41 compute-1 sudo[121487]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dpqipwhshrrydamuxjwwaeiesdscklhe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976939.957228-63-105236783014095/AnsiballZ_copy.py'
Nov 24 09:35:41 compute-1 sudo[121487]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:35:41 compute-1 python3.9[121489]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/ceph/ceph.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1763976939.957228-63-105236783014095/.source.conf _original_basename=ceph.conf follow=False checksum=35be1475912cb94f172c67eb64af3d903820f5fe backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:35:41 compute-1 sudo[121487]: pam_unix(sudo:session): session closed for user root
Nov 24 09:35:41 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:35:41 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea68001fc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:35:41 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:35:41 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea80004060 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:35:41 compute-1 sudo[121639]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ckmybhowebphxkhtipbqphripanvihlq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976941.3996968-63-262693485155571/AnsiballZ_stat.py'
Nov 24 09:35:41 compute-1 sudo[121639]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:35:41 compute-1 python3.9[121641]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/ceph/ceph.client.openstack.keyring follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 09:35:41 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:35:41 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea84004070 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:35:41 compute-1 sudo[121639]: pam_unix(sudo:session): session closed for user root
Nov 24 09:35:42 compute-1 sudo[121763]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cowouqjxrochepkjlmsctfiswxjtfwct ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976941.3996968-63-262693485155571/AnsiballZ_copy.py'
Nov 24 09:35:42 compute-1 sudo[121763]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:35:42 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:35:42 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:35:42 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:35:42.345 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:35:42 compute-1 python3.9[121765]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/ceph/ceph.client.openstack.keyring mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1763976941.3996968-63-262693485155571/.source.keyring _original_basename=ceph.client.openstack.keyring follow=False checksum=5b68b38eb199b40419da711d3119a1cd74c89fee backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:35:42 compute-1 sudo[121763]: pam_unix(sudo:session): session closed for user root
Nov 24 09:35:42 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:35:42 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:35:42 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:35:42.721 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:35:42 compute-1 ceph-mon[80009]: pgmap v254: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Nov 24 09:35:42 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:35:43 compute-1 sshd-session[121036]: Connection closed by 192.168.122.30 port 34980
Nov 24 09:35:43 compute-1 sshd-session[121033]: pam_unix(sshd:session): session closed for user zuul
Nov 24 09:35:43 compute-1 systemd[1]: session-48.scope: Deactivated successfully.
Nov 24 09:35:43 compute-1 systemd[1]: session-48.scope: Consumed 2.539s CPU time.
Nov 24 09:35:43 compute-1 systemd-logind[823]: Session 48 logged out. Waiting for processes to exit.
Nov 24 09:35:43 compute-1 systemd-logind[823]: Removed session 48.
Nov 24 09:35:43 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:35:43 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea980048e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:35:43 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:35:43 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea68001fc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:35:43 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:35:43 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea80004060 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:35:44 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:35:44 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:35:44 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:35:44.348 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:35:44 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:35:44 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:35:44 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:35:44.724 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:35:44 compute-1 ceph-mon[80009]: pgmap v255: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:35:45 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:35:45 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea84004070 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:35:45 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 09:35:45 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:35:45 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:35:45 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea980048e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:35:45 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:35:45 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:35:45 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea980048e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:35:46 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:35:46 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:35:46 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:35:46.351 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:35:46 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:35:46 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:35:46 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:35:46.729 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:35:46 compute-1 ceph-mon[80009]: pgmap v256: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:35:47 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:35:47 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea80004060 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:35:47 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:35:47 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7feaa4001110 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:35:47 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:35:47 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:35:47 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea70001230 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:35:48 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:35:48 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 09:35:48 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:35:48.355 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 09:35:48 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:35:48 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:35:48 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:35:48.733 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:35:48 compute-1 ceph-mon[80009]: pgmap v257: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Nov 24 09:35:48 compute-1 sshd-session[121795]: Accepted publickey for zuul from 192.168.122.30 port 36428 ssh2: ECDSA SHA256:MeSde0OmmlmFVnLWx/OKNxgeUUFhxUB3MA0eUyH5QEE
Nov 24 09:35:48 compute-1 systemd-logind[823]: New session 49 of user zuul.
Nov 24 09:35:48 compute-1 systemd[1]: Started Session 49 of User zuul.
Nov 24 09:35:48 compute-1 sshd-session[121795]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 24 09:35:49 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:35:49 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea980048e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:35:49 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:35:49 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea80004060 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:35:49 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:35:49 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7feaa4001110 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:35:49 compute-1 python3.9[121948]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 24 09:35:50 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:35:50 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 09:35:50 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:35:50.358 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 09:35:50 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:35:50 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:35:50 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:35:50.737 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:35:50 compute-1 ceph-mon[80009]: pgmap v258: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:35:51 compute-1 sudo[122078]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 09:35:51 compute-1 sudo[122126]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xecghnscdktfcmxbmqawnlddmnebpfxl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976950.601441-63-191553636425057/AnsiballZ_file.py'
Nov 24 09:35:51 compute-1 sudo[122078]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:35:51 compute-1 sudo[122126]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:35:51 compute-1 sudo[122078]: pam_unix(sudo:session): session closed for user root
Nov 24 09:35:51 compute-1 sudo[122131]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Nov 24 09:35:51 compute-1 sudo[122131]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:35:51 compute-1 python3.9[122130]: ansible-ansible.builtin.file Invoked with group=zuul mode=0750 owner=zuul path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 09:35:51 compute-1 sudo[122126]: pam_unix(sudo:session): session closed for user root
Nov 24 09:35:51 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:35:51 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7feaa4008e20 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:35:51 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:35:51 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea980048e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:35:51 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 24 09:35:51 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 24 09:35:51 compute-1 sudo[122131]: pam_unix(sudo:session): session closed for user root
Nov 24 09:35:51 compute-1 sudo[122336]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pvrdzkybzugvxoxkqdkmllehgnhiacla ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976951.4027267-63-33317971309756/AnsiballZ_file.py'
Nov 24 09:35:51 compute-1 sudo[122336]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:35:51 compute-1 python3.9[122338]: ansible-ansible.builtin.file Invoked with group=openvswitch owner=openvswitch path=/var/lib/openvswitch/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 24 09:35:51 compute-1 sudo[122336]: pam_unix(sudo:session): session closed for user root
Nov 24 09:35:51 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:35:51 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea980048e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:35:52 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 24 09:35:52 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 09:35:52 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Nov 24 09:35:52 compute-1 ceph-mon[80009]: log_channel(audit) log [INF] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 09:35:52 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Nov 24 09:35:52 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Nov 24 09:35:52 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Nov 24 09:35:52 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 09:35:52 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Nov 24 09:35:52 compute-1 ceph-mon[80009]: log_channel(audit) log [INF] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 09:35:52 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 24 09:35:52 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 09:35:52 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:35:52 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 09:35:52 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:35:52.361 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 09:35:52 compute-1 ceph-mon[80009]: pgmap v259: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Nov 24 09:35:52 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:35:52 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:35:52 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 09:35:52 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 09:35:52 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:35:52 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:35:52 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 09:35:52 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 09:35:52 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 09:35:52 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:35:52 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:35:52 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:35:52.739 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:35:52 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:35:52 compute-1 python3.9[122489]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 24 09:35:53 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:35:53 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea78000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:35:53 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:35:53 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7feaa4008e20 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:35:53 compute-1 sudo[122639]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jgqboabphwvioxmvagtehnbowiqnamnu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976953.277773-132-245935859158139/AnsiballZ_seboolean.py'
Nov 24 09:35:53 compute-1 sudo[122639]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:35:53 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:35:53 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea80004060 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:35:53 compute-1 python3.9[122641]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Nov 24 09:35:54 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:35:54 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 09:35:54 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:35:54.364 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 09:35:54 compute-1 ceph-mon[80009]: pgmap v260: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:35:54 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:35:54 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:35:54 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:35:54.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:35:55 compute-1 sudo[122639]: pam_unix(sudo:session): session closed for user root
Nov 24 09:35:55 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:35:55 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea980048e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:35:55 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:35:55 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea780016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:35:55 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:35:55 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7feaa4008e20 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:35:56 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:35:56 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 09:35:56 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:35:56.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 09:35:56 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:35:56 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:35:56 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:35:56.746 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:35:56 compute-1 sudo[122797]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oqruwahxuwibgkwuwuipjdnusagiumcu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976956.6223931-162-27159221836670/AnsiballZ_setup.py'
Nov 24 09:35:56 compute-1 dbus-broker-launch[809]: avc:  op=load_policy lsm=selinux seqno=11 res=1
Nov 24 09:35:56 compute-1 sudo[122797]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:35:57 compute-1 ceph-mon[80009]: pgmap v261: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:35:57 compute-1 python3.9[122799]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 24 09:35:57 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:35:57 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea80004060 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:35:57 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:35:57 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea980048e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:35:57 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 24 09:35:57 compute-1 sudo[122797]: pam_unix(sudo:session): session closed for user root
Nov 24 09:35:57 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 24 09:35:57 compute-1 sudo[122808]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 24 09:35:57 compute-1 sudo[122808]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:35:57 compute-1 sudo[122808]: pam_unix(sudo:session): session closed for user root
Nov 24 09:35:57 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:35:57 compute-1 sudo[122907]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vkevrfoabdmdszpwbskxcknpkbjhwdph ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976956.6223931-162-27159221836670/AnsiballZ_dnf.py'
Nov 24 09:35:57 compute-1 sudo[122907]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:35:57 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:35:57 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea780016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:35:58 compute-1 python3.9[122909]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 24 09:35:58 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:35:58 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:35:58 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:35:58.369 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:35:58 compute-1 ceph-mon[80009]: pgmap v262: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Nov 24 09:35:58 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:35:58 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:35:58 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:35:58 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:35:58 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:35:58.750 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:35:59 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:35:59 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea780016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:35:59 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:35:59 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea80004060 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:35:59 compute-1 sudo[122907]: pam_unix(sudo:session): session closed for user root
Nov 24 09:35:59 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:35:59 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea80004060 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:36:00 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:36:00 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:36:00 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:36:00.372 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:36:00 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 09:36:00 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:36:00 compute-1 sudo[123061]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qlxebjvcjpafdbajqaddndriznvcvogz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976959.7853582-198-251645290226021/AnsiballZ_systemd.py'
Nov 24 09:36:00 compute-1 sudo[123061]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:36:00 compute-1 ceph-mon[80009]: pgmap v263: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:36:00 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:36:00 compute-1 sudo[123064]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 09:36:00 compute-1 sudo[123064]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:36:00 compute-1 sudo[123064]: pam_unix(sudo:session): session closed for user root
Nov 24 09:36:00 compute-1 python3.9[123063]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 24 09:36:00 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:36:00 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:36:00 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:36:00.753 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:36:00 compute-1 sudo[123061]: pam_unix(sudo:session): session closed for user root
Nov 24 09:36:01 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:36:01 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea780016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:36:01 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:36:01 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea780016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:36:01 compute-1 sudo[123241]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-prmbksgzqzdwjrsrustdolrmkdnolscl ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1763976961.0296805-222-179969934123817/AnsiballZ_edpm_nftables_snippet.py'
Nov 24 09:36:01 compute-1 sudo[123241]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:36:01 compute-1 python3[123243]: ansible-osp.edpm.edpm_nftables_snippet Invoked with content=- rule_name: 118 neutron vxlan networks
                                             rule:
                                               proto: udp
                                               dport: 4789
                                           - rule_name: 119 neutron geneve networks
                                             rule:
                                               proto: udp
                                               dport: 6081
                                               state: ["UNTRACKED"]
                                           - rule_name: 120 neutron geneve networks no conntrack
                                             rule:
                                               proto: udp
                                               dport: 6081
                                               table: raw
                                               chain: OUTPUT
                                               jump: NOTRACK
                                               action: append
                                               state: []
                                           - rule_name: 121 neutron geneve networks no conntrack
                                             rule:
                                               proto: udp
                                               dport: 6081
                                               table: raw
                                               chain: PREROUTING
                                               jump: NOTRACK
                                               action: append
                                               state: []
                                            dest=/var/lib/edpm-config/firewall/ovn.yaml state=present
Nov 24 09:36:01 compute-1 sudo[123241]: pam_unix(sudo:session): session closed for user root
Nov 24 09:36:01 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:36:01 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea80004060 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:36:02 compute-1 sudo[123394]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pffzkxdypqxposktkytbqxrohvsoyzjg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976962.0559378-249-250638170281641/AnsiballZ_file.py'
Nov 24 09:36:02 compute-1 sudo[123394]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:36:02 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:36:02 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:36:02 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:36:02.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:36:02 compute-1 python3.9[123396]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:36:02 compute-1 sudo[123394]: pam_unix(sudo:session): session closed for user root
Nov 24 09:36:02 compute-1 ceph-mon[80009]: pgmap v264: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Nov 24 09:36:02 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:36:02 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:36:02 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:36:02.756 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:36:02 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:36:03 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:36:03 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea980048e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:36:03 compute-1 sudo[123546]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fjgzcbjfeljqkvhzuspmxsttxtldnshg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976963.0110064-273-275734482883875/AnsiballZ_stat.py'
Nov 24 09:36:03 compute-1 sudo[123546]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:36:03 compute-1 kernel: ganesha.nfsd[121791]: segfault at 50 ip 00007feb4d57f32e sp 00007feb1d7f9210 error 4 in libntirpc.so.5.8[7feb4d564000+2c000] likely on CPU 4 (core 0, socket 4)
Nov 24 09:36:03 compute-1 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Nov 24 09:36:03 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:36:03 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea980048e0 fd 38 proxy ignored for local
Nov 24 09:36:03 compute-1 systemd[1]: Started Process Core Dump (PID 123549/UID 0).
Nov 24 09:36:03 compute-1 python3.9[123548]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 09:36:03 compute-1 sudo[123546]: pam_unix(sudo:session): session closed for user root
Nov 24 09:36:03 compute-1 sudo[123627]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mihqdkucsdbtyremppaikqpilkewmnfs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976963.0110064-273-275734482883875/AnsiballZ_file.py'
Nov 24 09:36:03 compute-1 sudo[123627]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:36:04 compute-1 python3.9[123629]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:36:04 compute-1 sudo[123627]: pam_unix(sudo:session): session closed for user root
Nov 24 09:36:04 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:36:04 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:36:04 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:36:04.377 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:36:04 compute-1 ceph-mon[80009]: pgmap v265: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:36:04 compute-1 systemd-coredump[123550]: Process 101625 (ganesha.nfsd) of user 0 dumped core.
                                                    
                                                    Stack trace of thread 63:
                                                    #0  0x00007feb4d57f32e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)
                                                    ELF object binary architecture: AMD x86-64
Nov 24 09:36:04 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:36:04 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:36:04 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:36:04.760 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:36:04 compute-1 sudo[123780]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dhrbcioqadezvapgqfruxhprgffmtrbo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976964.5351443-309-177521094321827/AnsiballZ_stat.py'
Nov 24 09:36:04 compute-1 sudo[123780]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:36:04 compute-1 systemd[1]: systemd-coredump@2-123549-0.service: Deactivated successfully.
Nov 24 09:36:04 compute-1 systemd[1]: systemd-coredump@2-123549-0.service: Consumed 1.260s CPU time.
Nov 24 09:36:04 compute-1 podman[123786]: 2025-11-24 09:36:04.898629403 +0000 UTC m=+0.032519672 container died 89e9e06f9211bc7e046f65662fa13ddb2cb8e39af97781a83cf33fe42fb7a133 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 24 09:36:04 compute-1 systemd[1]: var-lib-containers-storage-overlay-b08c191ca12641a1e68bb6e456ff7064fde48d688012042f510575c5e2cb0c34-merged.mount: Deactivated successfully.
Nov 24 09:36:04 compute-1 podman[123786]: 2025-11-24 09:36:04.946860902 +0000 UTC m=+0.080751161 container remove 89e9e06f9211bc7e046f65662fa13ddb2cb8e39af97781a83cf33fe42fb7a133 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, ceph=True, CEPH_REF=squid)
Nov 24 09:36:04 compute-1 systemd[1]: ceph-84a084c3-61a7-5de7-8207-1f88efa59a64@nfs.cephfs.0.0.compute-1.vvoanr.service: Main process exited, code=exited, status=139/n/a
Nov 24 09:36:05 compute-1 python3.9[123782]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 09:36:05 compute-1 sudo[123780]: pam_unix(sudo:session): session closed for user root
Nov 24 09:36:05 compute-1 systemd[1]: ceph-84a084c3-61a7-5de7-8207-1f88efa59a64@nfs.cephfs.0.0.compute-1.vvoanr.service: Failed with result 'exit-code'.
Nov 24 09:36:05 compute-1 systemd[1]: ceph-84a084c3-61a7-5de7-8207-1f88efa59a64@nfs.cephfs.0.0.compute-1.vvoanr.service: Consumed 1.576s CPU time.
Nov 24 09:36:05 compute-1 sudo[123904]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-teoxopgtaencjisvfhzapckhlegoymgr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976964.5351443-309-177521094321827/AnsiballZ_file.py'
Nov 24 09:36:05 compute-1 sudo[123904]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:36:05 compute-1 python3.9[123906]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.ordhq9gz recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:36:05 compute-1 sudo[123904]: pam_unix(sudo:session): session closed for user root
Nov 24 09:36:06 compute-1 sudo[124057]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-scxxkmmrlvwvzzfndjumqcxqsmnopwhh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976965.8217373-345-133939205410271/AnsiballZ_stat.py'
Nov 24 09:36:06 compute-1 sudo[124057]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:36:06 compute-1 python3.9[124059]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 09:36:06 compute-1 sudo[124057]: pam_unix(sudo:session): session closed for user root
Nov 24 09:36:06 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:36:06 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:36:06 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:36:06.380 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:36:06 compute-1 ceph-mon[80009]: pgmap v266: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:36:06 compute-1 sudo[124135]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dmyxyfqtzqopuuvalqhhbebklfbylehw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976965.8217373-345-133939205410271/AnsiballZ_file.py'
Nov 24 09:36:06 compute-1 sudo[124135]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:36:06 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:36:06 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:36:06 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:36:06.763 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:36:06 compute-1 python3.9[124137]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:36:06 compute-1 sudo[124135]: pam_unix(sudo:session): session closed for user root
Nov 24 09:36:07 compute-1 sudo[124287]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-txhmmfqpdsupiuqhhfvzwiwjiwfxwrfs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976967.1351364-384-228924026320668/AnsiballZ_command.py'
Nov 24 09:36:07 compute-1 sudo[124287]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:36:07 compute-1 python3.9[124289]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 09:36:07 compute-1 sudo[124287]: pam_unix(sudo:session): session closed for user root
Nov 24 09:36:07 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:36:08 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:36:08 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:36:08 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:36:08.382 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:36:08 compute-1 sudo[124441]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-inwdgwiqxddvfnvgcjrrbypetjaziyzd ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1763976968.0248332-408-102744759964760/AnsiballZ_edpm_nftables_from_files.py'
Nov 24 09:36:08 compute-1 sudo[124441]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:36:08 compute-1 ceph-mon[80009]: pgmap v267: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Nov 24 09:36:08 compute-1 python3[124443]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Nov 24 09:36:08 compute-1 sudo[124441]: pam_unix(sudo:session): session closed for user root
Nov 24 09:36:08 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:36:08 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:36:08 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:36:08.767 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:36:09 compute-1 sudo[124593]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-doglnttavrfyliezsiinwodblfeijbca ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976969.0022404-432-163909908000755/AnsiballZ_stat.py'
Nov 24 09:36:09 compute-1 sudo[124593]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:36:09 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-haproxy-nfs-cephfs-compute-1-rsdpvy[85975]: [WARNING] 327/093609 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Nov 24 09:36:09 compute-1 python3.9[124595]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 09:36:09 compute-1 sudo[124593]: pam_unix(sudo:session): session closed for user root
Nov 24 09:36:10 compute-1 sudo[124719]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dunhpjwymmyloolbuddyfesblmkosbxk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976969.0022404-432-163909908000755/AnsiballZ_copy.py'
Nov 24 09:36:10 compute-1 sudo[124719]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:36:10 compute-1 python3.9[124721]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763976969.0022404-432-163909908000755/.source.nft follow=False _original_basename=jump-chain.j2 checksum=81c2fc96c23335ffe374f9b064e885d5d971ddf9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:36:10 compute-1 sudo[124719]: pam_unix(sudo:session): session closed for user root
Nov 24 09:36:10 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:36:10 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:36:10 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:36:10.385 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:36:10 compute-1 ceph-mon[80009]: pgmap v268: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:36:10 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:36:10 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:36:10 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:36:10.770 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:36:10 compute-1 sudo[124871]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sqtimknqngvdkdomthushrqcsvtpygtd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976970.5721831-477-220323460297226/AnsiballZ_stat.py'
Nov 24 09:36:10 compute-1 sudo[124871]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:36:11 compute-1 python3.9[124873]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 09:36:11 compute-1 sudo[124871]: pam_unix(sudo:session): session closed for user root
Nov 24 09:36:11 compute-1 sudo[124996]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hqakupbuatbpguyhyjpecscnncachxmm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976970.5721831-477-220323460297226/AnsiballZ_copy.py'
Nov 24 09:36:11 compute-1 sudo[124996]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:36:11 compute-1 python3.9[124998]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763976970.5721831-477-220323460297226/.source.nft follow=False _original_basename=jump-chain.j2 checksum=81c2fc96c23335ffe374f9b064e885d5d971ddf9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:36:11 compute-1 sudo[124996]: pam_unix(sudo:session): session closed for user root
Nov 24 09:36:12 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:36:12 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:36:12 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:36:12.388 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:36:12 compute-1 sudo[125149]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gpyxweuzqripapekcowyzzxqfkmgkego ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976972.146237-522-46863788175343/AnsiballZ_stat.py'
Nov 24 09:36:12 compute-1 sudo[125149]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:36:12 compute-1 ceph-mon[80009]: pgmap v269: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Nov 24 09:36:12 compute-1 python3.9[125151]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 09:36:12 compute-1 sudo[125149]: pam_unix(sudo:session): session closed for user root
Nov 24 09:36:12 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:36:12 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:36:12 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:36:12.773 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:36:12 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:36:13 compute-1 sudo[125274]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sfviogdutaegamvaerrppqxuwsgypptc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976972.146237-522-46863788175343/AnsiballZ_copy.py'
Nov 24 09:36:13 compute-1 sudo[125274]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:36:13 compute-1 python3.9[125276]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763976972.146237-522-46863788175343/.source.nft follow=False _original_basename=flush-chain.j2 checksum=4d3ffec49c8eb1a9b80d2f1e8cd64070063a87b4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:36:13 compute-1 sudo[125274]: pam_unix(sudo:session): session closed for user root
Nov 24 09:36:13 compute-1 sudo[125426]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rzwrhnwmkulwfljkfaahhbvujqrlowrs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976973.492249-567-138280405723606/AnsiballZ_stat.py'
Nov 24 09:36:13 compute-1 sudo[125426]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:36:13 compute-1 python3.9[125428]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 09:36:14 compute-1 sudo[125426]: pam_unix(sudo:session): session closed for user root
Nov 24 09:36:14 compute-1 sudo[125552]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-luufhcarourrkgklikiyfdzannovossb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976973.492249-567-138280405723606/AnsiballZ_copy.py'
Nov 24 09:36:14 compute-1 sudo[125552]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:36:14 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:36:14 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:36:14 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:36:14.390 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:36:14 compute-1 python3.9[125554]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763976973.492249-567-138280405723606/.source.nft follow=False _original_basename=chains.j2 checksum=298ada419730ec15df17ded0cc50c97a4014a591 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:36:14 compute-1 sudo[125552]: pam_unix(sudo:session): session closed for user root
Nov 24 09:36:14 compute-1 ceph-mon[80009]: pgmap v270: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:36:14 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:36:14 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:36:14 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:36:14.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:36:15 compute-1 systemd[1]: ceph-84a084c3-61a7-5de7-8207-1f88efa59a64@nfs.cephfs.0.0.compute-1.vvoanr.service: Scheduled restart job, restart counter is at 3.
Nov 24 09:36:15 compute-1 systemd[1]: Stopped Ceph nfs.cephfs.0.0.compute-1.vvoanr for 84a084c3-61a7-5de7-8207-1f88efa59a64.
Nov 24 09:36:15 compute-1 systemd[1]: ceph-84a084c3-61a7-5de7-8207-1f88efa59a64@nfs.cephfs.0.0.compute-1.vvoanr.service: Consumed 1.576s CPU time.
Nov 24 09:36:15 compute-1 systemd[1]: Starting Ceph nfs.cephfs.0.0.compute-1.vvoanr for 84a084c3-61a7-5de7-8207-1f88efa59a64...
Nov 24 09:36:15 compute-1 sudo[125735]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-srbcnqybfujflpxwgjixiqhrdmjveack ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976975.0323694-612-222204463644646/AnsiballZ_stat.py'
Nov 24 09:36:15 compute-1 sudo[125735]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:36:15 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 09:36:15 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:36:15 compute-1 podman[125753]: 2025-11-24 09:36:15.476946809 +0000 UTC m=+0.045802440 container create 6cabd88b91f4fd1437b7ff52ddca5cb05345de4c636d0778a8357b125db16eaf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 09:36:15 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e32851b5a2e93b9365e0cf37abccf059ff991f45f064509199c8a8139824910b/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Nov 24 09:36:15 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e32851b5a2e93b9365e0cf37abccf059ff991f45f064509199c8a8139824910b/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 09:36:15 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e32851b5a2e93b9365e0cf37abccf059ff991f45f064509199c8a8139824910b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 09:36:15 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e32851b5a2e93b9365e0cf37abccf059ff991f45f064509199c8a8139824910b/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.0.0.compute-1.vvoanr-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 09:36:15 compute-1 podman[125753]: 2025-11-24 09:36:15.53821913 +0000 UTC m=+0.107074761 container init 6cabd88b91f4fd1437b7ff52ddca5cb05345de4c636d0778a8357b125db16eaf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 09:36:15 compute-1 podman[125753]: 2025-11-24 09:36:15.543047827 +0000 UTC m=+0.111903458 container start 6cabd88b91f4fd1437b7ff52ddca5cb05345de4c636d0778a8357b125db16eaf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325)
Nov 24 09:36:15 compute-1 bash[125753]: 6cabd88b91f4fd1437b7ff52ddca5cb05345de4c636d0778a8357b125db16eaf
Nov 24 09:36:15 compute-1 podman[125753]: 2025-11-24 09:36:15.454436164 +0000 UTC m=+0.023291825 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 09:36:15 compute-1 systemd[1]: Started Ceph nfs.cephfs.0.0.compute-1.vvoanr for 84a084c3-61a7-5de7-8207-1f88efa59a64.
Nov 24 09:36:15 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:36:15 : epoch 6924270f : compute-1 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Nov 24 09:36:15 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:36:15 : epoch 6924270f : compute-1 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Nov 24 09:36:15 compute-1 python3.9[125743]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 09:36:15 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:36:15 : epoch 6924270f : compute-1 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Nov 24 09:36:15 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:36:15 : epoch 6924270f : compute-1 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Nov 24 09:36:15 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:36:15 : epoch 6924270f : compute-1 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Nov 24 09:36:15 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:36:15 : epoch 6924270f : compute-1 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Nov 24 09:36:15 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:36:15 : epoch 6924270f : compute-1 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Nov 24 09:36:15 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:36:15 : epoch 6924270f : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 09:36:15 compute-1 sudo[125735]: pam_unix(sudo:session): session closed for user root
Nov 24 09:36:15 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:36:15 compute-1 sudo[125933]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ynkkeukujjovoeqvptdnyngnuksekxqp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976975.0323694-612-222204463644646/AnsiballZ_copy.py'
Nov 24 09:36:15 compute-1 sudo[125933]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:36:16 compute-1 python3.9[125935]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763976975.0323694-612-222204463644646/.source.nft follow=False _original_basename=ruleset.j2 checksum=bdba38546f86123f1927359d89789bd211aba99d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:36:16 compute-1 sudo[125933]: pam_unix(sudo:session): session closed for user root
Nov 24 09:36:16 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:36:16 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:36:16 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:36:16.393 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:36:16 compute-1 ceph-mon[80009]: pgmap v271: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:36:16 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:36:16 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:36:16 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:36:16.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:36:16 compute-1 sudo[126085]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bjnotfhutossgddubemygiitorahfqqw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976976.629841-657-62011011997826/AnsiballZ_file.py'
Nov 24 09:36:16 compute-1 sudo[126085]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:36:17 compute-1 python3.9[126087]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:36:17 compute-1 sudo[126085]: pam_unix(sudo:session): session closed for user root
Nov 24 09:36:17 compute-1 sudo[126237]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-syhhzdwcfnynzmhjggaotszeymbkajnn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976977.5045083-681-132844027080563/AnsiballZ_command.py'
Nov 24 09:36:17 compute-1 sudo[126237]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:36:17 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:36:17 compute-1 python3.9[126239]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 09:36:18 compute-1 sudo[126237]: pam_unix(sudo:session): session closed for user root
Nov 24 09:36:18 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:36:18 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:36:18 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:36:18.412 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:36:18 compute-1 ceph-mon[80009]: pgmap v272: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 85 B/s wr, 0 op/s
Nov 24 09:36:18 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:36:18 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:36:18 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:36:18.782 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:36:18 compute-1 sudo[126393]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kumdjercjajylnijbvqktcykcojxuefg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976978.4436505-705-124955557995219/AnsiballZ_blockinfile.py'
Nov 24 09:36:18 compute-1 sudo[126393]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:36:19 compute-1 python3.9[126395]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                             include "/etc/nftables/edpm-chains.nft"
                                             include "/etc/nftables/edpm-rules.nft"
                                             include "/etc/nftables/edpm-jumps.nft"
                                              path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:36:19 compute-1 sudo[126393]: pam_unix(sudo:session): session closed for user root
Nov 24 09:36:19 compute-1 sudo[126545]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gumiboehjjlzklsfmslwiaymloppejsu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976979.5031285-732-93242319719431/AnsiballZ_command.py'
Nov 24 09:36:19 compute-1 sudo[126545]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:36:19 compute-1 python3.9[126547]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 09:36:19 compute-1 sudo[126545]: pam_unix(sudo:session): session closed for user root
Nov 24 09:36:20 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:36:20 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:36:20 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:36:20.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:36:20 compute-1 sudo[126653]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 09:36:20 compute-1 sudo[126653]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:36:20 compute-1 sudo[126653]: pam_unix(sudo:session): session closed for user root
Nov 24 09:36:20 compute-1 ceph-mon[80009]: pgmap v273: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Nov 24 09:36:20 compute-1 sudo[126724]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-twxezctkxjraoiqykpusmgliepicpizt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976980.432158-757-75043939522781/AnsiballZ_stat.py'
Nov 24 09:36:20 compute-1 sudo[126724]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:36:20 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:36:20 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 09:36:20 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:36:20.785 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 09:36:20 compute-1 python3.9[126726]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 09:36:20 compute-1 sudo[126724]: pam_unix(sudo:session): session closed for user root
Nov 24 09:36:21 compute-1 sudo[126878]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xuvsojwbugztkutjnkjnefkbhsjybkdt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976981.2903345-780-72909572137333/AnsiballZ_command.py'
Nov 24 09:36:21 compute-1 sudo[126878]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:36:21 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:36:21 : epoch 6924270f : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 09:36:21 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:36:21 : epoch 6924270f : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 09:36:21 compute-1 python3.9[126880]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 09:36:21 compute-1 sudo[126878]: pam_unix(sudo:session): session closed for user root
Nov 24 09:36:22 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:36:22 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:36:22 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:36:22.418 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:36:22 compute-1 sudo[127034]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mnyerfiftxssiohyghfhaytnwzuvmxjf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976982.1436715-804-9851473352325/AnsiballZ_file.py'
Nov 24 09:36:22 compute-1 sudo[127034]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:36:22 compute-1 ceph-mon[80009]: pgmap v274: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 597 B/s wr, 2 op/s
Nov 24 09:36:22 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:36:22 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:36:22 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:36:22.790 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:36:22 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:36:22 compute-1 python3.9[127036]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:36:22 compute-1 sudo[127034]: pam_unix(sudo:session): session closed for user root
Nov 24 09:36:24 compute-1 python3.9[127187]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'machine'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 24 09:36:24 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:36:24 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 09:36:24 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:36:24.422 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 09:36:24 compute-1 ceph-mon[80009]: pgmap v275: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 597 B/s wr, 1 op/s
Nov 24 09:36:24 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:36:24 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 09:36:24 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:36:24.794 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 09:36:25 compute-1 sudo[127338]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kxbrcczbrekfbnysjtebjtvogsdsscnx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976984.9889088-924-138978410229940/AnsiballZ_command.py'
Nov 24 09:36:25 compute-1 sudo[127338]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:36:25 compute-1 python3.9[127340]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl set open . external_ids:hostname=compute-1.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings="datacentre:3e:0a:93:45:69:49" external_ids:ovn-encap-ip=172.19.0.101 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch 
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 09:36:25 compute-1 ovs-vsctl[127341]: ovs|00001|vsctl|INFO|Called as ovs-vsctl set open . external_ids:hostname=compute-1.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings=datacentre:3e:0a:93:45:69:49 external_ids:ovn-encap-ip=172.19.0.101 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch
Nov 24 09:36:25 compute-1 sudo[127338]: pam_unix(sudo:session): session closed for user root
Nov 24 09:36:26 compute-1 sudo[127492]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fnnwlzmanscoignwpmuaficiqzhopdur ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976985.9155076-951-69056861153562/AnsiballZ_command.py'
Nov 24 09:36:26 compute-1 sudo[127492]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:36:26 compute-1 python3.9[127494]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail
                                             ovs-vsctl show | grep -q "Manager"
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 09:36:26 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:36:26 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:36:26 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:36:26.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:36:26 compute-1 sudo[127492]: pam_unix(sudo:session): session closed for user root
Nov 24 09:36:26 compute-1 ceph-mon[80009]: pgmap v276: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 597 B/s wr, 1 op/s
Nov 24 09:36:26 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:36:26 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:36:26 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:36:26.797 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:36:27 compute-1 sudo[127647]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yjlnevtzzciizpbwqjnxhafzkgtjfiva ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976986.9063604-975-242039273749596/AnsiballZ_command.py'
Nov 24 09:36:27 compute-1 sudo[127647]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:36:27 compute-1 python3.9[127649]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl --timeout=5 --id=@manager -- create Manager target=\"ptcp:********@manager
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 09:36:27 compute-1 ovs-vsctl[127650]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --timeout=5 --id=@manager -- create Manager "target=\"ptcp:6640:127.0.0.1\"" -- add Open_vSwitch . manager_options @manager
Nov 24 09:36:27 compute-1 sudo[127647]: pam_unix(sudo:session): session closed for user root
Nov 24 09:36:27 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:36:27 : epoch 6924270f : compute-1 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Nov 24 09:36:27 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:36:27 : epoch 6924270f : compute-1 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Nov 24 09:36:27 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:36:27 : epoch 6924270f : compute-1 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Nov 24 09:36:27 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:36:27 : epoch 6924270f : compute-1 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Nov 24 09:36:27 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:36:27 : epoch 6924270f : compute-1 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Nov 24 09:36:27 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:36:27 : epoch 6924270f : compute-1 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Nov 24 09:36:27 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:36:27 : epoch 6924270f : compute-1 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Nov 24 09:36:27 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:36:27 : epoch 6924270f : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 24 09:36:27 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:36:27 : epoch 6924270f : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 24 09:36:27 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:36:27 : epoch 6924270f : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 24 09:36:27 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:36:27 : epoch 6924270f : compute-1 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Nov 24 09:36:27 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:36:27 : epoch 6924270f : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 24 09:36:27 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:36:27 : epoch 6924270f : compute-1 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Nov 24 09:36:27 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:36:27 : epoch 6924270f : compute-1 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Nov 24 09:36:27 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:36:27 : epoch 6924270f : compute-1 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Nov 24 09:36:27 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:36:27 : epoch 6924270f : compute-1 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Nov 24 09:36:27 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:36:27 : epoch 6924270f : compute-1 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Nov 24 09:36:27 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:36:27 : epoch 6924270f : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Nov 24 09:36:27 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:36:27 : epoch 6924270f : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Nov 24 09:36:27 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:36:27 : epoch 6924270f : compute-1 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Nov 24 09:36:27 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:36:27 : epoch 6924270f : compute-1 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Nov 24 09:36:27 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:36:27 : epoch 6924270f : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Nov 24 09:36:27 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:36:27 : epoch 6924270f : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Nov 24 09:36:27 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:36:27 : epoch 6924270f : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Nov 24 09:36:27 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:36:27 : epoch 6924270f : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Nov 24 09:36:27 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:36:27 : epoch 6924270f : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Nov 24 09:36:27 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:36:27 : epoch 6924270f : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Nov 24 09:36:27 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:36:27 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:36:27 : epoch 6924270f : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4414000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:36:28 compute-1 python3.9[127812]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 09:36:28 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:36:28 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:36:28 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:36:28.429 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:36:28 compute-1 sudo[127968]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mkawyxdnzjpmeoehrhvmaujbdmcdncng ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976988.4010253-1026-68810382139798/AnsiballZ_file.py'
Nov 24 09:36:28 compute-1 sudo[127968]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:36:28 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:36:28 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:36:28 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:36:28.802 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:36:28 compute-1 ceph-mon[80009]: pgmap v277: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 1023 B/s wr, 3 op/s
Nov 24 09:36:28 compute-1 python3.9[127970]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 24 09:36:28 compute-1 sudo[127968]: pam_unix(sudo:session): session closed for user root
Nov 24 09:36:29 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:36:29 : epoch 6924270f : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4414000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:36:29 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:36:29 : epoch 6924270f : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f43ec000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:36:29 compute-1 sudo[128120]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kbbsszfkdcccdqjpkpwjjxhuvoqmympj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976989.2328103-1050-169763210817626/AnsiballZ_stat.py'
Nov 24 09:36:29 compute-1 sudo[128120]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:36:29 compute-1 python3.9[128122]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 09:36:29 compute-1 sudo[128120]: pam_unix(sudo:session): session closed for user root
Nov 24 09:36:29 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:36:29 : epoch 6924270f : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f43f8000fa0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:36:29 compute-1 sudo[128199]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mpsvhqzerbpgutxkzaavigjbdxuopthy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976989.2328103-1050-169763210817626/AnsiballZ_file.py'
Nov 24 09:36:29 compute-1 sudo[128199]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:36:30 compute-1 python3.9[128201]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 09:36:30 compute-1 sudo[128199]: pam_unix(sudo:session): session closed for user root
Nov 24 09:36:30 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 09:36:30 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:36:30 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:36:30 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 09:36:30 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:36:30.432 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 09:36:30 compute-1 sudo[128351]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zogenfdvmnhxaabjmbdygctcqiyizadz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976990.336502-1050-150777189677561/AnsiballZ_stat.py'
Nov 24 09:36:30 compute-1 sudo[128351]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:36:30 compute-1 python3.9[128353]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 09:36:30 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:36:30 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.002000047s ======
Nov 24 09:36:30 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:36:30.806 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000047s
Nov 24 09:36:30 compute-1 sudo[128351]: pam_unix(sudo:session): session closed for user root
Nov 24 09:36:30 compute-1 ceph-mon[80009]: pgmap v278: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 938 B/s wr, 2 op/s
Nov 24 09:36:30 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:36:31 compute-1 sudo[128429]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ndqjpbbjncowrdlielfbwhvpkeaymkva ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976990.336502-1050-150777189677561/AnsiballZ_file.py'
Nov 24 09:36:31 compute-1 sudo[128429]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:36:31 compute-1 python3.9[128431]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 09:36:31 compute-1 sudo[128429]: pam_unix(sudo:session): session closed for user root
Nov 24 09:36:31 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:36:31 : epoch 6924270f : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f43f8000fa0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:36:31 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-haproxy-nfs-cephfs-compute-1-rsdpvy[85975]: [WARNING] 327/093631 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Nov 24 09:36:31 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:36:31 : epoch 6924270f : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4414002070 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:36:31 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:36:31 : epoch 6924270f : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f43ec0016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:36:32 compute-1 sudo[128582]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hipzvqtmfiwcllyofxcxwjdycakbxgnc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976991.9866505-1119-218756676313116/AnsiballZ_file.py'
Nov 24 09:36:32 compute-1 sudo[128582]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:36:32 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:36:32 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 09:36:32 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:36:32.436 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 09:36:32 compute-1 python3.9[128584]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:36:32 compute-1 sudo[128582]: pam_unix(sudo:session): session closed for user root
Nov 24 09:36:32 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:36:32 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:36:32 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:36:32.812 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:36:32 compute-1 ceph-mon[80009]: pgmap v279: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 938 B/s wr, 3 op/s
Nov 24 09:36:32 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:36:33 compute-1 sudo[128734]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ktdqnsgyslccjnmztctkctcamvmkzvbr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976992.8156905-1143-111708956217531/AnsiballZ_stat.py'
Nov 24 09:36:33 compute-1 sudo[128734]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:36:33 compute-1 python3.9[128736]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 09:36:33 compute-1 sudo[128734]: pam_unix(sudo:session): session closed for user root
Nov 24 09:36:33 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:36:33 : epoch 6924270f : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f43f8000fa0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:36:33 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:36:33 : epoch 6924270f : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f43f8000fa0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:36:33 compute-1 ceph-osd[77497]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 24 09:36:33 compute-1 ceph-osd[77497]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Cumulative writes: 6424 writes, 26K keys, 6424 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s
                                           Cumulative WAL: 6424 writes, 1124 syncs, 5.72 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 6424 writes, 26K keys, 6424 commit groups, 1.0 writes per commit group, ingest: 19.84 MB, 0.03 MB/s
                                           Interval WAL: 6424 writes, 1124 syncs, 5.72 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5634bb9db350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5634bb9db350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5634bb9db350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5634bb9db350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.6      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.6      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.6      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5634bb9db350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5634bb9db350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5634bb9db350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5634bb9da9b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 4e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5634bb9da9b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 4e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.6      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.6      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.6      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5634bb9da9b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 4e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5634bb9db350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5634bb9db350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Nov 24 09:36:33 compute-1 sudo[128812]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oymogyqacuvzduopbxzeaecwcyicjiwj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976992.8156905-1143-111708956217531/AnsiballZ_file.py'
Nov 24 09:36:33 compute-1 sudo[128812]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:36:33 compute-1 python3.9[128814]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:36:33 compute-1 sudo[128812]: pam_unix(sudo:session): session closed for user root
Nov 24 09:36:33 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:36:33 : epoch 6924270f : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4414002070 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:36:34 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:36:34 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:36:34 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:36:34.440 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:36:34 compute-1 sudo[128965]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iejomnahcvwffvjlosafcozdfglwpdig ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976994.1547508-1179-79425795870938/AnsiballZ_stat.py'
Nov 24 09:36:34 compute-1 sudo[128965]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:36:34 compute-1 python3.9[128967]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 09:36:34 compute-1 sudo[128965]: pam_unix(sudo:session): session closed for user root
Nov 24 09:36:34 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:36:34 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 09:36:34 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:36:34.815 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 09:36:34 compute-1 ceph-mon[80009]: pgmap v280: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Nov 24 09:36:34 compute-1 sudo[129043]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dmxdhvrznnaskaaebsezpddussxwokky ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976994.1547508-1179-79425795870938/AnsiballZ_file.py'
Nov 24 09:36:34 compute-1 sudo[129043]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:36:35 compute-1 python3.9[129045]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:36:35 compute-1 sudo[129043]: pam_unix(sudo:session): session closed for user root
Nov 24 09:36:35 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:36:35 : epoch 6924270f : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4414002070 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:36:35 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:36:35 : epoch 6924270f : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f43f8000fa0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:36:35 compute-1 sudo[129196]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-brvgltshkghuhkieozcafjnbermrdxis ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976995.5406625-1215-58753476367378/AnsiballZ_systemd.py'
Nov 24 09:36:35 compute-1 sudo[129196]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:36:35 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:36:35 : epoch 6924270f : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f43f0001380 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:36:36 compute-1 python3.9[129198]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 24 09:36:36 compute-1 systemd[1]: Reloading.
Nov 24 09:36:36 compute-1 systemd-sysv-generator[129229]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 09:36:36 compute-1 systemd-rc-local-generator[129226]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 09:36:36 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:36:36 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:36:36 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:36:36.442 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:36:36 compute-1 sudo[129196]: pam_unix(sudo:session): session closed for user root
Nov 24 09:36:36 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:36:36 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:36:36 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:36:36.819 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:36:36 compute-1 ceph-mon[80009]: pgmap v281: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Nov 24 09:36:37 compute-1 sudo[129385]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nlkjsqtzizmrrqslzwqiifskxcomhwvu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976996.9059994-1239-237545242199739/AnsiballZ_stat.py'
Nov 24 09:36:37 compute-1 sudo[129385]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:36:37 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:36:37 : epoch 6924270f : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4414002070 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:36:37 compute-1 python3.9[129387]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 09:36:37 compute-1 sudo[129385]: pam_unix(sudo:session): session closed for user root
Nov 24 09:36:37 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:36:37 : epoch 6924270f : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4414002070 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:36:37 compute-1 sudo[129463]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cpskougcenrgqbxseeupittrfnpxvsmz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976996.9059994-1239-237545242199739/AnsiballZ_file.py'
Nov 24 09:36:37 compute-1 sudo[129463]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:36:37 compute-1 python3.9[129465]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:36:37 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:36:37 compute-1 sudo[129463]: pam_unix(sudo:session): session closed for user root
Nov 24 09:36:37 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:36:37 : epoch 6924270f : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f43f8000fa0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:36:38 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:36:38 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:36:38 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:36:38.483 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:36:38 compute-1 sudo[129616]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cuxphkjzdyixyotkmggthndnndxampws ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976998.3184307-1275-273749515514584/AnsiballZ_stat.py'
Nov 24 09:36:38 compute-1 sudo[129616]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:36:38 compute-1 python3.9[129618]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 09:36:38 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:36:38 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:36:38 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:36:38.822 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:36:38 compute-1 sudo[129616]: pam_unix(sudo:session): session closed for user root
Nov 24 09:36:38 compute-1 ceph-mon[80009]: pgmap v282: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 426 B/s wr, 1 op/s
Nov 24 09:36:39 compute-1 sudo[129694]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vleimgfwiamezkvozxvgrcyhiquydwdo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976998.3184307-1275-273749515514584/AnsiballZ_file.py'
Nov 24 09:36:39 compute-1 sudo[129694]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:36:39 compute-1 python3.9[129696]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:36:39 compute-1 sudo[129694]: pam_unix(sudo:session): session closed for user root
Nov 24 09:36:39 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:36:39 : epoch 6924270f : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f43f0001ea0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:36:39 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:36:39 : epoch 6924270f : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f43ec001fc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:36:39 compute-1 sudo[129846]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eenlufnnnwxmcrjzuamqkbqtdhyartyx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976999.5563588-1311-58328740845072/AnsiballZ_systemd.py'
Nov 24 09:36:39 compute-1 sudo[129846]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:36:39 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:36:39 : epoch 6924270f : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4414002070 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:36:40 compute-1 python3.9[129848]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 24 09:36:40 compute-1 systemd[1]: Reloading.
Nov 24 09:36:40 compute-1 systemd-rc-local-generator[129875]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 09:36:40 compute-1 systemd-sysv-generator[129878]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 09:36:40 compute-1 systemd[1]: Starting Create netns directory...
Nov 24 09:36:40 compute-1 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Nov 24 09:36:40 compute-1 systemd[1]: netns-placeholder.service: Deactivated successfully.
Nov 24 09:36:40 compute-1 systemd[1]: Finished Create netns directory.
Nov 24 09:36:40 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:36:40 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 09:36:40 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:36:40.486 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 09:36:40 compute-1 sudo[129846]: pam_unix(sudo:session): session closed for user root
Nov 24 09:36:40 compute-1 sudo[129915]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 09:36:40 compute-1 sudo[129915]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:36:40 compute-1 sudo[129915]: pam_unix(sudo:session): session closed for user root
Nov 24 09:36:40 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:36:40 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:36:40 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:36:40.826 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:36:40 compute-1 ceph-mon[80009]: pgmap v283: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Nov 24 09:36:41 compute-1 sudo[130065]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fceifvqkwkdziaemxgmytoialwiorqzh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977001.0019958-1341-259047870071103/AnsiballZ_file.py'
Nov 24 09:36:41 compute-1 sudo[130065]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:36:41 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:36:41 : epoch 6924270f : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f43f8003340 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:36:41 compute-1 python3.9[130067]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 09:36:41 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:36:41 : epoch 6924270f : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f43f0001ea0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:36:41 compute-1 sudo[130065]: pam_unix(sudo:session): session closed for user root
Nov 24 09:36:41 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:36:41 : epoch 6924270f : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f43ec002160 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:36:42 compute-1 sudo[130218]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lvtuqisifhxzdtftzusmdivdoldqlwam ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977001.8409321-1365-213441516940144/AnsiballZ_stat.py'
Nov 24 09:36:42 compute-1 sudo[130218]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:36:42 compute-1 python3.9[130220]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_controller/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 09:36:42 compute-1 sudo[130218]: pam_unix(sudo:session): session closed for user root
Nov 24 09:36:42 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:36:42 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:36:42 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:36:42.489 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:36:42 compute-1 sudo[130341]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pavuivwvbfjppdfgkhdzjdgduohwsdzq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977001.8409321-1365-213441516940144/AnsiballZ_copy.py'
Nov 24 09:36:42 compute-1 sudo[130341]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:36:42 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:36:42 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 09:36:42 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:36:42.829 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 09:36:42 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:36:42 compute-1 python3.9[130343]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_controller/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1763977001.8409321-1365-213441516940144/.source _original_basename=healthcheck follow=False checksum=4098dd010265fabdf5c26b97d169fc4e575ff457 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 24 09:36:42 compute-1 ceph-mon[80009]: pgmap v284: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Nov 24 09:36:42 compute-1 sudo[130341]: pam_unix(sudo:session): session closed for user root
Nov 24 09:36:43 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:36:43 : epoch 6924270f : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4414009d80 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:36:43 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:36:43 : epoch 6924270f : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f43f8003340 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:36:43 compute-1 sudo[130494]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-loxrwamrulcwwwaudqbjieveoqtzienu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977003.6410146-1416-49732887402900/AnsiballZ_file.py'
Nov 24 09:36:43 compute-1 sudo[130494]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:36:43 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:36:43 : epoch 6924270f : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f43f0001ea0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:36:44 compute-1 python3.9[130496]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 24 09:36:44 compute-1 sudo[130494]: pam_unix(sudo:session): session closed for user root
Nov 24 09:36:44 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-haproxy-nfs-cephfs-compute-1-rsdpvy[85975]: [WARNING] 327/093644 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Nov 24 09:36:44 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:36:44 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:36:44 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:36:44.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:36:44 compute-1 sudo[130646]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vsepfdkisdxkdpqajvawuiqjkphcdtey ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977004.4070644-1440-234980861693327/AnsiballZ_stat.py'
Nov 24 09:36:44 compute-1 sudo[130646]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:36:44 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:36:44 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:36:44 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:36:44.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:36:44 compute-1 python3.9[130648]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_controller.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 09:36:44 compute-1 sudo[130646]: pam_unix(sudo:session): session closed for user root
Nov 24 09:36:44 compute-1 ceph-mon[80009]: pgmap v285: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:36:45 compute-1 sudo[130769]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ddsuhwleskhpsychvcckauwijhnagghv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977004.4070644-1440-234980861693327/AnsiballZ_copy.py'
Nov 24 09:36:45 compute-1 sudo[130769]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:36:45 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:36:45 : epoch 6924270f : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f43ec002f00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:36:45 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 09:36:45 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:36:45 compute-1 python3.9[130771]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_controller.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1763977004.4070644-1440-234980861693327/.source.json _original_basename=.bik360rx follow=False checksum=2328fc98619beeb08ee32b01f15bb43094c10b61 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:36:45 compute-1 sudo[130769]: pam_unix(sudo:session): session closed for user root
Nov 24 09:36:45 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:36:45 : epoch 6924270f : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4414009d80 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:36:45 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:36:45 : epoch 6924270f : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f43f8003c60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:36:46 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:36:46 compute-1 sudo[130922]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-paiglikuoiirgaqhcsygxnumygjbxnkc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977005.791668-1485-177544331174267/AnsiballZ_file.py'
Nov 24 09:36:46 compute-1 sudo[130922]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:36:46 compute-1 python3.9[130924]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_controller state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:36:46 compute-1 sudo[130922]: pam_unix(sudo:session): session closed for user root
Nov 24 09:36:46 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:36:46 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:36:46 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:36:46.495 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:36:46 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:36:46 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:36:46 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:36:46.837 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:36:47 compute-1 sudo[131074]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ltlezugqbvyaatfgfycpdzydoilheuke ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977006.7648716-1509-113299746263698/AnsiballZ_stat.py'
Nov 24 09:36:47 compute-1 sudo[131074]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:36:47 compute-1 ceph-mon[80009]: pgmap v286: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:36:47 compute-1 sudo[131074]: pam_unix(sudo:session): session closed for user root
Nov 24 09:36:47 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:36:47 : epoch 6924270f : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f43f0003330 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:36:47 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:36:47 : epoch 6924270f : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f43ec002f00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:36:47 compute-1 sudo[131197]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aizrphhagrwrnccuwsxlbxjvcbnsonen ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977006.7648716-1509-113299746263698/AnsiballZ_copy.py'
Nov 24 09:36:47 compute-1 sudo[131197]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:36:47 compute-1 sudo[131197]: pam_unix(sudo:session): session closed for user root
Nov 24 09:36:47 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:36:47 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:36:47 : epoch 6924270f : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4414009d80 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:36:48 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:36:48 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:36:48 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:36:48.497 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:36:48 compute-1 sudo[131350]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tyxcjszlywjzqckcruowxkfjgykqnkgb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977008.4336956-1560-279256879633123/AnsiballZ_container_config_data.py'
Nov 24 09:36:48 compute-1 sudo[131350]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:36:48 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:36:48 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:36:48 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:36:48.840 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:36:49 compute-1 python3.9[131352]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_controller config_pattern=*.json debug=False
Nov 24 09:36:49 compute-1 sudo[131350]: pam_unix(sudo:session): session closed for user root
Nov 24 09:36:49 compute-1 ceph-mon[80009]: pgmap v287: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Nov 24 09:36:49 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:36:49 : epoch 6924270f : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f43f8003c60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:36:49 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:36:49 : epoch 6924270f : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f43f0003330 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:36:49 compute-1 sudo[131503]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bvbmvwqfmzduhvchfoeoqfilfygerhkg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977009.4359324-1587-120355114034395/AnsiballZ_container_config_hash.py'
Nov 24 09:36:49 compute-1 sudo[131503]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:36:49 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:36:49 : epoch 6924270f : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f43ec003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:36:50 compute-1 python3.9[131505]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Nov 24 09:36:50 compute-1 sudo[131503]: pam_unix(sudo:session): session closed for user root
Nov 24 09:36:50 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:36:50 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 09:36:50 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:36:50.498 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 09:36:50 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:36:50 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:36:50 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:36:50.843 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:36:51 compute-1 sudo[131655]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-soocyffacldvxapxeqywljgojpznfvah ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977010.5239687-1614-15281985111508/AnsiballZ_podman_container_info.py'
Nov 24 09:36:51 compute-1 sudo[131655]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:36:51 compute-1 ceph-mon[80009]: pgmap v288: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Nov 24 09:36:51 compute-1 python3.9[131657]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Nov 24 09:36:51 compute-1 sudo[131655]: pam_unix(sudo:session): session closed for user root
Nov 24 09:36:51 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:36:51 : epoch 6924270f : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f43ec003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:36:51 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:36:51 : epoch 6924270f : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4414009d80 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:36:51 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:36:51 : epoch 6924270f : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f43f0003330 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:36:52 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:36:52 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:36:52 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:36:52.501 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:36:52 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:36:52 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:36:52 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:36:52.846 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:36:52 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:36:52 compute-1 sudo[131836]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ajzodxzkepaqdyvzbmocjtmxkpuzeldw ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1763977012.4679716-1653-63556751181245/AnsiballZ_edpm_container_manage.py'
Nov 24 09:36:52 compute-1 sudo[131836]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:36:53 compute-1 ceph-mon[80009]: pgmap v289: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 85 B/s wr, 0 op/s
Nov 24 09:36:53 compute-1 python3[131838]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_controller config_id=ovn_controller config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Nov 24 09:36:53 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:36:53 : epoch 6924270f : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f43f0003330 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:36:53 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:36:53 : epoch 6924270f : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 09:36:53 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:36:53 : epoch 6924270f : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f43f8003c60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:36:53 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:36:53 : epoch 6924270f : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4414009d80 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:36:54 compute-1 ceph-mon[80009]: pgmap v290: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 596 B/s rd, 85 B/s wr, 0 op/s
Nov 24 09:36:54 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:36:54 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:36:54 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:36:54.503 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:36:54 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:36:54 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:36:54 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:36:54.848 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:36:55 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:36:55 : epoch 6924270f : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f43f0003330 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:36:55 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:36:55 : epoch 6924270f : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f43f0003330 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:36:55 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:36:55 : epoch 6924270f : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f43f8003c60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:36:56 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:36:56 : epoch 6924270f : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 09:36:56 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:36:56 : epoch 6924270f : compute-1 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 09:36:56 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:36:56 : epoch 6924270f : compute-1 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 09:36:56 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:36:56 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:36:56 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:36:56.505 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:36:56 compute-1 ceph-mon[80009]: pgmap v291: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 596 B/s rd, 85 B/s wr, 0 op/s
Nov 24 09:36:56 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:36:56 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:36:56 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:36:56.851 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:36:57 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:36:57 : epoch 6924270f : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4414009d80 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:36:57 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:36:57 : epoch 6924270f : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f43f0003330 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:36:57 compute-1 sudo[131919]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 09:36:57 compute-1 sudo[131919]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:36:57 compute-1 sudo[131919]: pam_unix(sudo:session): session closed for user root
Nov 24 09:36:57 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:36:57 compute-1 sudo[131944]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 check-host
Nov 24 09:36:57 compute-1 sudo[131944]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:36:57 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:36:57 : epoch 6924270f : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f43ec003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:36:58 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:36:58 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:36:58 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:36:58.507 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:36:58 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Nov 24 09:36:58 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Nov 24 09:36:58 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:36:58 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:36:58 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:36:58.855 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:36:58 compute-1 ceph-mon[80009]: pgmap v292: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 597 B/s wr, 2 op/s
Nov 24 09:36:58 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:36:58 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:36:59 compute-1 podman[131852]: 2025-11-24 09:36:59.234155805 +0000 UTC m=+5.911098548 image pull 197857ba4b35dfe0da58eb2e9c37f91c8a1d2b66c0967b4c66656aa6329b870c quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Nov 24 09:36:59 compute-1 sudo[131944]: pam_unix(sudo:session): session closed for user root
Nov 24 09:36:59 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Nov 24 09:36:59 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Nov 24 09:36:59 compute-1 sudo[132041]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 09:36:59 compute-1 sudo[132041]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:36:59 compute-1 sudo[132041]: pam_unix(sudo:session): session closed for user root
Nov 24 09:36:59 compute-1 podman[132042]: 2025-11-24 09:36:59.376266141 +0000 UTC m=+0.050338088 container create c4a7fece2a8abbd773f184380ebf0443df5298e1c3e7a580495db5c46749acd2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Nov 24 09:36:59 compute-1 podman[132042]: 2025-11-24 09:36:59.350323815 +0000 UTC m=+0.024395762 image pull 197857ba4b35dfe0da58eb2e9c37f91c8a1d2b66c0967b4c66656aa6329b870c quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Nov 24 09:36:59 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:36:59 : epoch 6924270f : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f43ec003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:36:59 compute-1 python3[131838]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_controller --conmon-pidfile /run/ovn_controller.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --healthcheck-command /openstack/healthcheck --label config_id=ovn_controller --label container_name=ovn_controller --label managed_by=edpm_ansible --label config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --user root --volume /lib/modules:/lib/modules:ro --volume /run:/run --volume /var/lib/openvswitch/ovn:/run/ovn:shared,z --volume /var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume /var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume /var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Nov 24 09:36:59 compute-1 sudo[132093]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Nov 24 09:36:59 compute-1 sudo[132093]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:36:59 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:36:59 : epoch 6924270f : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4414009d80 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:36:59 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:36:59 : epoch 6924270f : compute-1 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Nov 24 09:36:59 compute-1 sudo[131836]: pam_unix(sudo:session): session closed for user root
Nov 24 09:36:59 compute-1 sudo[132307]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zpnmngizqxepvgezftdetnvteqjsdxjl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977019.675318-1677-232798886157134/AnsiballZ_stat.py'
Nov 24 09:36:59 compute-1 sudo[132307]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:36:59 compute-1 sudo[132093]: pam_unix(sudo:session): session closed for user root
Nov 24 09:36:59 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:36:59 : epoch 6924270f : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4414009d80 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:36:59 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 24 09:36:59 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 09:37:00 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Nov 24 09:37:00 compute-1 ceph-mon[80009]: log_channel(audit) log [INF] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 09:37:00 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Nov 24 09:37:00 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Nov 24 09:37:00 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Nov 24 09:37:00 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 09:37:00 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Nov 24 09:37:00 compute-1 ceph-mon[80009]: log_channel(audit) log [INF] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 09:37:00 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 24 09:37:00 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 09:37:00 compute-1 python3.9[132313]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 09:37:00 compute-1 sudo[132307]: pam_unix(sudo:session): session closed for user root
Nov 24 09:37:00 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:37:00 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:37:00 compute-1 ceph-mon[80009]: pgmap v293: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 597 B/s wr, 1 op/s
Nov 24 09:37:00 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 09:37:00 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 09:37:00 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:37:00 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:37:00 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 09:37:00 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 09:37:00 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 09:37:00 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 09:37:00 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:37:00 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:37:00 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:37:00 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:37:00.510 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:37:00 compute-1 sudo[132440]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 09:37:00 compute-1 sudo[132488]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qntxtgbsreaekabybhzwieuctetpprdo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977020.623293-1704-125830249335621/AnsiballZ_file.py'
Nov 24 09:37:00 compute-1 sudo[132440]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:37:00 compute-1 sudo[132488]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:37:00 compute-1 sudo[132440]: pam_unix(sudo:session): session closed for user root
Nov 24 09:37:00 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:37:00 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:37:00 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:37:00.885 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:37:01 compute-1 python3.9[132492]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_controller.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:37:01 compute-1 sudo[132488]: pam_unix(sudo:session): session closed for user root
Nov 24 09:37:01 compute-1 sudo[132566]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-itgtyvblmafyrcnnuvyyxlytaervpjre ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977020.623293-1704-125830249335621/AnsiballZ_stat.py'
Nov 24 09:37:01 compute-1 sudo[132566]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:37:01 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:37:01 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:37:01 : epoch 6924270f : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f43ec003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:37:01 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:37:01 : epoch 6924270f : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4404001050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:37:01 compute-1 python3.9[132568]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_controller_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 09:37:01 compute-1 sudo[132566]: pam_unix(sudo:session): session closed for user root
Nov 24 09:37:01 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:37:01 : epoch 6924270f : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4414009d80 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:37:02 compute-1 sudo[132718]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-edbfxkqchpyumdvoepcnjwmgqhaqrwtx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977021.5781636-1704-104276392668209/AnsiballZ_copy.py'
Nov 24 09:37:02 compute-1 sudo[132718]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:37:02 compute-1 python3.9[132720]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1763977021.5781636-1704-104276392668209/source dest=/etc/systemd/system/edpm_ovn_controller.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:37:02 compute-1 sudo[132718]: pam_unix(sudo:session): session closed for user root
Nov 24 09:37:02 compute-1 ceph-mon[80009]: pgmap v294: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.9 KiB/s rd, 1023 B/s wr, 4 op/s
Nov 24 09:37:02 compute-1 sudo[132794]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gkfnbmuvbmuicloeiewztjizbfgzodgw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977021.5781636-1704-104276392668209/AnsiballZ_systemd.py'
Nov 24 09:37:02 compute-1 sudo[132794]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:37:02 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:37:02 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:37:02 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:37:02.514 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:37:02 compute-1 ceph-mon[80009]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 24 09:37:02 compute-1 ceph-mon[80009]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Cumulative writes: 2467 writes, 14K keys, 2467 commit groups, 1.0 writes per commit group, ingest: 0.04 GB, 0.06 MB/s
                                           Cumulative WAL: 2467 writes, 2467 syncs, 1.00 writes per sync, written: 0.04 GB, 0.06 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 2467 writes, 14K keys, 2467 commit groups, 1.0 writes per commit group, ingest: 38.73 MB, 0.06 MB/s
                                           Interval WAL: 2467 writes, 2467 syncs, 1.00 writes per sync, written: 0.04 GB, 0.06 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0    149.3      0.14              0.04         6    0.023       0      0       0.0       0.0
                                             L6      1/0   12.34 MB   0.0      0.1     0.0      0.0       0.1      0.0       0.0   3.0    131.9    115.8      0.54              0.13         5    0.107     21K   2256       0.0       0.0
                                            Sum      1/0   12.34 MB   0.0      0.1     0.0      0.0       0.1      0.0       0.0   4.0    104.5    122.7      0.68              0.18        11    0.062     21K   2256       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.1     0.0      0.0       0.1      0.0       0.0   4.0    104.8    123.0      0.68              0.18        10    0.068     21K   2256       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.1     0.0      0.0       0.1      0.0       0.0   0.0    131.9    115.8      0.54              0.13         5    0.107     21K   2256       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0    151.2      0.14              0.04         5    0.028       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.9      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.021, interval 0.021
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.08 GB write, 0.14 MB/s write, 0.07 GB read, 0.12 MB/s read, 0.7 seconds
                                           Interval compaction: 0.08 GB write, 0.14 MB/s write, 0.07 GB read, 0.12 MB/s read, 0.7 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a5fe7f5350#2 capacity: 304.00 MB usage: 2.21 MB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 0 last_secs: 6.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(159,2.00 MB,0.659315%) FilterBlock(11,69.80 KB,0.0224214%) IndexBlock(11,139.67 KB,0.0448679%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
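[editor note] The indented block above is the mon's embedded RocksDB dumping its periodic stats; the 600 s cumulative/interval uptime matches RocksDB's default stats_dump_period_sec. A small sketch for pulling the headline write counters out of a saved copy of such a dump (the capture file name is hypothetical; the regex targets the "Cumulative writes:" line format shown above):

    import re

    pattern = re.compile(
        r"Cumulative writes: (\d+) writes, (\S+) keys, .*ingest: ([\d.]+) GB"
    )
    with open("mon-rocksdb-stats.txt") as fh:  # hypothetical capture file
        for line in fh:
            m = pattern.search(line)
            if m:
                writes, keys, ingest_gb = m.groups()
                print(f"{writes} writes, {keys} keys, {ingest_gb} GB ingested")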
Nov 24 09:37:02 compute-1 python3.9[132796]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 24 09:37:02 compute-1 systemd[1]: Reloading.
Nov 24 09:37:02 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:37:02 compute-1 systemd-sysv-generator[132825]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 09:37:02 compute-1 systemd-rc-local-generator[132822]: /etc/rc.d/rc.local is not marked executable, skipping.
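[editor note] daemon_reload=True in the ansible-systemd task boils down to a single systemctl call, and the two generator messages that follow every reload here are informational, not errors. A sketch, assuming root:

    import subprocess

    # Equivalent of the ansible-systemd daemon_reload=True task above.
    subprocess.run(["systemctl", "daemon-reload"], check=True)
    # The rc-local generator message goes away only if /etc/rc.d/rc.local is
    # made executable, and only matters if rc.local is actually used.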
Nov 24 09:37:02 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:37:02 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:37:02 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:37:02.889 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:37:03 compute-1 sudo[132794]: pam_unix(sudo:session): session closed for user root
Nov 24 09:37:03 compute-1 sudo[132905]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nugypzfzeholnipgxyulxuahbvagxrer ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977021.5781636-1704-104276392668209/AnsiballZ_systemd.py'
Nov 24 09:37:03 compute-1 sudo[132905]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:37:03 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:37:03 : epoch 6924270f : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4414009d80 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:37:03 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:37:03 : epoch 6924270f : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f43ec003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:37:03 compute-1 python3.9[132907]: ansible-systemd Invoked with state=restarted name=edpm_ovn_controller.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 24 09:37:03 compute-1 systemd[1]: Reloading.
Nov 24 09:37:03 compute-1 systemd-rc-local-generator[132938]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 09:37:03 compute-1 systemd-sysv-generator[132941]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 09:37:03 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:37:03 : epoch 6924270f : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4404001b50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:37:04 compute-1 systemd[1]: Starting ovn_controller container...
Nov 24 09:37:04 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:37:04 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:37:04 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:37:04.517 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:37:04 compute-1 systemd[1]: Started libcrun container.
Nov 24 09:37:04 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/45fde3f3c15c720ba18b3ff150c91bff40d0b825e6318befc8f6c0df3d9f4c75/merged/run/ovn supports timestamps until 2038 (0x7fffffff)
Nov 24 09:37:04 compute-1 systemd[1]: Started /usr/bin/podman healthcheck run c4a7fece2a8abbd773f184380ebf0443df5298e1c3e7a580495db5c46749acd2.
Nov 24 09:37:04 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:37:04 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:37:04 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:37:04.892 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:37:04 compute-1 podman[132950]: 2025-11-24 09:37:04.903522754 +0000 UTC m=+0.832150450 container init c4a7fece2a8abbd773f184380ebf0443df5298e1c3e7a580495db5c46749acd2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 24 09:37:04 compute-1 ovn_controller[132966]: + sudo -E kolla_set_configs
Nov 24 09:37:04 compute-1 podman[132950]: 2025-11-24 09:37:04.927853252 +0000 UTC m=+0.856480948 container start c4a7fece2a8abbd773f184380ebf0443df5298e1c3e7a580495db5c46749acd2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251118)
Nov 24 09:37:04 compute-1 edpm-start-podman-container[132950]: ovn_controller
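[editor note] The config_data blob in the container-init record above fully describes the ovn_controller container. An approximate hand reconstruction of the equivalent podman invocation follows; the real command is generated by edpm_ansible, so treat this as an illustrative sketch with flags inferred from config_data, not the exact command:

    import subprocess

    volumes = [
        "/lib/modules:/lib/modules:ro",
        "/run:/run",
        "/var/lib/openvswitch/ovn:/run/ovn:shared,z",
        "/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro",
        "/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z",
        # TLS CA/cert/key mounts elided here; see config_data above.
    ]
    cmd = ["podman", "run", "--name", "ovn_controller", "--net", "host",
           "--privileged", "--user", "root", "--restart", "always",
           "--env", "KOLLA_CONFIG_STRATEGY=COPY_ALWAYS",
           "--health-cmd", "/openstack/healthcheck"]
    for v in volumes:
        cmd += ["--volume", v]
    cmd.append("quay.io/podified-antelope-centos9/"
               "openstack-ovn-controller:current-podified")
    subprocess.run(cmd, check=True)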
Nov 24 09:37:04 compute-1 systemd[1]: Created slice User Slice of UID 0.
Nov 24 09:37:04 compute-1 systemd[1]: Starting User Runtime Directory /run/user/0...
Nov 24 09:37:04 compute-1 ceph-mon[80009]: pgmap v295: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 938 B/s wr, 3 op/s
Nov 24 09:37:04 compute-1 systemd[1]: Finished User Runtime Directory /run/user/0.
Nov 24 09:37:04 compute-1 systemd[1]: Starting User Manager for UID 0...
Nov 24 09:37:05 compute-1 systemd[133000]: pam_unix(systemd-user:session): session opened for user root(uid=0) by root(uid=0)
Nov 24 09:37:05 compute-1 edpm-start-podman-container[132949]: Creating additional drop-in dependency for "ovn_controller" (c4a7fece2a8abbd773f184380ebf0443df5298e1c3e7a580495db5c46749acd2)
Nov 24 09:37:05 compute-1 podman[132973]: 2025-11-24 09:37:05.021437495 +0000 UTC m=+0.080889657 container health_status c4a7fece2a8abbd773f184380ebf0443df5298e1c3e7a580495db5c46749acd2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=starting, health_failing_streak=1, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 24 09:37:05 compute-1 systemd[1]: c4a7fece2a8abbd773f184380ebf0443df5298e1c3e7a580495db5c46749acd2-479cdedca759de6a.service: Main process exited, code=exited, status=1/FAILURE
Nov 24 09:37:05 compute-1 systemd[1]: c4a7fece2a8abbd773f184380ebf0443df5298e1c3e7a580495db5c46749acd2-479cdedca759de6a.service: Failed with result 'exit-code'.
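[editor note] The transient c4a7fece…-479cdedca759de6a.service failure above is the first scheduled health check firing while the container is still starting (health_status=starting in the podman record): podman healthcheck run exits 0 when healthy and non-zero otherwise, and systemd reports that as 1/FAILURE. A sketch for running the same check by hand:

    import subprocess

    # Exit status mirrors container health; non-zero while still starting.
    r = subprocess.run(["podman", "healthcheck", "run", "ovn_controller"])
    print("healthy" if r.returncode == 0 else f"not healthy (rc={r.returncode})")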
Nov 24 09:37:05 compute-1 systemd[1]: Reloading.
Nov 24 09:37:05 compute-1 systemd-rc-local-generator[133054]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 09:37:05 compute-1 systemd-sysv-generator[133059]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 09:37:05 compute-1 systemd[133000]: Queued start job for default target Main User Target.
Nov 24 09:37:05 compute-1 systemd[133000]: Created slice User Application Slice.
Nov 24 09:37:05 compute-1 systemd[133000]: Mark boot as successful after the user session has run 2 minutes was skipped because of an unmet condition check (ConditionUser=!@system).
Nov 24 09:37:05 compute-1 systemd[133000]: Started Daily Cleanup of User's Temporary Directories.
Nov 24 09:37:05 compute-1 systemd[133000]: Reached target Paths.
Nov 24 09:37:05 compute-1 systemd[133000]: Reached target Timers.
Nov 24 09:37:05 compute-1 systemd[133000]: Starting D-Bus User Message Bus Socket...
Nov 24 09:37:05 compute-1 systemd[133000]: Starting Create User's Volatile Files and Directories...
Nov 24 09:37:05 compute-1 systemd[133000]: Finished Create User's Volatile Files and Directories.
Nov 24 09:37:05 compute-1 systemd[133000]: Listening on D-Bus User Message Bus Socket.
Nov 24 09:37:05 compute-1 systemd[133000]: Reached target Sockets.
Nov 24 09:37:05 compute-1 systemd[133000]: Reached target Basic System.
Nov 24 09:37:05 compute-1 systemd[133000]: Reached target Main User Target.
Nov 24 09:37:05 compute-1 systemd[133000]: Startup finished in 158ms.
Nov 24 09:37:05 compute-1 systemd[1]: Started User Manager for UID 0.
Nov 24 09:37:05 compute-1 systemd[1]: Started ovn_controller container.
Nov 24 09:37:05 compute-1 systemd[1]: Started Session c1 of User root.
Nov 24 09:37:05 compute-1 sudo[132905]: pam_unix(sudo:session): session closed for user root
Nov 24 09:37:05 compute-1 ovn_controller[132966]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Nov 24 09:37:05 compute-1 ovn_controller[132966]: INFO:__main__:Validating config file
Nov 24 09:37:05 compute-1 ovn_controller[132966]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Nov 24 09:37:05 compute-1 ovn_controller[132966]: INFO:__main__:Writing out command to execute
Nov 24 09:37:05 compute-1 systemd[1]: session-c1.scope: Deactivated successfully.
Nov 24 09:37:05 compute-1 ovn_controller[132966]: ++ cat /run_command
Nov 24 09:37:05 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:37:05 : epoch 6924270f : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4414009d80 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:37:05 compute-1 ovn_controller[132966]: + CMD='/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Nov 24 09:37:05 compute-1 ovn_controller[132966]: + ARGS=
Nov 24 09:37:05 compute-1 ovn_controller[132966]: + sudo kolla_copy_cacerts
Nov 24 09:37:05 compute-1 systemd[1]: Started Session c2 of User root.
Nov 24 09:37:05 compute-1 systemd[1]: session-c2.scope: Deactivated successfully.
Nov 24 09:37:05 compute-1 ovn_controller[132966]: Running command: '/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Nov 24 09:37:05 compute-1 ovn_controller[132966]: + [[ ! -n '' ]]
Nov 24 09:37:05 compute-1 ovn_controller[132966]: + . kolla_extend_start
Nov 24 09:37:05 compute-1 ovn_controller[132966]: + echo 'Running command: '\''/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '\'''
Nov 24 09:37:05 compute-1 ovn_controller[132966]: + umask 0022
Nov 24 09:37:05 compute-1 ovn_controller[132966]: + exec /usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt
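[editor note] The exec line above points ovn-controller at the local OVS db socket and passes the standard OVS short TLS options (-p private key, -c client certificate, -C CA certificate), which it then uses for the ssl: connection to ovsdbserver-sb:6642 seen a few lines below. A small pre-flight sketch checking that the mounted cert material exists (paths copied from the command line above):

    from pathlib import Path

    tls = {
        "-p": "/etc/pki/tls/private/ovndb.key",
        "-c": "/etc/pki/tls/certs/ovndb.crt",
        "-C": "/etc/pki/tls/certs/ovndbca.crt",
    }
    for flag, path in tls.items():
        print(f"{flag} {path}: {'ok' if Path(path).is_file() else 'MISSING'}")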
Nov 24 09:37:05 compute-1 ovn_controller[132966]: 2025-11-24T09:37:05Z|00001|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Nov 24 09:37:05 compute-1 ovn_controller[132966]: 2025-11-24T09:37:05Z|00002|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Nov 24 09:37:05 compute-1 ovn_controller[132966]: 2025-11-24T09:37:05Z|00003|main|INFO|OVN internal version is : [24.03.7-20.33.0-76.8]
Nov 24 09:37:05 compute-1 ovn_controller[132966]: 2025-11-24T09:37:05Z|00004|main|INFO|OVS IDL reconnected, force recompute.
Nov 24 09:37:05 compute-1 ovn_controller[132966]: 2025-11-24T09:37:05Z|00005|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Nov 24 09:37:05 compute-1 ovn_controller[132966]: 2025-11-24T09:37:05Z|00006|main|INFO|OVNSB IDL reconnected, force recompute.
Nov 24 09:37:05 compute-1 NetworkManager[48870]: <info>  [1763977025.4495] manager: (br-int): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/16)
Nov 24 09:37:05 compute-1 NetworkManager[48870]: <info>  [1763977025.4505] device (br-int)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 24 09:37:05 compute-1 NetworkManager[48870]: <info>  [1763977025.4520] manager: (br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/17)
Nov 24 09:37:05 compute-1 NetworkManager[48870]: <info>  [1763977025.4525] manager: (br-int): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/18)
Nov 24 09:37:05 compute-1 NetworkManager[48870]: <info>  [1763977025.4530] device (br-int)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Nov 24 09:37:05 compute-1 kernel: br-int: entered promiscuous mode
Nov 24 09:37:05 compute-1 ovn_controller[132966]: 2025-11-24T09:37:05Z|00007|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connected
Nov 24 09:37:05 compute-1 ovn_controller[132966]: 2025-11-24T09:37:05Z|00008|features|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Nov 24 09:37:05 compute-1 ovn_controller[132966]: 2025-11-24T09:37:05Z|00009|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Nov 24 09:37:05 compute-1 ovn_controller[132966]: 2025-11-24T09:37:05Z|00010|features|INFO|OVS Feature: ct_zero_snat, state: supported
Nov 24 09:37:05 compute-1 ovn_controller[132966]: 2025-11-24T09:37:05Z|00011|features|INFO|OVS Feature: ct_flush, state: supported
Nov 24 09:37:05 compute-1 ovn_controller[132966]: 2025-11-24T09:37:05Z|00012|features|INFO|OVS Feature: dp_hash_l4_sym_support, state: supported
Nov 24 09:37:05 compute-1 ovn_controller[132966]: 2025-11-24T09:37:05Z|00013|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Nov 24 09:37:05 compute-1 ovn_controller[132966]: 2025-11-24T09:37:05Z|00014|main|INFO|OVS feature set changed, force recompute.
Nov 24 09:37:05 compute-1 ovn_controller[132966]: 2025-11-24T09:37:05Z|00015|ofctrl|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Nov 24 09:37:05 compute-1 ovn_controller[132966]: 2025-11-24T09:37:05Z|00016|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Nov 24 09:37:05 compute-1 ovn_controller[132966]: 2025-11-24T09:37:05Z|00017|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Nov 24 09:37:05 compute-1 ovn_controller[132966]: 2025-11-24T09:37:05Z|00018|features|INFO|OVS DB schema supports 4 flow table prefixes, our IDL supports: 4
Nov 24 09:37:05 compute-1 ovn_controller[132966]: 2025-11-24T09:37:05Z|00019|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Nov 24 09:37:05 compute-1 ovn_controller[132966]: 2025-11-24T09:37:05Z|00020|ofctrl|INFO|ofctrl-wait-before-clear is now 8000 ms (was 0 ms)
Nov 24 09:37:05 compute-1 ovn_controller[132966]: 2025-11-24T09:37:05Z|00021|main|INFO|OVS OpenFlow connection reconnected,force recompute.
Nov 24 09:37:05 compute-1 ovn_controller[132966]: 2025-11-24T09:37:05Z|00022|main|INFO|Setting flow table prefixes: ip_src, ip_dst, ipv6_src, ipv6_dst.
Nov 24 09:37:05 compute-1 ovn_controller[132966]: 2025-11-24T09:37:05Z|00023|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Nov 24 09:37:05 compute-1 ovn_controller[132966]: 2025-11-24T09:37:05Z|00024|main|INFO|OVS feature set changed, force recompute.
Nov 24 09:37:05 compute-1 ovn_controller[132966]: 2025-11-24T09:37:05Z|00001|pinctrl(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Nov 24 09:37:05 compute-1 ovn_controller[132966]: 2025-11-24T09:37:05Z|00001|statctrl(ovn_statctrl2)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Nov 24 09:37:05 compute-1 ovn_controller[132966]: 2025-11-24T09:37:05Z|00002|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Nov 24 09:37:05 compute-1 ovn_controller[132966]: 2025-11-24T09:37:05Z|00002|rconn(ovn_statctrl2)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Nov 24 09:37:05 compute-1 NetworkManager[48870]: <info>  [1763977025.4726] manager: (ovn-f6640d-0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/19)
Nov 24 09:37:05 compute-1 systemd-udevd[133102]: Network interface NamePolicy= disabled on kernel command line.
Nov 24 09:37:05 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:37:05 : epoch 6924270f : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f43e4000ee0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:37:05 compute-1 kernel: genev_sys_6081: entered promiscuous mode
Nov 24 09:37:05 compute-1 NetworkManager[48870]: <info>  [1763977025.5066] device (genev_sys_6081): carrier: link connected
Nov 24 09:37:05 compute-1 NetworkManager[48870]: <info>  [1763977025.5070] manager: (genev_sys_6081): new Generic device (/org/freedesktop/NetworkManager/Devices/20)
Nov 24 09:37:05 compute-1 ovn_controller[132966]: 2025-11-24T09:37:05Z|00003|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Nov 24 09:37:05 compute-1 ovn_controller[132966]: 2025-11-24T09:37:05Z|00003|rconn(ovn_statctrl2)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Nov 24 09:37:05 compute-1 NetworkManager[48870]: <info>  [1763977025.6515] manager: (ovn-fae732-0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/21)
Nov 24 09:37:05 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:37:05 : epoch 6924270f : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f43ec003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:37:06 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-haproxy-nfs-cephfs-compute-1-rsdpvy[85975]: [WARNING] 327/093706 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 1ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Nov 24 09:37:06 compute-1 NetworkManager[48870]: <info>  [1763977026.4541] manager: (ovn-feb242-0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/22)
Nov 24 09:37:06 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:37:06 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:37:06 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:37:06.519 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:37:06 compute-1 sudo[133232]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wiuzkaacizqgzdabqleprhxrmpkpgdah ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977026.5164797-1788-173548182398342/AnsiballZ_command.py'
Nov 24 09:37:06 compute-1 sudo[133232]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:37:06 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 24 09:37:06 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 24 09:37:06 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:37:06 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:37:06 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:37:06.896 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:37:06 compute-1 ceph-mon[80009]: pgmap v296: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 938 B/s wr, 3 op/s
Nov 24 09:37:06 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:37:06 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:37:07 compute-1 python3.9[133234]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove open . other_config hw-offload
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 09:37:07 compute-1 ovs-vsctl[133258]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove open . other_config hw-offload
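[editor note] The ovs-vsctl call above uses the "remove TABLE RECORD COLUMN KEY" form to drop a single key from the map-typed other_config column; remove ignores values that are already absent, which is what keeps the Ansible task idempotent. Sketch:

    import subprocess

    # Same command as logged above; safe to re-run even if hw-offload is unset.
    subprocess.run(
        ["ovs-vsctl", "remove", "open", ".", "other_config", "hw-offload"],
        check=True,
    )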
Nov 24 09:37:07 compute-1 sudo[133235]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 24 09:37:07 compute-1 sudo[133235]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:37:07 compute-1 sudo[133235]: pam_unix(sudo:session): session closed for user root
Nov 24 09:37:07 compute-1 sudo[133232]: pam_unix(sudo:session): session closed for user root
Nov 24 09:37:07 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:37:07 : epoch 6924270f : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4414009d80 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:37:07 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:37:07 : epoch 6924270f : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4414009d80 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:37:07 compute-1 sudo[133410]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-psgbcevlfxqnmlhrucrlxzmubxinlppq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977027.2511222-1812-163685040322902/AnsiballZ_command.py'
Nov 24 09:37:07 compute-1 sudo[133410]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:37:07 compute-1 python3.9[133412]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl get Open_vSwitch . external_ids:ovn-cms-options | sed 's/\"//g'
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 09:37:07 compute-1 ovs-vsctl[133414]: ovs|00001|db_ctl_base|ERR|no key "ovn-cms-options" in Open_vSwitch record "." column external_ids
Nov 24 09:37:07 compute-1 sudo[133410]: pam_unix(sudo:session): session closed for user root
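[editor note] The db_ctl_base ERR line above is expected when ovn-cms-options was never set: plain "ovs-vsctl get" hard-fails on a missing map key. Adding --if-exists makes it return an empty value instead, avoiding the error log. Sketch:

    import subprocess

    r = subprocess.run(
        ["ovs-vsctl", "--if-exists", "get", "Open_vSwitch", ".",
         "external_ids:ovn-cms-options"],
        capture_output=True, text=True, check=True,
    )
    # Empty output means the key is unset; quotes are stripped like the
    # sed in the logged command.
    print(r.stdout.strip().strip('"') or "(unset)")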
Nov 24 09:37:07 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:37:07 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:37:07 : epoch 6924270f : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f43e4001a00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:37:08 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:37:08 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:37:08 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:37:08.522 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:37:08 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:37:08 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:37:08 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:37:08.898 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:37:09 compute-1 ceph-mon[80009]: pgmap v297: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 938 B/s wr, 3 op/s
Nov 24 09:37:09 compute-1 sudo[133566]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qugdedlljowtyjiuuzdnqziaivocifei ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977028.7713797-1854-99246265871389/AnsiballZ_command.py'
Nov 24 09:37:09 compute-1 sudo[133566]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:37:09 compute-1 python3.9[133568]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 09:37:09 compute-1 ovs-vsctl[133569]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options
Nov 24 09:37:09 compute-1 sudo[133566]: pam_unix(sudo:session): session closed for user root
Nov 24 09:37:09 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:37:09 : epoch 6924270f : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f43e4001a00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:37:09 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:37:09 : epoch 6924270f : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4404002470 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:37:09 compute-1 sshd-session[121798]: Connection closed by 192.168.122.30 port 36428
Nov 24 09:37:09 compute-1 sshd-session[121795]: pam_unix(sshd:session): session closed for user zuul
Nov 24 09:37:09 compute-1 systemd[1]: session-49.scope: Deactivated successfully.
Nov 24 09:37:09 compute-1 systemd[1]: session-49.scope: Consumed 57.703s CPU time.
Nov 24 09:37:09 compute-1 systemd-logind[823]: Session 49 logged out. Waiting for processes to exit.
Nov 24 09:37:09 compute-1 systemd-logind[823]: Removed session 49.
Nov 24 09:37:09 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:37:09 : epoch 6924270f : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4414009f20 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:37:10 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:37:10 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:37:10 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:37:10.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:37:10 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:37:10 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 09:37:10 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:37:10.902 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 09:37:11 compute-1 ceph-mon[80009]: pgmap v298: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 426 B/s wr, 2 op/s
Nov 24 09:37:11 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:37:11 : epoch 6924270f : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4414009f20 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:37:11 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:37:11 : epoch 6924270f : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4414009f20 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:37:11 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:37:11 : epoch 6924270f : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4404002470 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:37:12 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:37:12 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 09:37:12 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:37:12.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 09:37:12 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:37:12 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:37:12 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:37:12 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:37:12.907 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:37:13 compute-1 ceph-mon[80009]: pgmap v299: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 426 B/s wr, 2 op/s
Nov 24 09:37:13 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:37:13 : epoch 6924270f : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f43ec003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:37:13 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:37:13 : epoch 6924270f : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f43ec003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:37:13 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:37:13 : epoch 6924270f : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f43e4001a00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:37:14 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:37:14 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:37:14 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:37:14.530 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:37:14 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:37:14 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:37:14 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:37:14.911 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:37:15 compute-1 ceph-mon[80009]: pgmap v300: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Nov 24 09:37:15 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:37:15 : epoch 6924270f : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f43e4001a00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:37:15 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 09:37:15 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:37:15 compute-1 systemd[1]: Stopping User Manager for UID 0...
Nov 24 09:37:15 compute-1 systemd[133000]: Activating special unit Exit the Session...
Nov 24 09:37:15 compute-1 systemd[133000]: Stopped target Main User Target.
Nov 24 09:37:15 compute-1 systemd[133000]: Stopped target Basic System.
Nov 24 09:37:15 compute-1 systemd[133000]: Stopped target Paths.
Nov 24 09:37:15 compute-1 systemd[133000]: Stopped target Sockets.
Nov 24 09:37:15 compute-1 systemd[133000]: Stopped target Timers.
Nov 24 09:37:15 compute-1 systemd[133000]: Stopped Daily Cleanup of User's Temporary Directories.
Nov 24 09:37:15 compute-1 systemd[133000]: Closed D-Bus User Message Bus Socket.
Nov 24 09:37:15 compute-1 systemd[133000]: Stopped Create User's Volatile Files and Directories.
Nov 24 09:37:15 compute-1 systemd[133000]: Removed slice User Application Slice.
Nov 24 09:37:15 compute-1 systemd[133000]: Reached target Shutdown.
Nov 24 09:37:15 compute-1 systemd[133000]: Finished Exit the Session.
Nov 24 09:37:15 compute-1 systemd[133000]: Reached target Exit the Session.
Nov 24 09:37:15 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:37:15 : epoch 6924270f : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f43ec003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:37:15 compute-1 systemd[1]: user@0.service: Deactivated successfully.
Nov 24 09:37:15 compute-1 systemd[1]: Stopped User Manager for UID 0.
Nov 24 09:37:15 compute-1 systemd[1]: Stopping User Runtime Directory /run/user/0...
Nov 24 09:37:15 compute-1 systemd[1]: run-user-0.mount: Deactivated successfully.
Nov 24 09:37:15 compute-1 systemd[1]: user-runtime-dir@0.service: Deactivated successfully.
Nov 24 09:37:15 compute-1 systemd[1]: Stopped User Runtime Directory /run/user/0.
Nov 24 09:37:15 compute-1 systemd[1]: Removed slice User Slice of UID 0.
Nov 24 09:37:15 compute-1 sshd-session[133599]: Accepted publickey for zuul from 192.168.122.30 port 47390 ssh2: ECDSA SHA256:MeSde0OmmlmFVnLWx/OKNxgeUUFhxUB3MA0eUyH5QEE
Nov 24 09:37:15 compute-1 systemd-logind[823]: New session 51 of user zuul.
Nov 24 09:37:15 compute-1 systemd[1]: Started Session 51 of User zuul.
Nov 24 09:37:15 compute-1 sshd-session[133599]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 24 09:37:15 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:37:15 : epoch 6924270f : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4414009f20 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:37:16 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:37:16 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:37:16 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:37:16 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:37:16.533 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:37:16 compute-1 python3.9[133753]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 24 09:37:16 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:37:16 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:37:16 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:37:16.915 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:37:17 compute-1 ceph-mon[80009]: pgmap v301: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Nov 24 09:37:17 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:37:17 : epoch 6924270f : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4414009f20 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:37:17 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:37:17 : epoch 6924270f : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4414009f20 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:37:17 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:37:17 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:37:17 : epoch 6924270f : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4414009f20 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:37:18 compute-1 sudo[133908]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rnlgusrjrslvolhqtjrsyrbtepunelhd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977037.4568803-63-188304549660090/AnsiballZ_file.py'
Nov 24 09:37:18 compute-1 sudo[133908]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:37:18 compute-1 python3.9[133910]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 24 09:37:18 compute-1 sudo[133908]: pam_unix(sudo:session): session closed for user root
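[editor note] This and the following ansible-file tasks create config directories with setype=container_file_t so the podman-managed services can access them under SELinux. A minimal equivalent for the first path (ownership mirrors the task; run as root; chcon is the quick transient route, a semanage fcontext rule would make the label survive relabels):

    import subprocess
    from pathlib import Path

    d = Path("/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent")
    d.mkdir(parents=True, exist_ok=True)
    subprocess.run(["chown", "zuul:zuul", str(d)], check=True)
    # Apply the SELinux type the task requests (setype=container_file_t).
    subprocess.run(["chcon", "-t", "container_file_t", str(d)], check=True)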
Nov 24 09:37:18 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:37:18 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 09:37:18 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:37:18.534 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 09:37:18 compute-1 sudo[134060]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rbadjregjypfsbbmoqatyrlzlntehens ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977038.4211483-63-216705050234974/AnsiballZ_file.py'
Nov 24 09:37:18 compute-1 sudo[134060]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:37:18 compute-1 python3.9[134062]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 09:37:18 compute-1 sudo[134060]: pam_unix(sudo:session): session closed for user root
Nov 24 09:37:18 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:37:18 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:37:18 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:37:18.918 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:37:19 compute-1 ceph-mon[80009]: pgmap v302: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Nov 24 09:37:19 compute-1 sudo[134212]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ylgnerqdcroouwjrwptaghuvgmxivyqh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977039.0111473-63-252580214369522/AnsiballZ_file.py'
Nov 24 09:37:19 compute-1 sudo[134212]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:37:19 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:37:19 : epoch 6924270f : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4414009f20 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:37:19 compute-1 python3.9[134214]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/kill_scripts setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 09:37:19 compute-1 sudo[134212]: pam_unix(sudo:session): session closed for user root
Nov 24 09:37:19 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:37:19 : epoch 6924270f : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4414009f20 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:37:19 compute-1 sudo[134364]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qgzluqoywwaujgisfmghflgkhoipipyn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977039.6085784-63-229893140122488/AnsiballZ_file.py'
Nov 24 09:37:19 compute-1 sudo[134364]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:37:19 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:37:19 : epoch 6924270f : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f43e4001a00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:37:20 compute-1 python3.9[134367]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/ovn-metadata-proxy setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 09:37:20 compute-1 sudo[134364]: pam_unix(sudo:session): session closed for user root
Nov 24 09:37:20 compute-1 sudo[134517]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bnlvlrtoudjmhpiaxadynoftwijhzhev ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977040.212458-63-112801714627724/AnsiballZ_file.py'
Nov 24 09:37:20 compute-1 sudo[134517]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:37:20 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:37:20 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:37:20 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:37:20.538 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:37:20 compute-1 python3.9[134519]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/external/pids setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 09:37:20 compute-1 sudo[134517]: pam_unix(sudo:session): session closed for user root
Nov 24 09:37:20 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:37:20 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 09:37:20 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:37:20.921 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 09:37:20 compute-1 sudo[134544]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 09:37:20 compute-1 sudo[134544]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:37:20 compute-1 sudo[134544]: pam_unix(sudo:session): session closed for user root
Nov 24 09:37:21 compute-1 ceph-mon[80009]: pgmap v303: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:37:21 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:37:21 : epoch 6924270f : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f43ec003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:37:21 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:37:21 : epoch 6924270f : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f43ec003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:37:21 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:37:21 : epoch 6924270f : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f44040033d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:37:22 compute-1 python3.9[134694]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 24 09:37:22 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:37:22 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:37:22 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:37:22.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:37:22 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:37:22 compute-1 sudo[134845]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dextezrakhpwnsowhlygeorcijzbpcqf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977042.5347812-195-270801518259875/AnsiballZ_seboolean.py'
Nov 24 09:37:22 compute-1 sudo[134845]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:37:22 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:37:22 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:37:22 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:37:22.925 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:37:23 compute-1 python3.9[134847]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Nov 24 09:37:23 compute-1 ceph-mon[80009]: pgmap v304: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Nov 24 09:37:23 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:37:23 : epoch 6924270f : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f43e4003670 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:37:23 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:37:23 : epoch 6924270f : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f43ec003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:37:23 compute-1 sudo[134845]: pam_unix(sudo:session): session closed for user root
Nov 24 09:37:24 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:37:24 : epoch 6924270f : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f43ec003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:37:24 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:37:24 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 09:37:24 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:37:24.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 09:37:24 compute-1 python3.9[134999]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/ovn_metadata_haproxy_wrapper follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 09:37:24 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:37:24 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:37:24 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:37:24.928 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:37:25 compute-1 ceph-mon[80009]: pgmap v305: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:37:25 compute-1 python3.9[135120]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/ovn_metadata_haproxy_wrapper mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1763977044.1042645-219-2988908468674/.source follow=False _original_basename=haproxy.j2 checksum=95c62e64c8f82dd9393a560d1b052dc98d38f810 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 24 09:37:25 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:37:25 : epoch 6924270f : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f44040033d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:37:25 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:37:25 : epoch 6924270f : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f43e4003670 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:37:26 compute-1 python3.9[135271]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/kill_scripts/haproxy-kill follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 09:37:26 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:37:26 : epoch 6924270f : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f43ec003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:37:26 compute-1 ceph-mon[80009]: pgmap v306: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:37:26 compute-1 python3.9[135392]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/kill_scripts/haproxy-kill mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1763977045.588016-264-87182174839306/.source follow=False _original_basename=kill-script.j2 checksum=2dfb5489f491f61b95691c3bf95fa1fe48ff3700 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 24 09:37:26 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:37:26 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:37:26 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:37:26.546 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:37:26 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:37:26 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:37:26 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:37:26.932 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:37:27 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:37:27 : epoch 6924270f : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f43ec003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:37:27 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:37:27 : epoch 6924270f : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f44040033d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:37:27 compute-1 sudo[135542]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ukzscnjgkbdtbxfywdqrowzfvdkimrnp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977047.077518-315-100163033175130/AnsiballZ_setup.py'
Nov 24 09:37:27 compute-1 sudo[135542]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:37:27 compute-1 python3.9[135544]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 24 09:37:27 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:37:28 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:37:28 : epoch 6924270f : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f43e4003670 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:37:28 compute-1 sudo[135542]: pam_unix(sudo:session): session closed for user root
Nov 24 09:37:28 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:37:28 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:37:28 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:37:28.549 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:37:28 compute-1 ceph-mon[80009]: pgmap v307: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Nov 24 09:37:28 compute-1 sudo[135627]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lqkdcljihgmkiydbtybpsqkfmkqlydei ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977047.077518-315-100163033175130/AnsiballZ_dnf.py'
Nov 24 09:37:28 compute-1 sudo[135627]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:37:28 compute-1 python3.9[135629]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 24 09:37:28 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:37:28 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:37:28 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:37:28.935 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:37:29 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:37:29 : epoch 6924270f : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f43ec003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:37:29 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:37:29 : epoch 6924270f : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f43ec003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:37:30 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:37:30 : epoch 6924270f : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f44040044d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:37:30 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 09:37:30 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:37:30 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:37:30 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 09:37:30 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:37:30.551 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 09:37:30 compute-1 sudo[135627]: pam_unix(sudo:session): session closed for user root
Nov 24 09:37:30 compute-1 ceph-mon[80009]: pgmap v308: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:37:30 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:37:30 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:37:30 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 09:37:30 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:37:30.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 09:37:31 compute-1 sudo[135782]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sgbuqsjviguqeofootvgywjcohmmtips ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977050.7457829-351-85189468139540/AnsiballZ_systemd.py'
Nov 24 09:37:31 compute-1 sudo[135782]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:37:31 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:37:31 : epoch 6924270f : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f43f00018b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:37:31 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:37:31 : epoch 6924270f : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f43ec003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:37:31 compute-1 python3.9[135784]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 24 09:37:31 compute-1 sudo[135782]: pam_unix(sudo:session): session closed for user root
Nov 24 09:37:32 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:37:32 : epoch 6924270f : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f43ec003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:37:32 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:37:32 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:37:32 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:37:32.554 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:37:32 compute-1 python3.9[135938]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-rootwrap.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 09:37:32 compute-1 ceph-mon[80009]: pgmap v309: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Nov 24 09:37:32 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:37:32 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:37:32 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:37:32 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:37:32.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:37:33 compute-1 python3.9[136059]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-rootwrap.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1763977052.159136-375-144306091092847/.source.conf follow=False _original_basename=rootwrap.conf.j2 checksum=11f2cfb4b7d97b2cef3c2c2d88089e6999cffe22 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 24 09:37:33 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:37:33 : epoch 6924270f : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f44040044d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:37:33 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:37:33 : epoch 6924270f : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f43f00025d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:37:33 compute-1 python3.9[136209]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 09:37:34 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:37:34 : epoch 6924270f : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f43ec003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:37:34 compute-1 python3.9[136331]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1763977053.3069584-375-82454977258136/.source.conf follow=False _original_basename=neutron-ovn-metadata-agent.conf.j2 checksum=8bc979abbe81c2cf3993a225517a7e2483e20443 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 24 09:37:34 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:37:34 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:37:34 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:37:34.557 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:37:34 compute-1 ceph-mon[80009]: pgmap v310: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:37:34 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:37:34 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:37:34 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:37:34.944 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:37:35 compute-1 ovn_controller[132966]: 2025-11-24T09:37:35Z|00025|memory|INFO|17024 kB peak resident set size after 29.9 seconds
Nov 24 09:37:35 compute-1 ovn_controller[132966]: 2025-11-24T09:37:35Z|00026|memory|INFO|idl-cells-OVN_Southbound:273 idl-cells-Open_vSwitch:642 ofctrl_desired_flow_usage-KB:7 ofctrl_installed_flow_usage-KB:5 ofctrl_sb_flow_ref_usage-KB:3
Nov 24 09:37:35 compute-1 podman[136356]: 2025-11-24 09:37:35.355419969 +0000 UTC m=+0.087947820 container health_status c4a7fece2a8abbd773f184380ebf0443df5298e1c3e7a580495db5c46749acd2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Nov 24 09:37:35 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:37:35 : epoch 6924270f : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f43ec003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:37:35 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:37:35 : epoch 6924270f : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f44040044d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:37:35 compute-1 python3.9[136509]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/10-neutron-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 09:37:36 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:37:36 : epoch 6924270f : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f43f00025d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:37:36 compute-1 python3.9[136631]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/10-neutron-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1763977055.5285463-507-241546170037071/.source.conf _original_basename=10-neutron-metadata.conf follow=False checksum=ca7d4d155f5b812fab1a3b70e34adb495d291b8d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 24 09:37:36 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:37:36 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:37:36 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:37:36.558 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:37:36 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:37:36 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:37:36 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:37:36.947 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:37:36 compute-1 ceph-mon[80009]: pgmap v311: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:37:37 compute-1 python3.9[136781]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/05-nova-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 09:37:37 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:37:37 : epoch 6924270f : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f43ec003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:37:37 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:37:37 : epoch 6924270f : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f441400a470 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:37:37 compute-1 python3.9[136902]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/05-nova-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1763977056.6747687-507-274874583311384/.source.conf _original_basename=05-nova-metadata.conf follow=False checksum=a14d6b38898a379cd37fc0bf365d17f10859446f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 24 09:37:37 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:37:38 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:37:38 : epoch 6924270f : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f44040044d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:37:38 compute-1 python3.9[137053]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 09:37:38 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:37:38 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:37:38 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:37:38.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:37:38 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:37:38 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:37:38 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:37:38.950 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:37:39 compute-1 ceph-mon[80009]: pgmap v312: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Nov 24 09:37:39 compute-1 sudo[137205]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nfqdrfibvdczonwnuqaheqdjkeokboum ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977059.0088933-621-208500239135952/AnsiballZ_file.py'
Nov 24 09:37:39 compute-1 sudo[137205]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:37:39 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:37:39 : epoch 6924270f : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f43f00025d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:37:39 compute-1 python3.9[137207]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 24 09:37:39 compute-1 sudo[137205]: pam_unix(sudo:session): session closed for user root
Nov 24 09:37:39 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:37:39 : epoch 6924270f : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f43ec003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:37:40 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:37:40 : epoch 6924270f : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f441400a470 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:37:40 compute-1 sudo[137358]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yycgrrccqpplhllvvzqdzrpnfyrcfovr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977059.8179278-645-42891060988691/AnsiballZ_stat.py'
Nov 24 09:37:40 compute-1 sudo[137358]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:37:40 compute-1 python3.9[137360]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 09:37:40 compute-1 sudo[137358]: pam_unix(sudo:session): session closed for user root
Nov 24 09:37:40 compute-1 sudo[137436]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wsboslwwnqygulwlkqgvnqdbdrfeblxg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977059.8179278-645-42891060988691/AnsiballZ_file.py'
Nov 24 09:37:40 compute-1 sudo[137436]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:37:40 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:37:40 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:37:40 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:37:40.565 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:37:40 compute-1 python3.9[137438]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 09:37:40 compute-1 sudo[137436]: pam_unix(sudo:session): session closed for user root
Nov 24 09:37:40 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:37:40 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 09:37:40 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:37:40.953 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 09:37:41 compute-1 ceph-mon[80009]: pgmap v313: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:37:41 compute-1 sudo[137538]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 09:37:41 compute-1 sudo[137538]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:37:41 compute-1 sudo[137538]: pam_unix(sudo:session): session closed for user root
Nov 24 09:37:41 compute-1 sudo[137613]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qvdukucetlgvskzsauyrkwwqcfzvkfmk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977060.8566742-645-49645100678565/AnsiballZ_stat.py'
Nov 24 09:37:41 compute-1 sudo[137613]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:37:41 compute-1 python3.9[137615]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 09:37:41 compute-1 sudo[137613]: pam_unix(sudo:session): session closed for user root
Nov 24 09:37:41 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:37:41 : epoch 6924270f : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f44040044d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:37:41 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:37:41 : epoch 6924270f : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f43f0003530 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:37:41 compute-1 sudo[137691]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cpzfjyyswhhgqcrcpkymgmfvswuibagi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977060.8566742-645-49645100678565/AnsiballZ_file.py'
Nov 24 09:37:41 compute-1 sudo[137691]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:37:41 compute-1 python3.9[137693]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 09:37:41 compute-1 sudo[137691]: pam_unix(sudo:session): session closed for user root
Nov 24 09:37:42 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:37:42 : epoch 6924270f : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f43ec003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:37:42 compute-1 sudo[137844]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-abzwewiurpxfpenhcfsxzcoincissgje ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977062.2555876-714-270207346045159/AnsiballZ_file.py'
Nov 24 09:37:42 compute-1 sudo[137844]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:37:42 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:37:42 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:37:42 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:37:42.568 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:37:42 compute-1 python3.9[137846]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:37:42 compute-1 sudo[137844]: pam_unix(sudo:session): session closed for user root
Nov 24 09:37:42 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:37:42 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:37:42 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:37:42 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:37:42.958 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:37:43 compute-1 ceph-mon[80009]: pgmap v314: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Nov 24 09:37:43 compute-1 sudo[137996]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ejqfkzyhomjdehrnkkfmanzhavoyhhwg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977063.0520825-738-39834949823800/AnsiballZ_stat.py'
Nov 24 09:37:43 compute-1 sudo[137996]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:37:43 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:37:43 : epoch 6924270f : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f441400a470 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:37:43 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:37:43 : epoch 6924270f : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f44040044d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:37:43 compute-1 python3.9[137998]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 09:37:43 compute-1 sudo[137996]: pam_unix(sudo:session): session closed for user root
Nov 24 09:37:43 compute-1 sudo[138074]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-spgboiooznvigizytyhukdbmbzghhfom ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977063.0520825-738-39834949823800/AnsiballZ_file.py'
Nov 24 09:37:43 compute-1 sudo[138074]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:37:43 compute-1 python3.9[138076]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:37:43 compute-1 sudo[138074]: pam_unix(sudo:session): session closed for user root
Nov 24 09:37:44 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:37:44 : epoch 6924270f : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f43f0003530 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:37:44 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:37:44 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:37:44 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:37:44.570 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:37:44 compute-1 sudo[138227]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ovinlissebfbqsfvlpblknkamufbixig ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977064.2599804-774-141436002537764/AnsiballZ_stat.py'
Nov 24 09:37:44 compute-1 sudo[138227]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:37:44 compute-1 python3.9[138229]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 09:37:44 compute-1 sudo[138227]: pam_unix(sudo:session): session closed for user root
Nov 24 09:37:44 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:37:44 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:37:44 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:37:44.961 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:37:45 compute-1 sudo[138305]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mlasuochpsttudccfogrquoyqdwinqof ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977064.2599804-774-141436002537764/AnsiballZ_file.py'
Nov 24 09:37:45 compute-1 sudo[138305]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:37:45 compute-1 ceph-mon[80009]: pgmap v315: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:37:45 compute-1 python3.9[138307]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:37:45 compute-1 sudo[138305]: pam_unix(sudo:session): session closed for user root
Nov 24 09:37:45 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 09:37:45 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:37:45 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:37:45 : epoch 6924270f : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f43ec003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:37:45 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:37:45 : epoch 6924270f : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f441400a470 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:37:46 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:37:46 : epoch 6924270f : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f44040044d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:37:46 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:37:46 compute-1 sudo[138458]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kmmhfkqbjeyqprytpjfzhqomjwjtgipb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977065.9415984-810-237964154593425/AnsiballZ_systemd.py'
Nov 24 09:37:46 compute-1 sudo[138458]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:37:46 compute-1 python3.9[138460]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 24 09:37:46 compute-1 systemd[1]: Reloading.
Nov 24 09:37:46 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:37:46 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:37:46 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:37:46.573 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:37:46 compute-1 systemd-rc-local-generator[138488]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 09:37:46 compute-1 systemd-sysv-generator[138491]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 09:37:46 compute-1 sudo[138458]: pam_unix(sudo:session): session closed for user root
Nov 24 09:37:46 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:37:46 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:37:46 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:37:46.965 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:37:47 compute-1 ceph-mon[80009]: pgmap v316: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:37:47 compute-1 sudo[138648]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qpszmrljklxdzntwmvtpucqiowjfvbyc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977067.1117914-834-151904534475765/AnsiballZ_stat.py'
Nov 24 09:37:47 compute-1 sudo[138648]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:37:47 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:37:47 : epoch 6924270f : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f43f0003530 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:37:47 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:37:47 : epoch 6924270f : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f43ec003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:37:47 compute-1 python3.9[138650]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 09:37:47 compute-1 sudo[138648]: pam_unix(sudo:session): session closed for user root
Nov 24 09:37:47 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:37:47 compute-1 sudo[138727]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-asishmuhwvshgtgqcuudhidpsnjmrtsk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977067.1117914-834-151904534475765/AnsiballZ_file.py'
Nov 24 09:37:47 compute-1 sudo[138727]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:37:48 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:37:48 : epoch 6924270f : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f441400a470 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:37:48 compute-1 python3.9[138729]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:37:48 compute-1 sudo[138727]: pam_unix(sudo:session): session closed for user root
Nov 24 09:37:48 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:37:48 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:37:48 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:37:48.577 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:37:48 compute-1 sudo[138879]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fhyslfxmqmkrsyugseynssnvkrvueqqy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977068.4297953-870-149510111472376/AnsiballZ_stat.py'
Nov 24 09:37:48 compute-1 sudo[138879]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:37:48 compute-1 python3.9[138881]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 09:37:48 compute-1 sudo[138879]: pam_unix(sudo:session): session closed for user root
Nov 24 09:37:48 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:37:48 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:37:48 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:37:48.968 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:37:49 compute-1 sudo[138957]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yhdfznfgexzyqnoxsfnkdjkfppwzyuvo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977068.4297953-870-149510111472376/AnsiballZ_file.py'
Nov 24 09:37:49 compute-1 sudo[138957]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:37:49 compute-1 ceph-mon[80009]: pgmap v317: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Nov 24 09:37:49 compute-1 python3.9[138959]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:37:49 compute-1 sudo[138957]: pam_unix(sudo:session): session closed for user root
Nov 24 09:37:49 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:37:49 : epoch 6924270f : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f44040044d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:37:49 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:37:49 : epoch 6924270f : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f43f0003530 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:37:49 compute-1 sudo[139110]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wwlftzebjcupemijzmwswmqodogyddrc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977069.6697156-906-150650707963705/AnsiballZ_systemd.py'
Nov 24 09:37:49 compute-1 sudo[139110]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:37:50 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:37:50 : epoch 6924270f : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f43ec003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:37:50 compute-1 python3.9[139112]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 24 09:37:50 compute-1 systemd[1]: Reloading.
Nov 24 09:37:50 compute-1 systemd-rc-local-generator[139140]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 09:37:50 compute-1 systemd-sysv-generator[139143]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 09:37:50 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:37:50 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:37:50 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:37:50.580 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:37:50 compute-1 systemd[1]: Starting Create netns directory...
Nov 24 09:37:50 compute-1 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Nov 24 09:37:50 compute-1 systemd[1]: netns-placeholder.service: Deactivated successfully.
Nov 24 09:37:50 compute-1 systemd[1]: Finished Create netns directory.
Nov 24 09:37:50 compute-1 sudo[139110]: pam_unix(sudo:session): session closed for user root
Nov 24 09:37:50 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:37:50 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:37:50 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:37:50.972 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:37:51 compute-1 radosgw[81417]: INFO: RGWReshardLock::lock found lock on reshard.0000000000 to be held by another RGW process; skipping for now
Nov 24 09:37:51 compute-1 radosgw[81417]: INFO: RGWReshardLock::lock found lock on reshard.0000000001 to be held by another RGW process; skipping for now
Nov 24 09:37:51 compute-1 radosgw[81417]: INFO: RGWReshardLock::lock found lock on reshard.0000000003 to be held by another RGW process; skipping for now
Nov 24 09:37:51 compute-1 ceph-mon[80009]: pgmap v318: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:37:51 compute-1 radosgw[81417]: INFO: RGWReshardLock::lock found lock on reshard.0000000006 to be held by another RGW process; skipping for now
Nov 24 09:37:51 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:37:51 : epoch 6924270f : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f441400a470 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:37:51 compute-1 sudo[139305]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cadfmgimqpdwgeopkysvezsandgptgez ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977071.2079325-936-89658894904307/AnsiballZ_file.py'
Nov 24 09:37:51 compute-1 sudo[139305]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:37:51 compute-1 radosgw[81417]: INFO: RGWReshardLock::lock found lock on reshard.0000000008 to be held by another RGW process; skipping for now
Nov 24 09:37:51 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:37:51 : epoch 6924270f : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f44040044d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:37:51 compute-1 python3.9[139307]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 09:37:51 compute-1 sudo[139305]: pam_unix(sudo:session): session closed for user root
Nov 24 09:37:52 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:37:52 : epoch 6924270f : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f44040044d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:37:52 compute-1 sudo[139459]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ocivyairfcrwznwdrpteacwzzcnjmasq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977071.9838781-960-232670460355376/AnsiballZ_stat.py'
Nov 24 09:37:52 compute-1 sudo[139459]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:37:52 compute-1 python3.9[139461]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_metadata_agent/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 09:37:52 compute-1 sudo[139459]: pam_unix(sudo:session): session closed for user root
Nov 24 09:37:52 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:37:52 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:37:52 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:37:52.582 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:37:52 compute-1 sudo[139582]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iwlsajcueixnzclnizrxjsuisjrwfcre ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977071.9838781-960-232670460355376/AnsiballZ_copy.py'
Nov 24 09:37:52 compute-1 sudo[139582]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:37:52 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:37:52 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:37:52 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:37:52 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:37:52.975 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:37:53 compute-1 python3.9[139584]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_metadata_agent/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1763977071.9838781-960-232670460355376/.source _original_basename=healthcheck follow=False checksum=898a5a1fcd473cf731177fc866e3bd7ebf20a131 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 24 09:37:53 compute-1 sudo[139582]: pam_unix(sudo:session): session closed for user root
Nov 24 09:37:53 compute-1 ceph-mon[80009]: pgmap v319: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 3.9 KiB/s rd, 0 B/s wr, 6 op/s
Nov 24 09:37:53 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:37:53 : epoch 6924270f : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f44040044d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:37:53 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:37:53 : epoch 6924270f : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f44040044d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:37:53 compute-1 sudo[139735]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zorozenxhurkqdgkkbeehddcuzopjlwh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977073.7084463-1011-108387806819789/AnsiballZ_file.py'
Nov 24 09:37:53 compute-1 sudo[139735]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:37:54 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:37:54 : epoch 6924270f : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f43f0003530 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:37:54 compute-1 python3.9[139737]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 24 09:37:54 compute-1 sudo[139735]: pam_unix(sudo:session): session closed for user root
Nov 24 09:37:54 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:37:54 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:37:54 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:37:54.586 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:37:54 compute-1 sudo[139887]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nfubvhwysvemriabtyclhojfajnouatq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977074.4968886-1035-257021208689875/AnsiballZ_stat.py'
Nov 24 09:37:54 compute-1 sudo[139887]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:37:54 compute-1 python3.9[139889]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_metadata_agent.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 09:37:54 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:37:54 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 09:37:54 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:37:54.978 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 09:37:54 compute-1 sudo[139887]: pam_unix(sudo:session): session closed for user root
Nov 24 09:37:55 compute-1 ceph-mon[80009]: pgmap v320: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 3.7 KiB/s rd, 0 B/s wr, 6 op/s
Nov 24 09:37:55 compute-1 sudo[140010]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ghegaztojzratjcjzecrvnlwaduiskan ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977074.4968886-1035-257021208689875/AnsiballZ_copy.py'
Nov 24 09:37:55 compute-1 sudo[140010]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:37:55 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:37:55 : epoch 6924270f : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f43ec003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:37:55 compute-1 python3.9[140012]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_metadata_agent.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1763977074.4968886-1035-257021208689875/.source.json _original_basename=.8qwzfune follow=False checksum=a908ef151ded3a33ae6c9ac8be72a35e5e33b9dc backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:37:55 compute-1 sudo[140010]: pam_unix(sudo:session): session closed for user root
Nov 24 09:37:55 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:37:55 : epoch 6924270f : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f43ec003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:37:56 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:37:56 : epoch 6924270f : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f43ec003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:37:56 compute-1 sudo[140163]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-emerhgkefsiupokiopmvxdqmthxdydsi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977075.845596-1080-217758622468213/AnsiballZ_file.py'
Nov 24 09:37:56 compute-1 sudo[140163]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:37:56 compute-1 python3.9[140165]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:37:56 compute-1 ceph-mon[80009]: pgmap v321: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 3.7 KiB/s rd, 0 B/s wr, 6 op/s
Nov 24 09:37:56 compute-1 sudo[140163]: pam_unix(sudo:session): session closed for user root
Nov 24 09:37:56 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:37:56 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 09:37:56 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:37:56.590 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 09:37:56 compute-1 sudo[140315]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kzjeucgfnzsoyhcpeheacrsmrgqczsaz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977076.7434897-1104-59989965322906/AnsiballZ_stat.py'
Nov 24 09:37:56 compute-1 sudo[140315]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:37:56 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:37:56 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:37:56 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:37:56.983 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:37:57 compute-1 sudo[140315]: pam_unix(sudo:session): session closed for user root
Nov 24 09:37:57 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:37:57 : epoch 6924270f : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f43f0003530 fd 38 proxy ignored for local
Nov 24 09:37:57 compute-1 kernel: ganesha.nfsd[135631]: segfault at 50 ip 00007f44c079232e sp 00007f447cff8210 error 4 in libntirpc.so.5.8[7f44c0777000+2c000] likely on CPU 0 (core 0, socket 0)
Nov 24 09:37:57 compute-1 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Nov 24 09:37:57 compute-1 systemd[1]: Started Process Core Dump (PID 140411/UID 0).
Nov 24 09:37:57 compute-1 sudo[140440]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jsrezoymztxviharpmehndphpfjpxqil ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977076.7434897-1104-59989965322906/AnsiballZ_copy.py'
Nov 24 09:37:57 compute-1 sudo[140440]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:37:57 compute-1 sudo[140440]: pam_unix(sudo:session): session closed for user root
Nov 24 09:37:57 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:37:58 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:37:58 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:37:58 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:37:58.593 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:37:58 compute-1 sudo[140593]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mlqdmdqspxhuyjeufjnfpzuqfyuswete ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977078.2960045-1155-257005166294766/AnsiballZ_container_config_data.py'
Nov 24 09:37:58 compute-1 sudo[140593]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:37:58 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:37:58 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:37:58 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:37:58.987 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:37:59 compute-1 ceph-mon[80009]: pgmap v322: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 85 KiB/s rd, 0 B/s wr, 140 op/s
Nov 24 09:37:59 compute-1 systemd-coredump[140413]: Process 125772 (ganesha.nfsd) of user 0 dumped core.
                                                    
                                                    Stack trace of thread 58:
                                                    #0  0x00007f44c079232e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)
                                                    ELF object binary architecture: AMD x86-64
Nov 24 09:37:59 compute-1 systemd[1]: systemd-coredump@3-140411-0.service: Deactivated successfully.
Nov 24 09:37:59 compute-1 systemd[1]: systemd-coredump@3-140411-0.service: Consumed 1.225s CPU time.
Nov 24 09:37:59 compute-1 python3.9[140595]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_pattern=*.json debug=False
Nov 24 09:37:59 compute-1 sudo[140593]: pam_unix(sudo:session): session closed for user root
Nov 24 09:37:59 compute-1 podman[140601]: 2025-11-24 09:37:59.361282502 +0000 UTC m=+0.028779527 container died 6cabd88b91f4fd1437b7ff52ddca5cb05345de4c636d0778a8357b125db16eaf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 24 09:37:59 compute-1 systemd[1]: var-lib-containers-storage-overlay-e32851b5a2e93b9365e0cf37abccf059ff991f45f064509199c8a8139824910b-merged.mount: Deactivated successfully.
Nov 24 09:37:59 compute-1 podman[140601]: 2025-11-24 09:37:59.402326316 +0000 UTC m=+0.069823321 container remove 6cabd88b91f4fd1437b7ff52ddca5cb05345de4c636d0778a8357b125db16eaf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Nov 24 09:37:59 compute-1 systemd[1]: ceph-84a084c3-61a7-5de7-8207-1f88efa59a64@nfs.cephfs.0.0.compute-1.vvoanr.service: Main process exited, code=exited, status=139/n/a
Nov 24 09:37:59 compute-1 systemd[1]: ceph-84a084c3-61a7-5de7-8207-1f88efa59a64@nfs.cephfs.0.0.compute-1.vvoanr.service: Failed with result 'exit-code'.
Nov 24 09:37:59 compute-1 systemd[1]: ceph-84a084c3-61a7-5de7-8207-1f88efa59a64@nfs.cephfs.0.0.compute-1.vvoanr.service: Consumed 1.620s CPU time.
Nov 24 09:38:00 compute-1 sudo[140796]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qopulubupcrdtjtevfhfuqriqlamxsog ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977079.6280293-1182-236425526075311/AnsiballZ_container_config_hash.py'
Nov 24 09:38:00 compute-1 sudo[140796]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:38:00 compute-1 python3.9[140798]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Nov 24 09:38:00 compute-1 sudo[140796]: pam_unix(sudo:session): session closed for user root
Nov 24 09:38:00 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 09:38:00 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:38:00 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:38:00 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:38:00 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:38:00.596 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:38:00 compute-1 ceph-mon[80009]: pgmap v323: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 84 KiB/s rd, 0 B/s wr, 140 op/s
Nov 24 09:38:00 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:38:00 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:38:00 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 09:38:00 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:38:00.990 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 09:38:01 compute-1 sudo[140948]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-imysnpvoviqbeysjznfllsttxvddjwat ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977080.6129782-1209-111927028139870/AnsiballZ_podman_container_info.py'
Nov 24 09:38:01 compute-1 sudo[140948]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:38:01 compute-1 sudo[140951]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 09:38:01 compute-1 sudo[140951]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:38:01 compute-1 sudo[140951]: pam_unix(sudo:session): session closed for user root
Nov 24 09:38:01 compute-1 python3.9[140950]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Nov 24 09:38:01 compute-1 sudo[140948]: pam_unix(sudo:session): session closed for user root
Nov 24 09:38:02 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:38:02 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:38:02 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:38:02.600 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:38:02 compute-1 ceph-mon[80009]: pgmap v324: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 85 KiB/s rd, 0 B/s wr, 140 op/s
Nov 24 09:38:02 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:38:02 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:38:02 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:38:02 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:38:02.994 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:38:03 compute-1 sudo[141153]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dlifvsbfsuwvwqxlgutwsysbhcbvqfyl ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1763977082.433377-1248-252711163412796/AnsiballZ_edpm_container_manage.py'
Nov 24 09:38:03 compute-1 sudo[141153]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:38:03 compute-1 python3[141155]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_id=ovn_metadata_agent config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Nov 24 09:38:03 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-haproxy-nfs-cephfs-compute-1-rsdpvy[85975]: [WARNING] 327/093803 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Nov 24 09:38:04 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:38:04 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:38:04 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:38:04.603 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:38:04 compute-1 ceph-mon[80009]: pgmap v325: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 81 KiB/s rd, 0 B/s wr, 134 op/s
Nov 24 09:38:04 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:38:04 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:38:04 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:38:04.997 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:38:06 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:38:06 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 09:38:06 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:38:06.605 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 09:38:06 compute-1 podman[141218]: 2025-11-24 09:38:06.769667774 +0000 UTC m=+0.498913465 container health_status c4a7fece2a8abbd773f184380ebf0443df5298e1c3e7a580495db5c46749acd2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251118, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Nov 24 09:38:07 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:38:07 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:38:07 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:38:07.000 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:38:07 compute-1 ceph-mon[80009]: pgmap v326: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 81 KiB/s rd, 0 B/s wr, 134 op/s
Nov 24 09:38:07 compute-1 sudo[141259]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 09:38:07 compute-1 sudo[141259]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:38:07 compute-1 sudo[141259]: pam_unix(sudo:session): session closed for user root
Nov 24 09:38:07 compute-1 sudo[141284]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Nov 24 09:38:07 compute-1 sudo[141284]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:38:07 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:38:08 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:38:08 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:38:08 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:38:08.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:38:09 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:38:09 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:38:09 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:38:09.003 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:38:09 compute-1 ceph-mon[80009]: pgmap v327: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 81 KiB/s rd, 0 B/s wr, 134 op/s
Nov 24 09:38:09 compute-1 systemd[1]: ceph-84a084c3-61a7-5de7-8207-1f88efa59a64@nfs.cephfs.0.0.compute-1.vvoanr.service: Scheduled restart job, restart counter is at 4.
Nov 24 09:38:09 compute-1 systemd[1]: Stopped Ceph nfs.cephfs.0.0.compute-1.vvoanr for 84a084c3-61a7-5de7-8207-1f88efa59a64.
Nov 24 09:38:09 compute-1 systemd[1]: ceph-84a084c3-61a7-5de7-8207-1f88efa59a64@nfs.cephfs.0.0.compute-1.vvoanr.service: Consumed 1.620s CPU time.
Nov 24 09:38:09 compute-1 systemd[1]: Starting Ceph nfs.cephfs.0.0.compute-1.vvoanr for 84a084c3-61a7-5de7-8207-1f88efa59a64...
Nov 24 09:38:10 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:38:10 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:38:10 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:38:10.612 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:38:10 compute-1 sudo[141284]: pam_unix(sudo:session): session closed for user root
Nov 24 09:38:11 compute-1 ceph-mon[80009]: pgmap v328: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Nov 24 09:38:11 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:38:11 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 09:38:11 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:38:11.006 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 09:38:11 compute-1 podman[141434]: 2025-11-24 09:38:11.901472511 +0000 UTC m=+0.043313749 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 09:38:12 compute-1 podman[141434]: 2025-11-24 09:38:12.169217671 +0000 UTC m=+0.311058889 container create 679112fde20091df69e7ec390984a19f2940b3f9ab05818fbcda8617354fdc82 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 09:38:12 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6cc9e1a38f1f877fde7085e5844e3e0c94cc5d0c869a65827de724ee9e26bcda/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Nov 24 09:38:12 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6cc9e1a38f1f877fde7085e5844e3e0c94cc5d0c869a65827de724ee9e26bcda/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 09:38:12 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6cc9e1a38f1f877fde7085e5844e3e0c94cc5d0c869a65827de724ee9e26bcda/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 09:38:12 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6cc9e1a38f1f877fde7085e5844e3e0c94cc5d0c869a65827de724ee9e26bcda/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.0.0.compute-1.vvoanr-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 09:38:12 compute-1 podman[141434]: 2025-11-24 09:38:12.248975071 +0000 UTC m=+0.390816289 container init 679112fde20091df69e7ec390984a19f2940b3f9ab05818fbcda8617354fdc82 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 24 09:38:12 compute-1 podman[141434]: 2025-11-24 09:38:12.25469336 +0000 UTC m=+0.396534578 container start 679112fde20091df69e7ec390984a19f2940b3f9ab05818fbcda8617354fdc82 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 24 09:38:12 compute-1 bash[141434]: 679112fde20091df69e7ec390984a19f2940b3f9ab05818fbcda8617354fdc82
Nov 24 09:38:12 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[141451]: 24/11/2025 09:38:12 : epoch 69242784 : compute-1 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Nov 24 09:38:12 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[141451]: 24/11/2025 09:38:12 : epoch 69242784 : compute-1 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Nov 24 09:38:12 compute-1 podman[141168]: 2025-11-24 09:38:12.264658591 +0000 UTC m=+8.879166169 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 24 09:38:12 compute-1 systemd[1]: Started Ceph nfs.cephfs.0.0.compute-1.vvoanr for 84a084c3-61a7-5de7-8207-1f88efa59a64.
Nov 24 09:38:12 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[141451]: 24/11/2025 09:38:12 : epoch 69242784 : compute-1 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Nov 24 09:38:12 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[141451]: 24/11/2025 09:38:12 : epoch 69242784 : compute-1 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Nov 24 09:38:12 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[141451]: 24/11/2025 09:38:12 : epoch 69242784 : compute-1 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Nov 24 09:38:12 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[141451]: 24/11/2025 09:38:12 : epoch 69242784 : compute-1 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Nov 24 09:38:12 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[141451]: 24/11/2025 09:38:12 : epoch 69242784 : compute-1 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Nov 24 09:38:12 compute-1 podman[141492]: 2025-11-24 09:38:12.41874347 +0000 UTC m=+0.058118717 container create 6794d9d2f16f865643977633ea2bfec0506af6c4f15262c278b71a4c0754ea0b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20251118, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 24 09:38:12 compute-1 podman[141492]: 2025-11-24 09:38:12.381810216 +0000 UTC m=+0.021185493 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 24 09:38:12 compute-1 python3[141155]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_metadata_agent --cgroupns=host --conmon-pidfile /run/ovn_metadata_agent.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env EDPM_CONFIG_HASH=0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d --healthcheck-command /openstack/healthcheck --label config_id=ovn_metadata_agent --label container_name=ovn_metadata_agent --label managed_by=edpm_ansible --label config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']} --log-driver journald --log-level info --network host --pid host --privileged=True --user root --volume /run/openvswitch:/run/openvswitch:z --volume /var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z --volume /run/netns:/run/netns:shared --volume /var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/neutron:/var/lib/neutron:shared,z --volume /var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro --volume /var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro --volume /var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 24 09:38:12 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[141451]: 24/11/2025 09:38:12 : epoch 69242784 : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 09:38:12 compute-1 sudo[141153]: pam_unix(sudo:session): session closed for user root
Nov 24 09:38:12 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:38:12 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:38:12 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:38:12.615 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:38:12 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 24 09:38:12 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 09:38:12 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Nov 24 09:38:12 compute-1 ceph-mon[80009]: log_channel(audit) log [INF] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 09:38:12 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Nov 24 09:38:12 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Nov 24 09:38:12 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Nov 24 09:38:12 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 09:38:12 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Nov 24 09:38:12 compute-1 ceph-mon[80009]: log_channel(audit) log [INF] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 09:38:12 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 24 09:38:12 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 09:38:12 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:38:13 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:38:13 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:38:13 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:38:13.009 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:38:13 compute-1 ceph-mon[80009]: pgmap v329: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:38:13 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 09:38:13 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 09:38:13 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:38:13 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:38:13 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 09:38:13 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 09:38:13 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 09:38:13 compute-1 sudo[141702]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ejgwdkftkhcybkiotsshmuacpdhzuwvc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977093.2108989-1272-226366641911465/AnsiballZ_stat.py'
Nov 24 09:38:13 compute-1 sudo[141702]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:38:13 compute-1 python3.9[141704]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 09:38:13 compute-1 sudo[141702]: pam_unix(sudo:session): session closed for user root
Nov 24 09:38:14 compute-1 ceph-mon[80009]: pgmap v330: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Nov 24 09:38:14 compute-1 sudo[141857]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yifbnktnegzttxuhinjjfpmpwnapdbba ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977094.1828291-1299-210700570381050/AnsiballZ_file.py'
Nov 24 09:38:14 compute-1 sudo[141857]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:38:14 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:38:14 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:38:14 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:38:14.618 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:38:14 compute-1 python3.9[141859]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:38:14 compute-1 sudo[141857]: pam_unix(sudo:session): session closed for user root
Nov 24 09:38:14 compute-1 sudo[141933]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fmxjwkrzdwyqkwqrqanzosckiffoyaah ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977094.1828291-1299-210700570381050/AnsiballZ_stat.py'
Nov 24 09:38:14 compute-1 sudo[141933]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:38:15 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:38:15 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:38:15 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:38:15.012 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:38:15 compute-1 python3.9[141935]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 09:38:15 compute-1 sudo[141933]: pam_unix(sudo:session): session closed for user root
Nov 24 09:38:15 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 09:38:15 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:38:15 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:38:15 compute-1 sudo[142084]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jbzdumhgitfworccsxcuqhqudkciddhq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977095.215669-1299-3560948753067/AnsiballZ_copy.py'
Nov 24 09:38:15 compute-1 sudo[142084]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:38:15 compute-1 python3.9[142086]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1763977095.215669-1299-3560948753067/source dest=/etc/systemd/system/edpm_ovn_metadata_agent.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:38:15 compute-1 sudo[142084]: pam_unix(sudo:session): session closed for user root
Nov 24 09:38:16 compute-1 sudo[142161]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zjimrtokitiztmvlrqlvnyoapxtaslhx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977095.215669-1299-3560948753067/AnsiballZ_systemd.py'
Nov 24 09:38:16 compute-1 sudo[142161]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:38:16 compute-1 ceph-mon[80009]: pgmap v331: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Nov 24 09:38:16 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:38:16 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:38:16 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:38:16.620 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:38:16 compute-1 python3.9[142163]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 24 09:38:16 compute-1 systemd[1]: Reloading.
Nov 24 09:38:16 compute-1 systemd-rc-local-generator[142191]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 09:38:16 compute-1 systemd-sysv-generator[142195]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 09:38:17 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:38:17 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:38:17 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:38:17.016 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:38:17 compute-1 sudo[142161]: pam_unix(sudo:session): session closed for user root
Nov 24 09:38:17 compute-1 sudo[142272]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ywosdcbdbcgjmsjxgzhwmlyailteykbx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977095.215669-1299-3560948753067/AnsiballZ_systemd.py'
Nov 24 09:38:17 compute-1 sudo[142272]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:38:17 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 24 09:38:17 compute-1 python3.9[142274]: ansible-systemd Invoked with state=restarted name=edpm_ovn_metadata_agent.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 24 09:38:17 compute-1 systemd[1]: Reloading.
Nov 24 09:38:17 compute-1 systemd-rc-local-generator[142304]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 09:38:17 compute-1 systemd-sysv-generator[142307]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
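
This second ansible-systemd call (state=restarted, enabled=True, daemon_reload=False) enables and restarts the unit copied in at 09:38:15; the daemon-reload was already performed by the previous task. Outside Ansible the same effect is roughly:

    # What the ansible-systemd task above boils down to; run as root.
    import subprocess

    for cmd in (["systemctl", "enable", "edpm_ovn_metadata_agent.service"],
                ["systemctl", "restart", "edpm_ovn_metadata_agent.service"]):
        subprocess.run(cmd, check=True)
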
Nov 24 09:38:17 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:38:18 compute-1 systemd[1]: Starting ovn_metadata_agent container...
Nov 24 09:38:18 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 24 09:38:18 compute-1 systemd[1]: Started libcrun container.
Nov 24 09:38:18 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9f713fb5a8a62c677df2ae45d949700fe56a36660790c930a7d72c59f7bc7c3/merged/etc/neutron.conf.d supports timestamps until 2038 (0x7fffffff)
Nov 24 09:38:18 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9f713fb5a8a62c677df2ae45d949700fe56a36660790c930a7d72c59f7bc7c3/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
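
The two xfs notices are informational: these overlay-mounted paths carry classic 32-bit inode timestamps, which run out at 0x7fffffff seconds after the Unix epoch. Decoding that limit shows where the quoted year 2038 comes from:

    # Decode the 0x7fffffff limit the xfs remount warnings refer to.
    import datetime

    limit = datetime.datetime.fromtimestamp(0x7FFFFFFF, tz=datetime.timezone.utc)
    print(limit.isoformat())   # 2038-01-19T03:14:07+00:00
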
Nov 24 09:38:18 compute-1 systemd[1]: Started /usr/bin/podman healthcheck run 6794d9d2f16f865643977633ea2bfec0506af6c4f15262c278b71a4c0754ea0b.
Nov 24 09:38:18 compute-1 podman[142316]: 2025-11-24 09:38:18.160813825 +0000 UTC m=+0.109041830 container init 6794d9d2f16f865643977633ea2bfec0506af6c4f15262c278b71a4c0754ea0b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2)
Nov 24 09:38:18 compute-1 ovn_metadata_agent[142331]: + sudo -E kolla_set_configs
Nov 24 09:38:18 compute-1 podman[142316]: 2025-11-24 09:38:18.182501731 +0000 UTC m=+0.130729716 container start 6794d9d2f16f865643977633ea2bfec0506af6c4f15262c278b71a4c0754ea0b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
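
The container init/start events above embed the entire config_data dict as a Python literal inside one log field, which makes the lines unwieldy but machine-readable. A sketch for pulling, say, the bind mounts back out of such a line; the file name is hypothetical and the brace matching is naive but sufficient for these labels:

    # Extract the config_data dict from a podman container-start log line.
    # The label is a Python literal, so ast.literal_eval can parse it.
    import ast

    with open("compute-1-messages.log") as f:     # hypothetical log copy
        line = next(l for l in f if "container start" in l and "config_data=" in l)

    start = line.index("config_data=") + len("config_data=")
    depth, end = 0, start
    for i, ch in enumerate(line[start:], start):
        depth += {"{": 1, "}": -1}.get(ch, 0)
        if depth == 0 and ch == "}":
            end = i + 1
            break
    config_data = ast.literal_eval(line[start:end])
    for volume in config_data["volumes"]:
        print(volume)
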
Nov 24 09:38:18 compute-1 edpm-start-podman-container[142316]: ovn_metadata_agent
Nov 24 09:38:18 compute-1 ovn_metadata_agent[142331]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Nov 24 09:38:18 compute-1 ovn_metadata_agent[142331]: INFO:__main__:Validating config file
Nov 24 09:38:18 compute-1 ovn_metadata_agent[142331]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Nov 24 09:38:18 compute-1 ovn_metadata_agent[142331]: INFO:__main__:Copying service configuration files
Nov 24 09:38:18 compute-1 ovn_metadata_agent[142331]: INFO:__main__:Deleting /etc/neutron/rootwrap.conf
Nov 24 09:38:18 compute-1 ovn_metadata_agent[142331]: INFO:__main__:Copying /etc/neutron.conf.d/01-rootwrap.conf to /etc/neutron/rootwrap.conf
Nov 24 09:38:18 compute-1 ovn_metadata_agent[142331]: INFO:__main__:Setting permission for /etc/neutron/rootwrap.conf
Nov 24 09:38:18 compute-1 ovn_metadata_agent[142331]: INFO:__main__:Writing out command to execute
Nov 24 09:38:18 compute-1 ovn_metadata_agent[142331]: INFO:__main__:Setting permission for /var/lib/neutron
Nov 24 09:38:18 compute-1 ovn_metadata_agent[142331]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts
Nov 24 09:38:18 compute-1 ovn_metadata_agent[142331]: INFO:__main__:Setting permission for /var/lib/neutron/ovn-metadata-proxy
Nov 24 09:38:18 compute-1 ovn_metadata_agent[142331]: INFO:__main__:Setting permission for /var/lib/neutron/external
Nov 24 09:38:18 compute-1 ovn_metadata_agent[142331]: INFO:__main__:Setting permission for /var/lib/neutron/ovn_metadata_haproxy_wrapper
Nov 24 09:38:18 compute-1 ovn_metadata_agent[142331]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts/haproxy-kill
Nov 24 09:38:18 compute-1 ovn_metadata_agent[142331]: INFO:__main__:Setting permission for /var/lib/neutron/external/pids
Nov 24 09:38:18 compute-1 ovn_metadata_agent[142331]: ++ cat /run_command
Nov 24 09:38:18 compute-1 edpm-start-podman-container[142315]: Creating additional drop-in dependency for "ovn_metadata_agent" (6794d9d2f16f865643977633ea2bfec0506af6c4f15262c278b71a4c0754ea0b)
Nov 24 09:38:18 compute-1 ovn_metadata_agent[142331]: + CMD=neutron-ovn-metadata-agent
Nov 24 09:38:18 compute-1 ovn_metadata_agent[142331]: + ARGS=
Nov 24 09:38:18 compute-1 ovn_metadata_agent[142331]: + sudo kolla_copy_cacerts
Nov 24 09:38:18 compute-1 podman[142338]: 2025-11-24 09:38:18.246149641 +0000 UTC m=+0.054068929 container health_status 6794d9d2f16f865643977633ea2bfec0506af6c4f15262c278b71a4c0754ea0b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118)
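
The health_status=healthy event comes from the transient "podman healthcheck run" unit started at 09:38:18, which executes the configured test command (/openstack/healthcheck, bind-mounted read-only into the container). The same check can be run by hand:

    # Run the container health check the way the transient systemd unit
    # above does; exit status 0 maps to health_status=healthy.
    import subprocess

    cid = "6794d9d2f16f865643977633ea2bfec0506af6c4f15262c278b71a4c0754ea0b"
    rc = subprocess.run(["podman", "healthcheck", "run", cid]).returncode
    print("healthy" if rc == 0 else "unhealthy")
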
Nov 24 09:38:18 compute-1 ovn_metadata_agent[142331]: + [[ ! -n '' ]]
Nov 24 09:38:18 compute-1 ovn_metadata_agent[142331]: + . kolla_extend_start
Nov 24 09:38:18 compute-1 ovn_metadata_agent[142331]: + echo 'Running command: '\''neutron-ovn-metadata-agent'\'''
Nov 24 09:38:18 compute-1 ovn_metadata_agent[142331]: Running command: 'neutron-ovn-metadata-agent'
Nov 24 09:38:18 compute-1 ovn_metadata_agent[142331]: + umask 0022
Nov 24 09:38:18 compute-1 ovn_metadata_agent[142331]: + exec neutron-ovn-metadata-agent
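
The traced startup above is kolla's standard two-step entrypoint: kolla_set_configs reads /var/lib/kolla/config_files/config.json and copies the declared files into place (the COPY_ALWAYS strategy), then the command stored in /run_command is exec'd. A compressed sketch of that flow; the real script also handles deletions, globs, ownership and the separate permissions list:

    # Sketch of the kolla entrypoint flow logged above: load config.json,
    # copy each declared config file, then exec the service command.
    # Field names follow kolla's config.json schema; error handling omitted.
    import json
    import os
    import shutil

    with open("/var/lib/kolla/config_files/config.json") as f:
        cfg = json.load(f)

    # "Copying ... / Setting permission for ..." lines in the log
    for item in cfg.get("config_files", []):
        shutil.copy(item["source"], item["dest"])
        os.chmod(item["dest"], int(item.get("perm", "0644"), 8))

    # "+ CMD=... / + exec ..." lines in the log
    argv = cfg["command"].split()
    os.execvp(argv[0], argv)
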
Nov 24 09:38:18 compute-1 systemd[1]: Reloading.
Nov 24 09:38:18 compute-1 systemd-rc-local-generator[142405]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 09:38:18 compute-1 systemd-sysv-generator[142410]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 09:38:18 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[141451]: 24/11/2025 09:38:18 : epoch 69242784 : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 09:38:18 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[141451]: 24/11/2025 09:38:18 : epoch 69242784 : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 09:38:18 compute-1 systemd[1]: Started ovn_metadata_agent container.
Nov 24 09:38:18 compute-1 sudo[142272]: pam_unix(sudo:session): session closed for user root
Nov 24 09:38:18 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:38:18 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:38:18 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:38:18.622 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:38:18 compute-1 sudo[142444]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 24 09:38:18 compute-1 sudo[142444]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:38:18 compute-1 sudo[142444]: pam_unix(sudo:session): session closed for user root
Nov 24 09:38:19 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:38:19 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:38:19 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:38:19.020 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:38:19 compute-1 ceph-mon[80009]: pgmap v332: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Nov 24 09:38:19 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:38:19 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:38:19 compute-1 sshd-session[133602]: Connection closed by 192.168.122.30 port 47390
Nov 24 09:38:19 compute-1 sshd-session[133599]: pam_unix(sshd:session): session closed for user zuul
Nov 24 09:38:19 compute-1 systemd[1]: session-51.scope: Deactivated successfully.
Nov 24 09:38:19 compute-1 systemd[1]: session-51.scope: Consumed 55.077s CPU time.
Nov 24 09:38:19 compute-1 systemd-logind[823]: Session 51 logged out. Waiting for processes to exit.
Nov 24 09:38:19 compute-1 systemd-logind[823]: Removed session 51.
Nov 24 09:38:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.002 142336 INFO neutron.common.config [-] Logging enabled!
Nov 24 09:38:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.003 142336 INFO neutron.common.config [-] /usr/bin/neutron-ovn-metadata-agent version 22.2.2.dev43
Nov 24 09:38:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.003 142336 DEBUG neutron.common.config [-] command line: /usr/bin/neutron-ovn-metadata-agent setup_logging /usr/lib/python3.9/site-packages/neutron/common/config.py:123
Nov 24 09:38:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.003 142336 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Nov 24 09:38:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.003 142336 DEBUG neutron.agent.ovn.metadata_agent [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Nov 24 09:38:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.003 142336 DEBUG neutron.agent.ovn.metadata_agent [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Nov 24 09:38:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.003 142336 DEBUG neutron.agent.ovn.metadata_agent [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Nov 24 09:38:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.004 142336 DEBUG neutron.agent.ovn.metadata_agent [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Nov 24 09:38:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.004 142336 DEBUG neutron.agent.ovn.metadata_agent [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.004 142336 DEBUG neutron.agent.ovn.metadata_agent [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.004 142336 DEBUG neutron.agent.ovn.metadata_agent [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.004 142336 DEBUG neutron.agent.ovn.metadata_agent [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.004 142336 DEBUG neutron.agent.ovn.metadata_agent [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.004 142336 DEBUG neutron.agent.ovn.metadata_agent [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.004 142336 DEBUG neutron.agent.ovn.metadata_agent [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.004 142336 DEBUG neutron.agent.ovn.metadata_agent [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.005 142336 DEBUG neutron.agent.ovn.metadata_agent [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.005 142336 DEBUG neutron.agent.ovn.metadata_agent [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.005 142336 DEBUG neutron.agent.ovn.metadata_agent [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.005 142336 DEBUG neutron.agent.ovn.metadata_agent [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.005 142336 DEBUG neutron.agent.ovn.metadata_agent [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.005 142336 DEBUG neutron.agent.ovn.metadata_agent [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.005 142336 DEBUG neutron.agent.ovn.metadata_agent [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.005 142336 DEBUG neutron.agent.ovn.metadata_agent [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.005 142336 DEBUG neutron.agent.ovn.metadata_agent [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.005 142336 DEBUG neutron.agent.ovn.metadata_agent [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.006 142336 DEBUG neutron.agent.ovn.metadata_agent [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.006 142336 DEBUG neutron.agent.ovn.metadata_agent [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.006 142336 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.006 142336 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.006 142336 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.006 142336 DEBUG neutron.agent.ovn.metadata_agent [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.006 142336 DEBUG neutron.agent.ovn.metadata_agent [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.006 142336 DEBUG neutron.agent.ovn.metadata_agent [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.006 142336 DEBUG neutron.agent.ovn.metadata_agent [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.007 142336 DEBUG neutron.agent.ovn.metadata_agent [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.007 142336 DEBUG neutron.agent.ovn.metadata_agent [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.007 142336 DEBUG neutron.agent.ovn.metadata_agent [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.007 142336 DEBUG neutron.agent.ovn.metadata_agent [-] host                           = compute-1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.007 142336 DEBUG neutron.agent.ovn.metadata_agent [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.007 142336 DEBUG neutron.agent.ovn.metadata_agent [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.007 142336 DEBUG neutron.agent.ovn.metadata_agent [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.007 142336 DEBUG neutron.agent.ovn.metadata_agent [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.007 142336 DEBUG neutron.agent.ovn.metadata_agent [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.008 142336 DEBUG neutron.agent.ovn.metadata_agent [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.008 142336 DEBUG neutron.agent.ovn.metadata_agent [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.008 142336 DEBUG neutron.agent.ovn.metadata_agent [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.008 142336 DEBUG neutron.agent.ovn.metadata_agent [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.008 142336 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.008 142336 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.008 142336 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.008 142336 DEBUG neutron.agent.ovn.metadata_agent [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.008 142336 DEBUG neutron.agent.ovn.metadata_agent [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.008 142336 DEBUG neutron.agent.ovn.metadata_agent [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.009 142336 DEBUG neutron.agent.ovn.metadata_agent [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.009 142336 DEBUG neutron.agent.ovn.metadata_agent [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.009 142336 DEBUG neutron.agent.ovn.metadata_agent [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.009 142336 DEBUG neutron.agent.ovn.metadata_agent [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.009 142336 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.009 142336 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.009 142336 DEBUG neutron.agent.ovn.metadata_agent [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.009 142336 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.009 142336 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.009 142336 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.010 142336 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.010 142336 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.010 142336 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.010 142336 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.010 142336 DEBUG neutron.agent.ovn.metadata_agent [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.010 142336 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.010 142336 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.010 142336 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.010 142336 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.011 142336 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.011 142336 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.011 142336 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.011 142336 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.011 142336 DEBUG neutron.agent.ovn.metadata_agent [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.011 142336 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.011 142336 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.011 142336 DEBUG neutron.agent.ovn.metadata_agent [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.011 142336 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.011 142336 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.012 142336 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.012 142336 DEBUG neutron.agent.ovn.metadata_agent [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.012 142336 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.012 142336 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.012 142336 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.012 142336 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.012 142336 DEBUG neutron.agent.ovn.metadata_agent [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.012 142336 DEBUG neutron.agent.ovn.metadata_agent [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.013 142336 DEBUG neutron.agent.ovn.metadata_agent [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.013 142336 DEBUG neutron.agent.ovn.metadata_agent [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.013 142336 DEBUG neutron.agent.ovn.metadata_agent [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.013 142336 DEBUG neutron.agent.ovn.metadata_agent [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.013 142336 DEBUG neutron.agent.ovn.metadata_agent [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.013 142336 DEBUG neutron.agent.ovn.metadata_agent [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.013 142336 DEBUG neutron.agent.ovn.metadata_agent [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.013 142336 DEBUG neutron.agent.ovn.metadata_agent [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.013 142336 DEBUG neutron.agent.ovn.metadata_agent [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.013 142336 DEBUG neutron.agent.ovn.metadata_agent [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.014 142336 DEBUG neutron.agent.ovn.metadata_agent [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.014 142336 DEBUG neutron.agent.ovn.metadata_agent [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.014 142336 DEBUG neutron.agent.ovn.metadata_agent [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.014 142336 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.014 142336 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.014 142336 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.014 142336 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.014 142336 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.014 142336 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.015 142336 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.015 142336 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.015 142336 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.015 142336 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.015 142336 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.015 142336 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.015 142336 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.015 142336 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.015 142336 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.016 142336 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.016 142336 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.016 142336 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.016 142336 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.016 142336 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.016 142336 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.016 142336 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.016 142336 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.016 142336 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.017 142336 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.017 142336 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.017 142336 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.017 142336 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.017 142336 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.017 142336 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.017 142336 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.017 142336 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.017 142336 DEBUG neutron.agent.ovn.metadata_agent [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.018 142336 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.018 142336 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.018 142336 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.018 142336 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.018 142336 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.thread_pool_size       = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.018 142336 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.018 142336 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.018 142336 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.018 142336 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.019 142336 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.019 142336 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.019 142336 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.019 142336 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.019 142336 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.019 142336 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.019 142336 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.019 142336 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.019 142336 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.019 142336 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.020 142336 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.020 142336 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.020 142336 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.020 142336 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.020 142336 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.020 142336 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.020 142336 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.020 142336 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.021 142336 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.021 142336 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.021 142336 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.021 142336 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.021 142336 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.021 142336 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.021 142336 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.021 142336 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.thread_pool_size  = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.021 142336 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.022 142336 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.022 142336 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.022 142336 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.022 142336 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.022 142336 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.022 142336 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.022 142336 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.022 142336 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.022 142336 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.022 142336 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.023 142336 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.023 142336 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.023 142336 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.023 142336 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.023 142336 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.023 142336 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.023 142336 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.023 142336 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.023 142336 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.024 142336 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.024 142336 DEBUG neutron.agent.ovn.metadata_agent [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.024 142336 DEBUG neutron.agent.ovn.metadata_agent [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.024 142336 DEBUG neutron.agent.ovn.metadata_agent [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.024 142336 DEBUG neutron.agent.ovn.metadata_agent [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.024 142336 DEBUG neutron.agent.ovn.metadata_agent [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.024 142336 DEBUG neutron.agent.ovn.metadata_agent [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.024 142336 DEBUG neutron.agent.ovn.metadata_agent [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.024 142336 DEBUG neutron.agent.ovn.metadata_agent [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.024 142336 DEBUG neutron.agent.ovn.metadata_agent [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.025 142336 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.025 142336 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.025 142336 DEBUG neutron.agent.ovn.metadata_agent [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.025 142336 DEBUG neutron.agent.ovn.metadata_agent [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.025 142336 DEBUG neutron.agent.ovn.metadata_agent [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.025 142336 DEBUG neutron.agent.ovn.metadata_agent [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.025 142336 DEBUG neutron.agent.ovn.metadata_agent [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.025 142336 DEBUG neutron.agent.ovn.metadata_agent [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.025 142336 DEBUG neutron.agent.ovn.metadata_agent [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.026 142336 DEBUG neutron.agent.ovn.metadata_agent [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.026 142336 DEBUG neutron.agent.ovn.metadata_agent [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.026 142336 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.026 142336 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.026 142336 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.026 142336 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.026 142336 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.026 142336 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.026 142336 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.026 142336 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.027 142336 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.027 142336 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.027 142336 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.027 142336 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.027 142336 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.027 142336 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.027 142336 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.027 142336 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.027 142336 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.027 142336 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.028 142336 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.028 142336 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.028 142336 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.028 142336 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.028 142336 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.028 142336 DEBUG neutron.agent.ovn.metadata_agent [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.028 142336 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.028 142336 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.028 142336 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.029 142336 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.029 142336 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.029 142336 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.029 142336 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.029 142336 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.029 142336 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.029 142336 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.029 142336 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.029 142336 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.029 142336 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.030 142336 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.030 142336 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.030 142336 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.030 142336 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.030 142336 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.030 142336 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_connection          = ssl:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.030 142336 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.030 142336 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.030 142336 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.031 142336 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.031 142336 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.031 142336 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.031 142336 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.031 142336 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.031 142336 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.031 142336 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.031 142336 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.031 142336 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.032 142336 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.032 142336 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.032 142336 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.032 142336 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.032 142336 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.032 142336 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.032 142336 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.032 142336 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.032 142336 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.033 142336 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.033 142336 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.033 142336 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.033 142336 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.033 142336 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.033 142336 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.033 142336 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.033 142336 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.033 142336 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.033 142336 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.034 142336 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.034 142336 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.034 142336 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.034 142336 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.034 142336 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.034 142336 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.034 142336 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.034 142336 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.034 142336 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.035 142336 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.035 142336 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.035 142336 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.035 142336 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.035 142336 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.035 142336 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.035 142336 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.035 142336 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
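[annotation] The DEBUG block ending in the asterisk row above is oslo.config's log_opt_values dump: with debug = True the agent prints every registered option as <group>.<option> = <value> at startup, masking secrets such as transport_url as ****. The numeric lists in the privsep_* groups (privsep_conntrack.capabilities = [12], privsep_namespace.capabilities = [21], privsep_link.capabilities = [12, 21]) are Linux capability numbers: 12 is CAP_NET_ADMIN, 21 is CAP_SYS_ADMIN. A minimal stdlib sketch for pulling these lines back into a dict follows; the regex is an assumption fitted to this exact line format, not a neutron or oslo API:

```python
import re
import sys

# Fitted to the dump lines above, e.g.:
#   "... DEBUG <logger> [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values <file>:2609"
OPT_RE = re.compile(r"\[-\] (?P<key>\S+)\s*= (?P<value>.*?) log_opt_values ")

# Capability numbers that appear in the privsep_* groups above.
CAP_NAMES = {12: "CAP_NET_ADMIN", 21: "CAP_SYS_ADMIN"}

def parse_config_dump(lines):
    """Collect '<group>.<option>': '<value>' pairs from a log excerpt."""
    opts = {}
    for line in lines:
        m = OPT_RE.search(line)
        if m:
            opts[m.group("key")] = m.group("value")
    return opts

if __name__ == "__main__":
    # Example: journalctl -u ovn-metadata-agent | python3 parse_dump.py
    for key, value in sorted(parse_config_dump(sys.stdin).items()):
        print(f"{key} = {value}")
```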
Nov 24 09:38:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.043 142336 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Bridge.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Nov 24 09:38:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.043 142336 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Port.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Nov 24 09:38:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.043 142336 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Interface.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Nov 24 09:38:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.044 142336 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connecting...
Nov 24 09:38:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.044 142336 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connected
Nov 24 09:38:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.056 142336 DEBUG neutron.agent.ovn.metadata.agent [-] Loaded chassis name 803b139a-7fca-4549-8597-645cf677225d (UUID: 803b139a-7fca-4549-8597-645cf677225d) and ovn bridge br-int. _load_config /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:309
Nov 24 09:38:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.077 142336 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry
Nov 24 09:38:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.077 142336 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87
Nov 24 09:38:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.077 142336 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Nov 24 09:38:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.077 142336 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Chassis_Private.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Nov 24 09:38:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.080 142336 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Nov 24 09:38:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.087 142336 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected
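[annotation] Here the agent's southbound OVSDB IDL connects over SSL using exactly the ovn.ovn_sb_* material shown in the dump above (key, certificate, CA, and the ssl:ovsdbserver-sb.openstack.svc:6642 endpoint). As a hedged cross-check, the same endpoint can usually be queried by hand with the ovn-sbctl client and the same TLS files; a sketch, assuming ovn-sbctl is installed on the node:

```python
import subprocess

# Endpoint and TLS material copied verbatim from the config dump above.
SB_DB = "ssl:ovsdbserver-sb.openstack.svc:6642"
KEY = "/etc/pki/tls/private/ovndb.key"
CERT = "/etc/pki/tls/certs/ovndb.crt"
CA = "/etc/pki/tls/certs/ovndbca.crt"

def list_chassis():
    """Dump Chassis rows from the OVN southbound DB (manual cross-check)."""
    result = subprocess.run(
        ["ovn-sbctl", "--db", SB_DB, "-p", KEY, "-c", CERT, "-C", CA,
         "list", "Chassis"],
        check=True, capture_output=True, text=True,
    )
    return result.stdout
```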
Nov 24 09:38:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.094 142336 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched CREATE: ChassisPrivateCreateEvent(events=('create',), table='Chassis_Private', conditions=(('name', '=', '803b139a-7fca-4549-8597-645cf677225d'),), old_conditions=None), priority=20 to row=Chassis_Private(chassis=[<ovs.db.idl.Row object at 0x7f5c78678ac0>], external_ids={}, name=803b139a-7fca-4549-8597-645cf677225d, nb_cfg_timestamp=1763977033475, nb_cfg=1) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 24 09:38:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.095 142336 DEBUG neutron_lib.callbacks.manager [-] Subscribe: <bound method MetadataProxyHandler.post_fork_initialize of <neutron.agent.ovn.metadata.server.MetadataProxyHandler object at 0x7f5c78675f70>> process after_init 55550000, False subscribe /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:52
Nov 24 09:38:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.096 142336 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 24 09:38:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.096 142336 DEBUG oslo_concurrency.lockutils [-] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 24 09:38:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.096 142336 DEBUG oslo_concurrency.lockutils [-] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 24 09:38:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.096 142336 INFO oslo_service.service [-] Starting 1 workers
Nov 24 09:38:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.100 142336 DEBUG oslo_service.service [-] Started child 142471 _start_child /usr/lib/python3.9/site-packages/oslo_service/service.py:575
Nov 24 09:38:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.103 142471 DEBUG neutron_lib.callbacks.manager [-] Publish callbacks ['neutron.agent.ovn.metadata.server.MetadataProxyHandler.post_fork_initialize-955359'] for process (None), after_init _notify_loop /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:184
Nov 24 09:38:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.103 142336 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.namespace_cmd', '--privsep_sock_path', '/tmp/tmp0o2q9_m5/privsep.sock']
Nov 24 09:38:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.123 142471 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry
Nov 24 09:38:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.123 142471 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87
Nov 24 09:38:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.123 142471 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Nov 24 09:38:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.126 142471 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Nov 24 09:38:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.132 142471 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected
Nov 24 09:38:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.139 142471 INFO eventlet.wsgi.server [-] (142471) wsgi starting up on http:/var/lib/neutron/metadata_proxy
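[annotation] The odd-looking "http:/var/lib/neutron/metadata_proxy" above is not a truncated URL: the worker binds a UNIX socket (metadata_proxy_socket in the option dump further down), and eventlet's wsgi logger apparently prints the socket path in the URL position. A stdlib probe of that listener, assuming only that the socket path from the config is live:

```python
import socket

SOCK = "/var/lib/neutron/metadata_proxy"  # metadata_proxy_socket from the dump

def probe_metadata_proxy():
    """Send a bare HTTP request over the agent's UNIX socket.

    Without the OVN-supplied instance headers the proxy is expected to
    reject the request; the point is only to confirm the listener is up.
    """
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
        s.connect(SOCK)
        s.sendall(b"GET / HTTP/1.0\r\nHost: metadata\r\n\r\n")
        return s.recv(4096).decode(errors="replace").splitlines()[0]
```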
Nov 24 09:38:20 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:38:20 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:38:20 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:38:20.625 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
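[annotation] The interleaved radosgw lines are Ceph RGW's beast frontend answering anonymous "HEAD / HTTP/1.0" requests with 200; the pattern (anonymous user, HEAD /, sub-millisecond latency, rotating client IPs 192.168.122.100/.102) is the usual signature of load-balancer health checks. A sketch of an equivalent probe; the RGW bind host and port are not shown in this excerpt, so the values below are placeholders:

```python
import http.client

def rgw_health(host="rgw.example.internal", port=8080):
    """Anonymous HEAD / probe like the log entries above.

    host/port are placeholders: the beast frontend's bind address does
    not appear in this log excerpt.
    """
    conn = http.client.HTTPConnection(host, port, timeout=2)
    try:
        conn.request("HEAD", "/")
        return conn.getresponse().status  # 200 in the entries above
    finally:
        conn.close()
```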
Nov 24 09:38:20 compute-1 kernel: capability: warning: `privsep-helper' uses deprecated v2 capabilities in a way that may be insecure
Nov 24 09:38:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.758 142336 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Nov 24 09:38:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.758 142336 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmp0o2q9_m5/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Nov 24 09:38:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.640 142476 INFO oslo.privsep.daemon [-] privsep daemon starting
Nov 24 09:38:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.644 142476 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Nov 24 09:38:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.648 142476 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_SYS_ADMIN/CAP_SYS_ADMIN/none
Nov 24 09:38:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.649 142476 INFO oslo.privsep.daemon [-] privsep daemon running as pid 142476
Nov 24 09:38:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.761 142476 DEBUG oslo.privsep.daemon [-] privsep: reply[90395d3a-65b0-4b47-a7a7-554ba30cd0e6]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
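[annotation] The block above is the oslo.privsep handshake: the agent (pid 142336) runs sudo + neutron-rootwrap to start privsep-helper for the neutron.privileged.namespace_cmd context, and the helper (pid 142476) reports uid/gid 0/0 before confining itself to CAP_SYS_ADMIN, matching privsep_namespace.capabilities = [21] from the dump; the kernel's "deprecated v2 capabilities" warning is emitted by this helper. The helper's messages carry earlier timestamps than the parent's "Spawned new privsep daemon" line because the two processes flush their log buffers independently. A hedged sketch of how such a context is declared with oslo.privsep (illustrative names, not neutron's actual module):

```python
from oslo_privsep import capabilities as caps
from oslo_privsep import priv_context

# Illustrative context only -- not neutron's real definition. It mirrors
# privsep_namespace.capabilities = [21] (CAP_SYS_ADMIN) from the dump.
namespace_cmd = priv_context.PrivContext(
    "demo_privileged",
    cfg_section="privsep_namespace",
    pypath=__name__ + ".namespace_cmd",
    capabilities=[caps.CAP_SYS_ADMIN],
)

@namespace_cmd.entrypoint
def list_netns():
    """Executed inside the privsep daemon, i.e. with CAP_SYS_ADMIN."""
    import os
    return os.listdir("/var/run/netns")
```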
Nov 24 09:38:21 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:38:21 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:38:21 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:38:21.023 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:38:21 compute-1 ceph-mon[80009]: pgmap v333: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Nov 24 09:38:21 compute-1 sudo[142481]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 09:38:21 compute-1 sudo[142481]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:38:21 compute-1 sudo[142481]: pam_unix(sudo:session): session closed for user root
Nov 24 09:38:21 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.257 142476 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 09:38:21 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.257 142476 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 09:38:21 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.257 142476 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 09:38:21 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.789 142476 DEBUG oslo.privsep.daemon [-] privsep: reply[e9a2d89e-80db-49e4-858c-5cdcc567210a]: (4, []) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 09:38:21 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.791 142336 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbAddCommand(_result=None, table=Chassis_Private, record=803b139a-7fca-4549-8597-645cf677225d, column=external_ids, values=({'neutron:ovn-metadata-id': 'f0c01ca3-a3f1-5efc-8a96-3a1db00f23b0'},)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 24 09:38:21 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.813 142336 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=803b139a-7fca-4549-8597-645cf677225d, col_values=(('external_ids', {'neutron:ovn-bridge': 'br-int'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
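[annotation] The two single-command transactions above register the agent in the southbound DB: DbAddCommand stores a neutron:ovn-metadata-id under the chassis record's external_ids, and DbSetCommand records the integration bridge (neutron:ovn-bridge = br-int). A rough manual equivalent via ovn-sbctl, as a sketch; note the key itself contains a colon, and quoting requirements may vary by shell and ovn-sbctl version:

```python
import subprocess

CHASSIS = "803b139a-7fca-4549-8597-645cf677225d"  # record UUID from the log

def set_ovn_bridge(bridge="br-int"):
    """Rough manual equivalent of the DbSetCommand transaction above."""
    subprocess.run(
        ["ovn-sbctl", "set", "Chassis_Private", CHASSIS,
         # column:key=value; the key ("neutron:ovn-bridge") contains a colon
         f"external_ids:neutron:ovn-bridge={bridge}"],
        check=True,
    )
```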
Nov 24 09:38:21 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.819 142336 DEBUG oslo_service.service [-] Full set of CONF: wait /usr/lib/python3.9/site-packages/oslo_service/service.py:649
Nov 24 09:38:21 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.819 142336 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Nov 24 09:38:21 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.819 142336 DEBUG oslo_service.service [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Nov 24 09:38:21 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.819 142336 DEBUG oslo_service.service [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Nov 24 09:38:21 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.819 142336 DEBUG oslo_service.service [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Nov 24 09:38:21 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.819 142336 DEBUG oslo_service.service [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Nov 24 09:38:21 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.820 142336 DEBUG oslo_service.service [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:21 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.820 142336 DEBUG oslo_service.service [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:21 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.820 142336 DEBUG oslo_service.service [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:21 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.820 142336 DEBUG oslo_service.service [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:21 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.820 142336 DEBUG oslo_service.service [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:21 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.820 142336 DEBUG oslo_service.service [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:21 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.820 142336 DEBUG oslo_service.service [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:21 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.820 142336 DEBUG oslo_service.service [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:21 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.820 142336 DEBUG oslo_service.service [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:21 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.821 142336 DEBUG oslo_service.service [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:21 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.821 142336 DEBUG oslo_service.service [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:21 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.821 142336 DEBUG oslo_service.service [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:21 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.821 142336 DEBUG oslo_service.service [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:21 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.821 142336 DEBUG oslo_service.service [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:21 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.821 142336 DEBUG oslo_service.service [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:21 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.821 142336 DEBUG oslo_service.service [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:21 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.821 142336 DEBUG oslo_service.service [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:21 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.822 142336 DEBUG oslo_service.service [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:21 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.822 142336 DEBUG oslo_service.service [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:21 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.822 142336 DEBUG oslo_service.service [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:21 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.822 142336 DEBUG oslo_service.service [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:21 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.822 142336 DEBUG oslo_service.service [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:21 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.822 142336 DEBUG oslo_service.service [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:21 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.822 142336 DEBUG oslo_service.service [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:21 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.822 142336 DEBUG oslo_service.service [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:21 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.823 142336 DEBUG oslo_service.service [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:21 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.823 142336 DEBUG oslo_service.service [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:21 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.823 142336 DEBUG oslo_service.service [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:21 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.823 142336 DEBUG oslo_service.service [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:21 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.823 142336 DEBUG oslo_service.service [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:21 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.823 142336 DEBUG oslo_service.service [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:21 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.823 142336 DEBUG oslo_service.service [-] host                           = compute-1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:21 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.823 142336 DEBUG oslo_service.service [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:21 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.824 142336 DEBUG oslo_service.service [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:21 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.824 142336 DEBUG oslo_service.service [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:21 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.824 142336 DEBUG oslo_service.service [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:21 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.824 142336 DEBUG oslo_service.service [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:21 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.824 142336 DEBUG oslo_service.service [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:21 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.824 142336 DEBUG oslo_service.service [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:21 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.824 142336 DEBUG oslo_service.service [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:21 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.824 142336 DEBUG oslo_service.service [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:21 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.824 142336 DEBUG oslo_service.service [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:21 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.824 142336 DEBUG oslo_service.service [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:21 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.824 142336 DEBUG oslo_service.service [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:21 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.825 142336 DEBUG oslo_service.service [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:21 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.825 142336 DEBUG oslo_service.service [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:21 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.825 142336 DEBUG oslo_service.service [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:21 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.825 142336 DEBUG oslo_service.service [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:21 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.825 142336 DEBUG oslo_service.service [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:21 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.825 142336 DEBUG oslo_service.service [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:21 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.825 142336 DEBUG oslo_service.service [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:21 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.825 142336 DEBUG oslo_service.service [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:21 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.825 142336 DEBUG oslo_service.service [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:21 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.826 142336 DEBUG oslo_service.service [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:21 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.826 142336 DEBUG oslo_service.service [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:21 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.826 142336 DEBUG oslo_service.service [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:21 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.826 142336 DEBUG oslo_service.service [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:21 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.826 142336 DEBUG oslo_service.service [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:21 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.826 142336 DEBUG oslo_service.service [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:21 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.826 142336 DEBUG oslo_service.service [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:21 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.827 142336 DEBUG oslo_service.service [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:21 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.827 142336 DEBUG oslo_service.service [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:21 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.827 142336 DEBUG oslo_service.service [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:21 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.827 142336 DEBUG oslo_service.service [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:21 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.827 142336 DEBUG oslo_service.service [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:21 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.827 142336 DEBUG oslo_service.service [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:21 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.827 142336 DEBUG oslo_service.service [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:21 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.827 142336 DEBUG oslo_service.service [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:21 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.828 142336 DEBUG oslo_service.service [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:21 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.828 142336 DEBUG oslo_service.service [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:21 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.828 142336 DEBUG oslo_service.service [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:21 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.828 142336 DEBUG oslo_service.service [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:21 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.828 142336 DEBUG oslo_service.service [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:21 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.828 142336 DEBUG oslo_service.service [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:21 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.828 142336 DEBUG oslo_service.service [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:21 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.828 142336 DEBUG oslo_service.service [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:21 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.828 142336 DEBUG oslo_service.service [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:21 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.829 142336 DEBUG oslo_service.service [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:21 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.829 142336 DEBUG oslo_service.service [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:21 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.829 142336 DEBUG oslo_service.service [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:21 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.829 142336 DEBUG oslo_service.service [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:21 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.829 142336 DEBUG oslo_service.service [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:21 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.829 142336 DEBUG oslo_service.service [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:21 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.829 142336 DEBUG oslo_service.service [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:21 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.829 142336 DEBUG oslo_service.service [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:21 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.829 142336 DEBUG oslo_service.service [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:21 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.830 142336 DEBUG oslo_service.service [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:21 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.830 142336 DEBUG oslo_service.service [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:21 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.830 142336 DEBUG oslo_service.service [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:21 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.830 142336 DEBUG oslo_service.service [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:21 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.830 142336 DEBUG oslo_service.service [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:21 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.830 142336 DEBUG oslo_service.service [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:21 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.830 142336 DEBUG oslo_service.service [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:21 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.830 142336 DEBUG oslo_service.service [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:21 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.830 142336 DEBUG oslo_service.service [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:21 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.831 142336 DEBUG oslo_service.service [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:21 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.831 142336 DEBUG oslo_service.service [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:21 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.831 142336 DEBUG oslo_service.service [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:21 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.831 142336 DEBUG oslo_service.service [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:21 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.831 142336 DEBUG oslo_service.service [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:21 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.831 142336 DEBUG oslo_service.service [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:21 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.831 142336 DEBUG oslo_service.service [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:21 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.831 142336 DEBUG oslo_service.service [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:21 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.831 142336 DEBUG oslo_service.service [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:21 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.832 142336 DEBUG oslo_service.service [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:21 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.832 142336 DEBUG oslo_service.service [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:21 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.832 142336 DEBUG oslo_service.service [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:21 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.832 142336 DEBUG oslo_service.service [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:21 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.832 142336 DEBUG oslo_service.service [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:21 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.832 142336 DEBUG oslo_service.service [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:21 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.832 142336 DEBUG oslo_service.service [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:21 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.832 142336 DEBUG oslo_service.service [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:21 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.832 142336 DEBUG oslo_service.service [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:21 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.833 142336 DEBUG oslo_service.service [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:21 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.833 142336 DEBUG oslo_service.service [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:21 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.833 142336 DEBUG oslo_service.service [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:21 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.833 142336 DEBUG oslo_service.service [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:21 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.833 142336 DEBUG oslo_service.service [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:21 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.833 142336 DEBUG oslo_service.service [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:21 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.833 142336 DEBUG oslo_service.service [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:21 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.833 142336 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:21 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.834 142336 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:21 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.834 142336 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:21 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.834 142336 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:21 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.834 142336 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:21 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.834 142336 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:21 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.834 142336 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:21 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.834 142336 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:21 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.834 142336 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:21 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.834 142336 DEBUG oslo_service.service [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:21 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.835 142336 DEBUG oslo_service.service [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:21 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.835 142336 DEBUG oslo_service.service [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:21 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.835 142336 DEBUG oslo_service.service [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:21 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.835 142336 DEBUG oslo_service.service [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:21 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.835 142336 DEBUG oslo_service.service [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:21 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.835 142336 DEBUG oslo_service.service [-] privsep.thread_pool_size       = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:21 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.835 142336 DEBUG oslo_service.service [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:21 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.835 142336 DEBUG oslo_service.service [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:21 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.835 142336 DEBUG oslo_service.service [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:21 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.835 142336 DEBUG oslo_service.service [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:21 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.836 142336 DEBUG oslo_service.service [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:21 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.836 142336 DEBUG oslo_service.service [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:21 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.836 142336 DEBUG oslo_service.service [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:21 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.836 142336 DEBUG oslo_service.service [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:21 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.836 142336 DEBUG oslo_service.service [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:21 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.836 142336 DEBUG oslo_service.service [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:21 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.836 142336 DEBUG oslo_service.service [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:21 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.836 142336 DEBUG oslo_service.service [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:21 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.836 142336 DEBUG oslo_service.service [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:21 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.836 142336 DEBUG oslo_service.service [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:21 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.837 142336 DEBUG oslo_service.service [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:21 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.837 142336 DEBUG oslo_service.service [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:21 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.837 142336 DEBUG oslo_service.service [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:21 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.837 142336 DEBUG oslo_service.service [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:21 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.837 142336 DEBUG oslo_service.service [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:21 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.837 142336 DEBUG oslo_service.service [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:21 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.837 142336 DEBUG oslo_service.service [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:21 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.837 142336 DEBUG oslo_service.service [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:21 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.837 142336 DEBUG oslo_service.service [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:21 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.837 142336 DEBUG oslo_service.service [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:21 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.838 142336 DEBUG oslo_service.service [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:21 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.838 142336 DEBUG oslo_service.service [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:21 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.838 142336 DEBUG oslo_service.service [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:21 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.838 142336 DEBUG oslo_service.service [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:21 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.838 142336 DEBUG oslo_service.service [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:21 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.838 142336 DEBUG oslo_service.service [-] privsep_link.thread_pool_size  = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:21 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.838 142336 DEBUG oslo_service.service [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:21 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.838 142336 DEBUG oslo_service.service [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:21 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.838 142336 DEBUG oslo_service.service [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:21 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.838 142336 DEBUG oslo_service.service [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:21 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.839 142336 DEBUG oslo_service.service [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:21 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.839 142336 DEBUG oslo_service.service [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:21 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.839 142336 DEBUG oslo_service.service [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:21 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.839 142336 DEBUG oslo_service.service [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:21 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.839 142336 DEBUG oslo_service.service [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:21 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.839 142336 DEBUG oslo_service.service [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:21 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.839 142336 DEBUG oslo_service.service [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:21 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.839 142336 DEBUG oslo_service.service [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:21 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.840 142336 DEBUG oslo_service.service [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:21 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.840 142336 DEBUG oslo_service.service [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:21 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.840 142336 DEBUG oslo_service.service [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:21 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.840 142336 DEBUG oslo_service.service [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:21 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.840 142336 DEBUG oslo_service.service [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:21 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.840 142336 DEBUG oslo_service.service [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:21 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.840 142336 DEBUG oslo_service.service [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:21 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.840 142336 DEBUG oslo_service.service [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:21 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.840 142336 DEBUG oslo_service.service [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:21 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.840 142336 DEBUG oslo_service.service [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:21 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.841 142336 DEBUG oslo_service.service [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:21 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.841 142336 DEBUG oslo_service.service [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:21 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.841 142336 DEBUG oslo_service.service [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:21 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.841 142336 DEBUG oslo_service.service [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:21 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.841 142336 DEBUG oslo_service.service [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:21 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.841 142336 DEBUG oslo_service.service [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:21 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.841 142336 DEBUG oslo_service.service [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:21 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.841 142336 DEBUG oslo_service.service [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:21 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.841 142336 DEBUG oslo_service.service [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:21 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.841 142336 DEBUG oslo_service.service [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:21 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.842 142336 DEBUG oslo_service.service [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:21 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.842 142336 DEBUG oslo_service.service [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:21 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.842 142336 DEBUG oslo_service.service [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:21 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.842 142336 DEBUG oslo_service.service [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:21 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.842 142336 DEBUG oslo_service.service [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:21 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.842 142336 DEBUG oslo_service.service [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:21 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.842 142336 DEBUG oslo_service.service [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:21 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.842 142336 DEBUG oslo_service.service [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:21 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.842 142336 DEBUG oslo_service.service [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:21 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.842 142336 DEBUG oslo_service.service [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:21 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.843 142336 DEBUG oslo_service.service [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:21 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.843 142336 DEBUG oslo_service.service [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:21 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.843 142336 DEBUG oslo_service.service [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:21 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.843 142336 DEBUG oslo_service.service [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:21 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.843 142336 DEBUG oslo_service.service [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:21 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.843 142336 DEBUG oslo_service.service [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:21 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.843 142336 DEBUG oslo_service.service [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:21 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.843 142336 DEBUG oslo_service.service [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:21 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.843 142336 DEBUG oslo_service.service [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:21 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.843 142336 DEBUG oslo_service.service [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:21 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.844 142336 DEBUG oslo_service.service [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:21 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.844 142336 DEBUG oslo_service.service [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:21 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.844 142336 DEBUG oslo_service.service [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:21 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.844 142336 DEBUG oslo_service.service [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:21 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.844 142336 DEBUG oslo_service.service [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:21 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.844 142336 DEBUG oslo_service.service [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:21 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.844 142336 DEBUG oslo_service.service [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:21 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.844 142336 DEBUG oslo_service.service [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:21 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.844 142336 DEBUG oslo_service.service [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:21 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.844 142336 DEBUG oslo_service.service [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:21 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.845 142336 DEBUG oslo_service.service [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:21 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.845 142336 DEBUG oslo_service.service [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:21 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.845 142336 DEBUG oslo_service.service [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:21 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.845 142336 DEBUG oslo_service.service [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:21 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.845 142336 DEBUG oslo_service.service [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:21 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.845 142336 DEBUG oslo_service.service [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:21 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.845 142336 DEBUG oslo_service.service [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:21 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.845 142336 DEBUG oslo_service.service [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:21 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.845 142336 DEBUG oslo_service.service [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:21 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.846 142336 DEBUG oslo_service.service [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:21 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.846 142336 DEBUG oslo_service.service [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:21 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.846 142336 DEBUG oslo_service.service [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:21 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.846 142336 DEBUG oslo_service.service [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:21 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.846 142336 DEBUG oslo_service.service [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:21 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.846 142336 DEBUG oslo_service.service [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:21 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.846 142336 DEBUG oslo_service.service [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:21 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.846 142336 DEBUG oslo_service.service [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:21 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.846 142336 DEBUG oslo_service.service [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:21 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.846 142336 DEBUG oslo_service.service [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:21 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.847 142336 DEBUG oslo_service.service [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:21 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.847 142336 DEBUG oslo_service.service [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:21 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.847 142336 DEBUG oslo_service.service [-] ovn.ovn_sb_connection          = ssl:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:21 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.847 142336 DEBUG oslo_service.service [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:21 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.847 142336 DEBUG oslo_service.service [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:21 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.847 142336 DEBUG oslo_service.service [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:21 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.847 142336 DEBUG oslo_service.service [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:21 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.847 142336 DEBUG oslo_service.service [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:21 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.847 142336 DEBUG oslo_service.service [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:21 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.847 142336 DEBUG oslo_service.service [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:21 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.848 142336 DEBUG oslo_service.service [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:21 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.848 142336 DEBUG oslo_service.service [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:21 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.848 142336 DEBUG oslo_service.service [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:21 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.848 142336 DEBUG oslo_service.service [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:21 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.848 142336 DEBUG oslo_service.service [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:21 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.848 142336 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:21 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.848 142336 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:21 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.848 142336 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:21 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.848 142336 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:21 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.849 142336 DEBUG oslo_service.service [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:21 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.849 142336 DEBUG oslo_service.service [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:21 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.849 142336 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:21 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.849 142336 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:21 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.849 142336 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:21 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.849 142336 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:21 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.849 142336 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:21 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.849 142336 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:21 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.849 142336 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:21 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.850 142336 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:21 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.850 142336 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:21 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.850 142336 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:21 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.850 142336 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:21 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.850 142336 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:21 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.850 142336 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:21 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.850 142336 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:21 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.850 142336 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:21 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.850 142336 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:21 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.851 142336 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:21 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.851 142336 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:21 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.851 142336 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:21 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.851 142336 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:21 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.851 142336 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:21 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.851 142336 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:21 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.851 142336 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:21 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.851 142336 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:21 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.851 142336 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:21 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.851 142336 DEBUG oslo_service.service [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:21 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.852 142336 DEBUG oslo_service.service [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:21 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.852 142336 DEBUG oslo_service.service [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:21 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.852 142336 DEBUG oslo_service.service [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:21 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.852 142336 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Nov 24 09:38:22 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:38:22 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:38:22 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:38:22.629 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:38:22 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:38:23 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:38:23 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:38:23 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:38:23.027 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:38:23 compute-1 ceph-mon[80009]: pgmap v334: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 853 B/s wr, 2 op/s
Nov 24 09:38:24 compute-1 ceph-mon[80009]: pgmap v335: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 853 B/s wr, 2 op/s
Nov 24 09:38:24 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-haproxy-nfs-cephfs-compute-1-rsdpvy[85975]: [WARNING] 327/093824 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Nov 24 09:38:24 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[141451]: 24/11/2025 09:38:24 : epoch 69242784 : compute-1 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Nov 24 09:38:24 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[141451]: 24/11/2025 09:38:24 : epoch 69242784 : compute-1 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Nov 24 09:38:24 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[141451]: 24/11/2025 09:38:24 : epoch 69242784 : compute-1 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Nov 24 09:38:24 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[141451]: 24/11/2025 09:38:24 : epoch 69242784 : compute-1 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Nov 24 09:38:24 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[141451]: 24/11/2025 09:38:24 : epoch 69242784 : compute-1 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Nov 24 09:38:24 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[141451]: 24/11/2025 09:38:24 : epoch 69242784 : compute-1 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Nov 24 09:38:24 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[141451]: 24/11/2025 09:38:24 : epoch 69242784 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Nov 24 09:38:24 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[141451]: 24/11/2025 09:38:24 : epoch 69242784 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 24 09:38:24 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[141451]: 24/11/2025 09:38:24 : epoch 69242784 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 24 09:38:24 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[141451]: 24/11/2025 09:38:24 : epoch 69242784 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 24 09:38:24 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[141451]: 24/11/2025 09:38:24 : epoch 69242784 : compute-1 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Nov 24 09:38:24 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[141451]: 24/11/2025 09:38:24 : epoch 69242784 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 24 09:38:24 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[141451]: 24/11/2025 09:38:24 : epoch 69242784 : compute-1 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Nov 24 09:38:24 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[141451]: 24/11/2025 09:38:24 : epoch 69242784 : compute-1 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Nov 24 09:38:24 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[141451]: 24/11/2025 09:38:24 : epoch 69242784 : compute-1 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Nov 24 09:38:24 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[141451]: 24/11/2025 09:38:24 : epoch 69242784 : compute-1 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Nov 24 09:38:24 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[141451]: 24/11/2025 09:38:24 : epoch 69242784 : compute-1 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Nov 24 09:38:24 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[141451]: 24/11/2025 09:38:24 : epoch 69242784 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Nov 24 09:38:24 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[141451]: 24/11/2025 09:38:24 : epoch 69242784 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Nov 24 09:38:24 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[141451]: 24/11/2025 09:38:24 : epoch 69242784 : compute-1 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Nov 24 09:38:24 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[141451]: 24/11/2025 09:38:24 : epoch 69242784 : compute-1 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Nov 24 09:38:24 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[141451]: 24/11/2025 09:38:24 : epoch 69242784 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Nov 24 09:38:24 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[141451]: 24/11/2025 09:38:24 : epoch 69242784 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Nov 24 09:38:24 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[141451]: 24/11/2025 09:38:24 : epoch 69242784 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Nov 24 09:38:24 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[141451]: 24/11/2025 09:38:24 : epoch 69242784 : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Nov 24 09:38:24 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[141451]: 24/11/2025 09:38:24 : epoch 69242784 : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Nov 24 09:38:24 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[141451]: 24/11/2025 09:38:24 : epoch 69242784 : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Nov 24 09:38:24 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:38:24 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:38:24 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:38:24.632 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:38:25 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:38:25 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:38:25 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:38:25.031 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:38:25 compute-1 sshd-session[142520]: Accepted publickey for zuul from 192.168.122.30 port 59306 ssh2: ECDSA SHA256:MeSde0OmmlmFVnLWx/OKNxgeUUFhxUB3MA0eUyH5QEE
Nov 24 09:38:25 compute-1 systemd-logind[823]: New session 52 of user zuul.
Nov 24 09:38:25 compute-1 systemd[1]: Started Session 52 of User zuul.
Nov 24 09:38:25 compute-1 sshd-session[142520]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 24 09:38:25 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[141451]: 24/11/2025 09:38:25 : epoch 69242784 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fae44000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:38:25 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[141451]: 24/11/2025 09:38:25 : epoch 69242784 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fae38001c00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:38:26 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[141451]: 24/11/2025 09:38:26 : epoch 69242784 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fae20000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:38:26 compute-1 python3.9[142678]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 24 09:38:26 compute-1 ceph-mon[80009]: pgmap v336: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 853 B/s wr, 2 op/s
Nov 24 09:38:26 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:38:26 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:38:26 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:38:26.634 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:38:27 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:38:27 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:38:27 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:38:27.033 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:38:27 compute-1 sudo[142832]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zlfrzvwgxarwexacpxodzjhfvcihzfqt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977107.0564408-63-67889896395416/AnsiballZ_command.py'
Nov 24 09:38:27 compute-1 sudo[142832]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:38:27 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[141451]: 24/11/2025 09:38:27 : epoch 69242784 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fae18000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:38:27 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-haproxy-nfs-cephfs-compute-1-rsdpvy[85975]: [WARNING] 327/093827 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 2 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Nov 24 09:38:27 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[141451]: 24/11/2025 09:38:27 : epoch 69242784 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fae30000fa0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:38:27 compute-1 python3.9[142834]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --filter name=^nova_virtlogd$ --format \{\{.Names\}\} _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 09:38:27 compute-1 sudo[142832]: pam_unix(sudo:session): session closed for user root
Nov 24 09:38:27 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:38:28 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[141451]: 24/11/2025 09:38:28 : epoch 69242784 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fae30000fa0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:38:28 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:38:28 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:38:28 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:38:28.637 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:38:28 compute-1 ceph-mon[80009]: pgmap v337: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 938 B/s wr, 3 op/s
Nov 24 09:38:28 compute-1 sudo[142998]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zkozantuwobzzwzrrpwwxtudcdfpmxyx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977108.2890017-96-256313991341996/AnsiballZ_systemd_service.py'
Nov 24 09:38:28 compute-1 sudo[142998]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:38:29 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:38:29 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:38:29 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:38:29.037 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:38:29 compute-1 python3.9[143000]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 24 09:38:29 compute-1 systemd[1]: Reloading.
Nov 24 09:38:29 compute-1 systemd-rc-local-generator[143028]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 09:38:29 compute-1 systemd-sysv-generator[143031]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 09:38:29 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[141451]: 24/11/2025 09:38:29 : epoch 69242784 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fae200016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:38:29 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[141451]: 24/11/2025 09:38:29 : epoch 69242784 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fae180016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:38:29 compute-1 sudo[142998]: pam_unix(sudo:session): session closed for user root
Nov 24 09:38:30 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[141451]: 24/11/2025 09:38:30 : epoch 69242784 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fae30000fa0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:38:30 compute-1 python3.9[143186]: ansible-ansible.builtin.service_facts Invoked
Nov 24 09:38:30 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 09:38:30 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:38:30 compute-1 network[143203]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 24 09:38:30 compute-1 network[143204]: 'network-scripts' will be removed from distribution in near future.
Nov 24 09:38:30 compute-1 network[143205]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 24 09:38:30 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:38:30 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:38:30 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:38:30.641 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:38:30 compute-1 ceph-mon[80009]: pgmap v338: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 341 B/s wr, 1 op/s
Nov 24 09:38:30 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:38:31 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:38:31 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:38:31 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:38:31.040 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:38:31 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[141451]: 24/11/2025 09:38:31 : epoch 69242784 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fae30000fa0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:38:31 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[141451]: 24/11/2025 09:38:31 : epoch 69242784 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fae200016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:38:32 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[141451]: 24/11/2025 09:38:32 : epoch 69242784 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fae180016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:38:32 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:38:32 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:38:32 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:38:32.643 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:38:32 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[141451]: 24/11/2025 09:38:32 : epoch 69242784 : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 09:38:32 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:38:32 compute-1 ceph-mon[80009]: pgmap v339: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 426 B/s wr, 1 op/s
Nov 24 09:38:33 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:38:33 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:38:33 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:38:33.043 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:38:33 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[141451]: 24/11/2025 09:38:33 : epoch 69242784 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fae30000fa0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:38:33 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[141451]: 24/11/2025 09:38:33 : epoch 69242784 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fae30000fa0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:38:34 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[141451]: 24/11/2025 09:38:34 : epoch 69242784 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fae200016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:38:34 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:38:34 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:38:34 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:38:34.646 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:38:34 compute-1 ceph-mon[80009]: pgmap v340: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 170 B/s wr, 1 op/s
Nov 24 09:38:35 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:38:35 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:38:35 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:38:35.047 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:38:35 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[141451]: 24/11/2025 09:38:35 : epoch 69242784 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fae180016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:38:35 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[141451]: 24/11/2025 09:38:35 : epoch 69242784 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fae30000fa0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:38:35 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[141451]: 24/11/2025 09:38:35 : epoch 69242784 : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 09:38:35 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[141451]: 24/11/2025 09:38:35 : epoch 69242784 : compute-1 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 09:38:35 compute-1 sudo[143468]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uaeowsnbrksjacbvnbruyazqddgcouvp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977115.6926968-153-71203216734492/AnsiballZ_systemd_service.py'
Nov 24 09:38:35 compute-1 sudo[143468]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:38:36 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[141451]: 24/11/2025 09:38:36 : epoch 69242784 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fae380026e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:38:36 compute-1 python3.9[143470]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_libvirt.target state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 24 09:38:36 compute-1 sudo[143468]: pam_unix(sudo:session): session closed for user root
Nov 24 09:38:36 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:38:36 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:38:36 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:38:36.649 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:38:36 compute-1 sudo[143621]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xwsmvyywarfmfkmquxsffwdwpazfjhpw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977116.448798-153-223333305765561/AnsiballZ_systemd_service.py'
Nov 24 09:38:36 compute-1 sudo[143621]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:38:36 compute-1 ceph-mon[80009]: pgmap v341: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 170 B/s wr, 1 op/s
Nov 24 09:38:36 compute-1 python3.9[143623]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtlogd_wrapper.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 24 09:38:37 compute-1 sudo[143621]: pam_unix(sudo:session): session closed for user root
Nov 24 09:38:37 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:38:37 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:38:37 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:38:37.051 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:38:37 compute-1 sudo[143774]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gjqxiyvrscxvwwjozvkwoizvdbtehftd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977117.151901-153-180629021471893/AnsiballZ_systemd_service.py'
Nov 24 09:38:37 compute-1 sudo[143774]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:38:37 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[141451]: 24/11/2025 09:38:37 : epoch 69242784 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fae380026e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:38:37 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[141451]: 24/11/2025 09:38:37 : epoch 69242784 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fae18002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:38:37 compute-1 python3.9[143776]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtnodedevd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 24 09:38:37 compute-1 sudo[143774]: pam_unix(sudo:session): session closed for user root
Nov 24 09:38:37 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:38:38 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[141451]: 24/11/2025 09:38:38 : epoch 69242784 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fae30003730 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:38:38 compute-1 sudo[143928]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hamamelvnbiodrkjmrnxgqownhhhvmtb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977117.901115-153-61018350565966/AnsiballZ_systemd_service.py'
Nov 24 09:38:38 compute-1 sudo[143928]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:38:38 compute-1 python3.9[143930]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtproxyd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 24 09:38:38 compute-1 sudo[143928]: pam_unix(sudo:session): session closed for user root
Nov 24 09:38:38 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:38:38 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:38:38 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:38:38.653 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:38:38 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[141451]: 24/11/2025 09:38:38 : epoch 69242784 : compute-1 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Nov 24 09:38:38 compute-1 sudo[144081]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gllgubbblpwclikeuwyrjyrmbrppdvus ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977118.6522908-153-59970851739507/AnsiballZ_systemd_service.py'
Nov 24 09:38:38 compute-1 sudo[144081]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:38:39 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:38:39 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:38:39 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:38:39.055 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:38:39 compute-1 ceph-mon[80009]: pgmap v342: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 1023 B/s wr, 3 op/s
Nov 24 09:38:39 compute-1 python3.9[144083]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtqemud.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 24 09:38:39 compute-1 sudo[144081]: pam_unix(sudo:session): session closed for user root
Nov 24 09:38:39 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[141451]: 24/11/2025 09:38:39 : epoch 69242784 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fae20002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:38:39 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[141451]: 24/11/2025 09:38:39 : epoch 69242784 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fae380033f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:38:39 compute-1 sudo[144234]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jmjollqtbxfyyrlycijvqxxlwmcsjrmo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977119.3604708-153-170622647235293/AnsiballZ_systemd_service.py'
Nov 24 09:38:39 compute-1 sudo[144234]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:38:39 compute-1 python3.9[144236]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtsecretd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 24 09:38:39 compute-1 sudo[144234]: pam_unix(sudo:session): session closed for user root
Nov 24 09:38:40 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[141451]: 24/11/2025 09:38:40 : epoch 69242784 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fae18002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:38:40 compute-1 sudo[144398]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-okvjiptxngwhbfejihkreobufebcbjca ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977120.1342707-153-120513779457720/AnsiballZ_systemd_service.py'
Nov 24 09:38:40 compute-1 sudo[144398]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:38:40 compute-1 podman[144362]: 2025-11-24 09:38:40.457115701 +0000 UTC m=+0.101801367 container health_status c4a7fece2a8abbd773f184380ebf0443df5298e1c3e7a580495db5c46749acd2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 09:38:40 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:38:40 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:38:40 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:38:40.655 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:38:40 compute-1 python3.9[144404]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtstoraged.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 24 09:38:40 compute-1 sudo[144398]: pam_unix(sudo:session): session closed for user root
Nov 24 09:38:41 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:38:41 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:38:41 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:38:41.059 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:38:41 compute-1 ceph-mon[80009]: pgmap v343: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 938 B/s wr, 2 op/s
Nov 24 09:38:41 compute-1 sudo[144439]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 09:38:41 compute-1 sudo[144439]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:38:41 compute-1 sudo[144439]: pam_unix(sudo:session): session closed for user root
Nov 24 09:38:41 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[141451]: 24/11/2025 09:38:41 : epoch 69242784 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fae30003730 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:38:41 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[141451]: 24/11/2025 09:38:41 : epoch 69242784 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fae30003730 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:38:42 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[141451]: 24/11/2025 09:38:42 : epoch 69242784 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fae380033f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:38:42 compute-1 sudo[144590]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-auxfkkqilmqtfnllmtlnverxykojmind ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977122.2076056-309-189112305912962/AnsiballZ_file.py'
Nov 24 09:38:42 compute-1 sudo[144590]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:38:42 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:38:42 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:38:42 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:38:42.659 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:38:42 compute-1 python3.9[144592]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:38:42 compute-1 sudo[144590]: pam_unix(sudo:session): session closed for user root
Nov 24 09:38:42 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:38:43 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:38:43 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:38:43 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:38:43.062 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:38:43 compute-1 sudo[144742]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kfqprdmlztdgwrnryifsgjfxacaroxxg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977123.0141437-309-244433778380085/AnsiballZ_file.py'
Nov 24 09:38:43 compute-1 sudo[144742]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:38:43 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[141451]: 24/11/2025 09:38:43 : epoch 69242784 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fae18002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:38:43 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[141451]: 24/11/2025 09:38:43 : epoch 69242784 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fae20003430 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:38:43 compute-1 ceph-mon[80009]: pgmap v344: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 1023 B/s wr, 3 op/s
Nov 24 09:38:44 compute-1 python3.9[144744]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:38:44 compute-1 sudo[144742]: pam_unix(sudo:session): session closed for user root
Nov 24 09:38:44 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[141451]: 24/11/2025 09:38:44 : epoch 69242784 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fae30003730 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:38:44 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-haproxy-nfs-cephfs-compute-1-rsdpvy[85975]: [WARNING] 327/093844 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Nov 24 09:38:44 compute-1 sudo[144896]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rztayuqmecokcvzqkmqmniaemwtwsnua ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977124.18432-309-234084685876356/AnsiballZ_file.py'
Nov 24 09:38:44 compute-1 sudo[144896]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:38:44 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:38:44 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:38:44 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:38:44.662 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:38:44 compute-1 python3.9[144898]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:38:44 compute-1 sudo[144896]: pam_unix(sudo:session): session closed for user root
Nov 24 09:38:45 compute-1 ceph-mon[80009]: pgmap v345: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 938 B/s wr, 2 op/s
Nov 24 09:38:45 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:38:45 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:38:45 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:38:45.065 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:38:45 compute-1 sudo[145048]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ohkciwtfaztghlxpmdsywkpyljisrlhc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977124.8319008-309-58924815176830/AnsiballZ_file.py'
Nov 24 09:38:45 compute-1 sudo[145048]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:38:45 compute-1 python3.9[145050]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:38:45 compute-1 sudo[145048]: pam_unix(sudo:session): session closed for user root
Nov 24 09:38:45 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 09:38:45 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:38:45 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[141451]: 24/11/2025 09:38:45 : epoch 69242784 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fae30003730 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:38:45 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[141451]: 24/11/2025 09:38:45 : epoch 69242784 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fae18003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:38:45 compute-1 sudo[145200]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cltypcuftgefazzipcmiixdfdbncitry ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977125.5280282-309-28711233384006/AnsiballZ_file.py'
Nov 24 09:38:45 compute-1 sudo[145200]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:38:46 compute-1 python3.9[145202]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:38:46 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:38:46 compute-1 sudo[145200]: pam_unix(sudo:session): session closed for user root
Nov 24 09:38:46 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[141451]: 24/11/2025 09:38:46 : epoch 69242784 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fae20003430 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:38:46 compute-1 sudo[145353]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yiuovylbtpomjauxireopfbllhigyskk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977126.237346-309-13649679811019/AnsiballZ_file.py'
Nov 24 09:38:46 compute-1 sudo[145353]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:38:46 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:38:46 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:38:46 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:38:46.664 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:38:46 compute-1 python3.9[145355]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:38:46 compute-1 sudo[145353]: pam_unix(sudo:session): session closed for user root
Nov 24 09:38:47 compute-1 ceph-mon[80009]: pgmap v346: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 938 B/s wr, 2 op/s
Nov 24 09:38:47 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:38:47 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:38:47 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:38:47.068 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:38:47 compute-1 sudo[145505]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vzfudsrkmxgnvsfhjwqhbqmfxdvrfvdt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977126.9066958-309-73858498525112/AnsiballZ_file.py'
Nov 24 09:38:47 compute-1 sudo[145505]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:38:47 compute-1 python3.9[145507]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:38:47 compute-1 sudo[145505]: pam_unix(sudo:session): session closed for user root
Nov 24 09:38:47 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[141451]: 24/11/2025 09:38:47 : epoch 69242784 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fae380033f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:38:47 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[141451]: 24/11/2025 09:38:47 : epoch 69242784 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fae300038d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:38:47 compute-1 sudo[145658]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lguobuswxqblpafehtrekopoqxixyjfh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977127.5969899-459-109774574821336/AnsiballZ_file.py'
Nov 24 09:38:47 compute-1 sudo[145658]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:38:47 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:38:48 compute-1 python3.9[145660]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:38:48 compute-1 sudo[145658]: pam_unix(sudo:session): session closed for user root
Nov 24 09:38:48 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[141451]: 24/11/2025 09:38:48 : epoch 69242784 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fae18003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:38:48 compute-1 sudo[145823]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qetqeemquyuszxkkiojmqpqiuaszcpdt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977128.2320244-459-279989475815889/AnsiballZ_file.py'
Nov 24 09:38:48 compute-1 sudo[145823]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:38:48 compute-1 podman[145784]: 2025-11-24 09:38:48.535259986 +0000 UTC m=+0.069481849 container health_status 6794d9d2f16f865643977633ea2bfec0506af6c4f15262c278b71a4c0754ea0b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.build-date=20251118, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 09:38:48 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:38:48 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:38:48 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:38:48.667 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
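The radosgw beast: lines are the RGW frontend's access log; the anonymous "HEAD /" probes arriving every two seconds, alternating between 192.168.122.100 and 192.168.122.102, are consistent with external load-balancer health checks. If the fields are needed programmatically, a parsing sketch follows; the regex is fitted to the format seen in these lines, not an official schema:

```python
import re

# Fitted to the beast access-log lines in this journal; not an official format.
BEAST_RE = re.compile(
    r'beast: \S+: (?P<client>\S+) - (?P<user>\S+) '
    r'\[(?P<time>[^\]]+)\] "(?P<request>[^"]+)" (?P<status>\d+) (?P<bytes>\d+) '
    r'.*latency=(?P<latency>[\d.]+)s'
)

line = ('beast: 0x7fa9789055d0: 192.168.122.100 - anonymous '
        '[24/Nov/2025:09:38:48.667 +0000] "HEAD / HTTP/1.0" 200 0 - - - '
        'latency=0.001000024s')
m = BEAST_RE.search(line)
print(m.group("client"), m.group("status"), float(m.group("latency")))
# 192.168.122.100 200 0.001000024
```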
Nov 24 09:38:48 compute-1 python3.9[145831]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:38:48 compute-1 sudo[145823]: pam_unix(sudo:session): session closed for user root
Nov 24 09:38:49 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:38:49 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:38:49 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:38:49.071 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:38:49 compute-1 ceph-mon[80009]: pgmap v347: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 938 B/s wr, 2 op/s
Nov 24 09:38:49 compute-1 sudo[145981]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vjnvrnnvkubhvnnaqiyadwbeiifvywmy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977128.8359132-459-124740832546852/AnsiballZ_file.py'
Nov 24 09:38:49 compute-1 sudo[145981]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:38:49 compute-1 python3.9[145983]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:38:49 compute-1 sudo[145981]: pam_unix(sudo:session): session closed for user root
Nov 24 09:38:49 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[141451]: 24/11/2025 09:38:49 : epoch 69242784 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fae20003430 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:38:49 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[141451]: 24/11/2025 09:38:49 : epoch 69242784 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fae380033f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:38:49 compute-1 sudo[146133]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tnkghvhkhmfmunyqhbzzsqvpqxukgars ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977129.5231009-459-192865581823407/AnsiballZ_file.py'
Nov 24 09:38:49 compute-1 sudo[146133]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:38:50 compute-1 python3.9[146135]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:38:50 compute-1 sudo[146133]: pam_unix(sudo:session): session closed for user root
Nov 24 09:38:50 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[141451]: 24/11/2025 09:38:50 : epoch 69242784 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fae300038f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:38:50 compute-1 sudo[146286]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gadtficgbbkymqtotoherlpgonktorhs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977130.177386-459-248776406455874/AnsiballZ_file.py'
Nov 24 09:38:50 compute-1 sudo[146286]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:38:50 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:38:50 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 09:38:50 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:38:50.670 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 09:38:50 compute-1 python3.9[146288]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:38:50 compute-1 sudo[146286]: pam_unix(sudo:session): session closed for user root
Nov 24 09:38:51 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:38:51 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:38:51 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:38:51.074 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:38:51 compute-1 ceph-mon[80009]: pgmap v348: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Nov 24 09:38:51 compute-1 sudo[146438]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vyuggfhkipiportgvgozkugclernkkzr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977130.8576229-459-251689477073131/AnsiballZ_file.py'
Nov 24 09:38:51 compute-1 sudo[146438]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:38:51 compute-1 python3.9[146440]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:38:51 compute-1 sudo[146438]: pam_unix(sudo:session): session closed for user root
Nov 24 09:38:51 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[141451]: 24/11/2025 09:38:51 : epoch 69242784 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fae18003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:38:51 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[141451]: 24/11/2025 09:38:51 : epoch 69242784 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fae20003430 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:38:51 compute-1 sudo[146590]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ltbncjxmlaxmeixtknossvtvsdmwsvqd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977131.4851148-459-16831231756066/AnsiballZ_file.py'
Nov 24 09:38:51 compute-1 sudo[146590]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:38:51 compute-1 python3.9[146592]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:38:51 compute-1 sudo[146590]: pam_unix(sudo:session): session closed for user root
Nov 24 09:38:52 compute-1 kernel: ganesha.nfsd[142511]: segfault at 50 ip 00007faef302a32e sp 00007faec9ffa210 error 4 in libntirpc.so.5.8[7faef300f000+2c000] likely on CPU 3 (core 0, socket 3)
Nov 24 09:38:52 compute-1 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
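The kernel segfault report already carries most of the triage data: the fault address (0x50, i.e. a near-NULL dereference, consistent with the dumped instruction bytes <45> 8b 65 50 reading offset 0x50 off a register that is presumably NULL), the instruction pointer inside libntirpc.so.5.8, and the x86 page-fault error code. "error 4" decodes as a user-mode read of a not-present page; a small decoder sketch for the standard error-code bitfield:

```python
# Decode the x86 page-fault error code printed in kernel segfault lines.
# Bit meanings are the standard x86 #PF error-code bits.
FLAGS = [
    (1 << 0, "protection violation (else: page not present)"),
    (1 << 1, "write access (else: read)"),
    (1 << 2, "user mode (else: kernel mode)"),
    (1 << 3, "reserved bit set in page table"),
    (1 << 4, "instruction fetch"),
]

def decode_pf_error(code: int) -> list[str]:
    return [desc for bit, desc in FLAGS if code & bit]

print(decode_pf_error(4))
# ['user mode (else: kernel mode)'] -> user-mode read of a non-present page
```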
Nov 24 09:38:52 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[141451]: 24/11/2025 09:38:52 : epoch 69242784 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fae380044f0 fd 38 proxy ignored for local
Nov 24 09:38:52 compute-1 systemd[1]: Started Process Core Dump (PID 146618/UID 0).
Nov 24 09:38:52 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:38:52 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:38:52 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:38:52.674 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:38:52 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:38:53 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:38:53 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:38:53 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:38:53.077 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:38:53 compute-1 ceph-mon[80009]: pgmap v349: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 85 B/s wr, 0 op/s
Nov 24 09:38:53 compute-1 sudo[146745]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xmqkloiizyabvkmkdtnkfmknodfemicf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977132.993407-612-151686760815293/AnsiballZ_command.py'
Nov 24 09:38:53 compute-1 sudo[146745]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:38:53 compute-1 systemd-coredump[146619]: Process 141455 (ganesha.nfsd) of user 0 dumped core.
                                                    
                                                    Stack trace of thread 43:
                                                    #0  0x00007faef302a32e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)
                                                    ELF object binary architecture: AMD x86-64
Nov 24 09:38:53 compute-1 systemd[1]: systemd-coredump@4-146618-0.service: Deactivated successfully.
Nov 24 09:38:53 compute-1 systemd[1]: systemd-coredump@4-146618-0.service: Consumed 1.247s CPU time.
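The stack trace only shows "libntirpc.so.5.8 + 0x2232e" because no debuginfo was available when the core was captured; `coredumpctl info 141455` will retrieve the stored record later. To turn the offset into a function name after the fact, one approach is sketched below. The symbolize() helper is hypothetical, and it assumes the matching libntirpc debuginfo package is installed and that addr2line is on the host:

```python
import subprocess

# Hypothetical helper: symbolize a "module + offset" frame from a
# systemd-coredump stack trace, assuming debuginfo for the module is installed.
def symbolize(module: str, offset: int) -> str:
    out = subprocess.run(
        ["addr2line", "-f", "-C", "-e", module, hex(offset)],
        capture_output=True, text=True, check=True,
    )
    return out.stdout.strip()

# Frame #0 from the trace above; systemd-coredump prints the
# module-relative offset, which is what addr2line expects here.
print(symbolize("/usr/lib64/libntirpc.so.5.8", 0x2232e))
```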
Nov 24 09:38:53 compute-1 python3.9[146747]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then
                                               systemctl disable --now certmonger.service
                                               test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service
                                             fi
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 09:38:53 compute-1 sudo[146745]: pam_unix(sudo:session): session closed for user root
Nov 24 09:38:53 compute-1 podman[146755]: 2025-11-24 09:38:53.569910948 +0000 UTC m=+0.029564683 container died 679112fde20091df69e7ec390984a19f2940b3f9ab05818fbcda8617354fdc82 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 09:38:53 compute-1 systemd[1]: var-lib-containers-storage-overlay-6cc9e1a38f1f877fde7085e5844e3e0c94cc5d0c869a65827de724ee9e26bcda-merged.mount: Deactivated successfully.
Nov 24 09:38:53 compute-1 podman[146755]: 2025-11-24 09:38:53.621657092 +0000 UTC m=+0.081310827 container remove 679112fde20091df69e7ec390984a19f2940b3f9ab05818fbcda8617354fdc82 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default)
Nov 24 09:38:53 compute-1 systemd[1]: ceph-84a084c3-61a7-5de7-8207-1f88efa59a64@nfs.cephfs.0.0.compute-1.vvoanr.service: Main process exited, code=exited, status=139/n/a
Nov 24 09:38:53 compute-1 systemd[1]: ceph-84a084c3-61a7-5de7-8207-1f88efa59a64@nfs.cephfs.0.0.compute-1.vvoanr.service: Failed with result 'exit-code'.
Nov 24 09:38:53 compute-1 systemd[1]: ceph-84a084c3-61a7-5de7-8207-1f88efa59a64@nfs.cephfs.0.0.compute-1.vvoanr.service: Consumed 1.581s CPU time.
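"status=139" is the conventional shell-style encoding 128 + signal number, i.e. SIGSEGV (11): the container wrapper exits with 139 because the ganesha.nfsd process inside it died on the segfault logged above. A one-liner to decode such statuses:

```python
import signal

status = 139  # from "Main process exited, code=exited, status=139/n/a"
if status > 128:
    sig = signal.Signals(status - 128)
    print(f"terminated by {sig.name} (signal {sig.value})")
# terminated by SIGSEGV (signal 11)
```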
Nov 24 09:38:54 compute-1 python3.9[146948]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Nov 24 09:38:54 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:38:54 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:38:54 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:38:54.676 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:38:55 compute-1 sudo[147098]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eqyorgdlygchbxsqyinolhfirmalsmkh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977134.705647-666-182810307297273/AnsiballZ_systemd_service.py'
Nov 24 09:38:55 compute-1 sudo[147098]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:38:55 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:38:55 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:38:55 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:38:55.080 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:38:55 compute-1 ceph-mon[80009]: pgmap v350: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Nov 24 09:38:55 compute-1 python3.9[147100]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 24 09:38:55 compute-1 systemd[1]: Reloading.
Nov 24 09:38:55 compute-1 systemd-rc-local-generator[147125]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 09:38:55 compute-1 systemd-sysv-generator[147130]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 09:38:56 compute-1 sudo[147098]: pam_unix(sudo:session): session closed for user root
Nov 24 09:38:56 compute-1 sudo[147285]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wcmrlergleymvvrksxoqectqlqfezcqd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977136.3166323-690-177657871616610/AnsiballZ_command.py'
Nov 24 09:38:56 compute-1 sudo[147285]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:38:56 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:38:56 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:38:56 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:38:56.677 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:38:56 compute-1 python3.9[147287]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_libvirt.target _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 09:38:56 compute-1 sudo[147285]: pam_unix(sudo:session): session closed for user root
Nov 24 09:38:57 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:38:57 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:38:57 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:38:57.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:38:57 compute-1 ceph-mon[80009]: pgmap v351: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Nov 24 09:38:57 compute-1 sudo[147438]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-geztvayiqdxoaqbwmkzpkmqowdazhfgv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977136.91829-690-33097612012417/AnsiballZ_command.py'
Nov 24 09:38:57 compute-1 sudo[147438]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:38:57 compute-1 python3.9[147440]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtlogd_wrapper.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 09:38:57 compute-1 sudo[147438]: pam_unix(sudo:session): session closed for user root
Nov 24 09:38:57 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-haproxy-nfs-cephfs-compute-1-rsdpvy[85975]: [WARNING] 327/093857 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
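This is the expected knock-on effect of the crash: the haproxy Layer4 health check gets "Connection refused" while ganesha is between restarts, marks the nfs.cephfs.0 server DOWN, and the two remaining servers keep serving. A sketch for pulling the backend, server, and reason out of these WARNING lines; the regex is fitted to this one message format:

```python
import re

# Fitted to the haproxy "Server ... is DOWN" message seen in this journal.
DOWN_RE = re.compile(
    r"Server (?P<backend>\S+)/(?P<server>\S+) is DOWN, "
    r"reason: (?P<reason>[^,]+), info: \"(?P<info>[^\"]+)\""
)

msg = ('Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection '
       'problem, info: "Connection refused", check duration: 0ms.')
m = DOWN_RE.search(msg)
print(m.group("backend"), m.group("server"), "-", m.group("info"))
# backend nfs.cephfs.0 - Connection refused
```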
Nov 24 09:38:57 compute-1 sudo[147591]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xqhkrgblarkfyvltsskjdyopafetkjme ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977137.5652444-690-24326238794782/AnsiballZ_command.py'
Nov 24 09:38:57 compute-1 sudo[147591]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:38:57 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:38:58 compute-1 python3.9[147593]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtnodedevd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 09:38:58 compute-1 sudo[147591]: pam_unix(sudo:session): session closed for user root
Nov 24 09:38:58 compute-1 sudo[147745]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lxifquknadgdvjrempjkeodthuyrfhfy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977138.1668293-690-216880245015132/AnsiballZ_command.py'
Nov 24 09:38:58 compute-1 sudo[147745]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:38:58 compute-1 python3.9[147747]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtproxyd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 09:38:58 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:38:58 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:38:58 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:38:58.680 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:38:58 compute-1 sudo[147745]: pam_unix(sudo:session): session closed for user root
Nov 24 09:38:59 compute-1 sudo[147898]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-begkugognyktdplxrkyekuwuyzqatjua ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977138.8071964-690-130001688858001/AnsiballZ_command.py'
Nov 24 09:38:59 compute-1 sudo[147898]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:38:59 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:38:59 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:38:59 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:38:59.088 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:38:59 compute-1 ceph-mon[80009]: pgmap v352: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Nov 24 09:38:59 compute-1 python3.9[147900]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtqemud.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 09:38:59 compute-1 sudo[147898]: pam_unix(sudo:session): session closed for user root
Nov 24 09:38:59 compute-1 sudo[148051]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-evmkonvezivxcsslcasjvtceaooocptp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977139.387931-690-67873657940264/AnsiballZ_command.py'
Nov 24 09:38:59 compute-1 sudo[148051]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:38:59 compute-1 python3.9[148053]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtsecretd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 09:38:59 compute-1 sudo[148051]: pam_unix(sudo:session): session closed for user root
Nov 24 09:39:00 compute-1 sudo[148205]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sznfhzxpayrtplwjjsdoterhaykgehgv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977139.9693904-690-78265241900315/AnsiballZ_command.py'
Nov 24 09:39:00 compute-1 sudo[148205]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:39:00 compute-1 python3.9[148207]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtstoraged.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
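Each removed tripleo_* unit gets a `systemctl reset-failed` because systemd keeps failed-unit state in memory even after the unit file is deleted. The play issues one command task per unit; the equivalent loop, as a sketch in Python with the unit list copied from the commands above:

```python
import subprocess

# Units whose files were just removed; reset-failed clears any lingering
# "failed" state systemd still tracks for them in memory.
UNITS = [
    "tripleo_nova_libvirt.target",
    "tripleo_nova_virtlogd_wrapper.service",
    "tripleo_nova_virtnodedevd.service",
    "tripleo_nova_virtproxyd.service",
    "tripleo_nova_virtqemud.service",
    "tripleo_nova_virtsecretd.service",
    "tripleo_nova_virtstoraged.service",
]

for unit in UNITS:
    # check=False: reset-failed on an already-clean unit is harmless here.
    subprocess.run(["systemctl", "reset-failed", unit], check=False)
```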
Nov 24 09:39:00 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 09:39:00 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:39:00 compute-1 sudo[148205]: pam_unix(sudo:session): session closed for user root
Nov 24 09:39:00 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:39:00 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:39:00 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:39:00.684 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:39:01 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:39:01 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:39:01 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:39:01.090 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:39:01 compute-1 ceph-mon[80009]: pgmap v353: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:39:01 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:39:01 compute-1 sudo[148233]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 09:39:01 compute-1 sudo[148233]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:39:01 compute-1 sudo[148233]: pam_unix(sudo:session): session closed for user root
Nov 24 09:39:02 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:39:02 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:39:02 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:39:02.688 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:39:02 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:39:03 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:39:03 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:39:03 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:39:03.093 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:39:03 compute-1 ceph-mon[80009]: pgmap v354: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Nov 24 09:39:03 compute-1 systemd[1]: ceph-84a084c3-61a7-5de7-8207-1f88efa59a64@nfs.cephfs.0.0.compute-1.vvoanr.service: Scheduled restart job, restart counter is at 5.
Nov 24 09:39:03 compute-1 systemd[1]: Stopped Ceph nfs.cephfs.0.0.compute-1.vvoanr for 84a084c3-61a7-5de7-8207-1f88efa59a64.
Nov 24 09:39:03 compute-1 systemd[1]: ceph-84a084c3-61a7-5de7-8207-1f88efa59a64@nfs.cephfs.0.0.compute-1.vvoanr.service: Consumed 1.581s CPU time.
Nov 24 09:39:03 compute-1 systemd[1]: Starting Ceph nfs.cephfs.0.0.compute-1.vvoanr for 84a084c3-61a7-5de7-8207-1f88efa59a64...
Nov 24 09:39:04 compute-1 podman[148380]: 2025-11-24 09:39:04.208472445 +0000 UTC m=+0.049717725 container create 72f08d8220aebf8a177859741b959495fc0d990644c83dbaa6c96d6a6ae331e3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 09:39:04 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca848b0992b5514535531e6a96c6a662d50b4b69145fd2e2f67a3d9272fea15d/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Nov 24 09:39:04 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca848b0992b5514535531e6a96c6a662d50b4b69145fd2e2f67a3d9272fea15d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 09:39:04 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca848b0992b5514535531e6a96c6a662d50b4b69145fd2e2f67a3d9272fea15d/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 09:39:04 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca848b0992b5514535531e6a96c6a662d50b4b69145fd2e2f67a3d9272fea15d/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.0.0.compute-1.vvoanr-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 09:39:04 compute-1 podman[148380]: 2025-11-24 09:39:04.270951541 +0000 UTC m=+0.112196821 container init 72f08d8220aebf8a177859741b959495fc0d990644c83dbaa6c96d6a6ae331e3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1)
Nov 24 09:39:04 compute-1 podman[148380]: 2025-11-24 09:39:04.277450909 +0000 UTC m=+0.118696179 container start 72f08d8220aebf8a177859741b959495fc0d990644c83dbaa6c96d6a6ae331e3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 09:39:04 compute-1 podman[148380]: 2025-11-24 09:39:04.183888165 +0000 UTC m=+0.025133485 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 09:39:04 compute-1 bash[148380]: 72f08d8220aebf8a177859741b959495fc0d990644c83dbaa6c96d6a6ae331e3
Nov 24 09:39:04 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:39:04 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Nov 24 09:39:04 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:39:04 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Nov 24 09:39:04 compute-1 systemd[1]: Started Ceph nfs.cephfs.0.0.compute-1.vvoanr for 84a084c3-61a7-5de7-8207-1f88efa59a64.
Nov 24 09:39:04 compute-1 sudo[148449]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sqczfrccoibxdjpzimaewobipgdlljmx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977143.9173388-852-84075921813303/AnsiballZ_getent.py'
Nov 24 09:39:04 compute-1 sudo[148449]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:39:04 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:39:04 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Nov 24 09:39:04 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:39:04 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Nov 24 09:39:04 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:39:04 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Nov 24 09:39:04 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:39:04 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Nov 24 09:39:04 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:39:04 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Nov 24 09:39:04 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:39:04 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 09:39:04 compute-1 python3.9[148466]: ansible-ansible.builtin.getent Invoked with database=passwd key=libvirt fail_key=True service=None split=None
Nov 24 09:39:04 compute-1 sudo[148449]: pam_unix(sudo:session): session closed for user root
Nov 24 09:39:04 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:39:04 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:39:04 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:39:04.691 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:39:05 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:39:05 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:39:05 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:39:05.096 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:39:05 compute-1 sudo[148639]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iukmmwdaelewtdhzoxcrnnelthlwcvho ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977144.7854776-876-213209994743762/AnsiballZ_group.py'
Nov 24 09:39:05 compute-1 sudo[148639]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:39:05 compute-1 ceph-mon[80009]: pgmap v355: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Nov 24 09:39:05 compute-1 python3.9[148641]: ansible-ansible.builtin.group Invoked with gid=42473 name=libvirt state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Nov 24 09:39:05 compute-1 groupadd[148642]: group added to /etc/group: name=libvirt, GID=42473
Nov 24 09:39:05 compute-1 groupadd[148642]: group added to /etc/gshadow: name=libvirt
Nov 24 09:39:05 compute-1 groupadd[148642]: new group: name=libvirt, GID=42473
Nov 24 09:39:05 compute-1 sudo[148639]: pam_unix(sudo:session): session closed for user root
Nov 24 09:39:06 compute-1 sudo[148798]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lfqexamhqxybfdqtecclmvasothwrift ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977145.7856853-900-184541676799460/AnsiballZ_user.py'
Nov 24 09:39:06 compute-1 sudo[148798]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:39:06 compute-1 python3.9[148800]: ansible-ansible.builtin.user Invoked with comment=libvirt user group=libvirt groups=[''] name=libvirt shell=/sbin/nologin state=present uid=42473 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-1 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Nov 24 09:39:06 compute-1 useradd[148802]: new user: name=libvirt, UID=42473, GID=42473, home=/home/libvirt, shell=/sbin/nologin, from=/dev/pts/0
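The play pins the libvirt group and user to 42473 so file ownership stays consistent between the host and containers across nodes. A quick standard-library check that the account landed as requested:

```python
import grp
import pwd

# Verify the libvirt account created above: fixed UID/GID 42473,
# nologin shell, primary group 'libvirt'.
u = pwd.getpwnam("libvirt")
g = grp.getgrnam("libvirt")
assert (u.pw_uid, u.pw_gid, g.gr_gid) == (42473, 42473, 42473)
assert u.pw_shell == "/sbin/nologin"
print("libvirt account ok:", u.pw_uid, u.pw_dir)
```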
Nov 24 09:39:06 compute-1 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 24 09:39:06 compute-1 sudo[148798]: pam_unix(sudo:session): session closed for user root
Nov 24 09:39:06 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:39:06 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:39:06 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:39:06.692 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:39:07 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:39:07 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:39:07 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:39:07.099 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:39:07 compute-1 ceph-mon[80009]: pgmap v356: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Nov 24 09:39:07 compute-1 sudo[148959]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kiwtuuutbhyyvqctzxujgkmkzhlksyuu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977147.0772002-933-265802451171241/AnsiballZ_setup.py'
Nov 24 09:39:07 compute-1 sudo[148959]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:39:07 compute-1 python3.9[148961]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 24 09:39:07 compute-1 sudo[148959]: pam_unix(sudo:session): session closed for user root
Nov 24 09:39:07 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:39:08 compute-1 sudo[149044]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sccvqxalkbfoxfcmceymannmuretzpfn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977147.0772002-933-265802451171241/AnsiballZ_dnf.py'
Nov 24 09:39:08 compute-1 sudo[149044]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:39:08 compute-1 ceph-mon[80009]: pgmap v357: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 85 B/s wr, 0 op/s
Nov 24 09:39:08 compute-1 python3.9[149046]: ansible-ansible.legacy.dnf Invoked with name=['libvirt ', 'libvirt-admin ', 'libvirt-client ', 'libvirt-daemon ', 'qemu-kvm', 'qemu-img', 'libguestfs', 'libseccomp', 'swtpm', 'swtpm-tools', 'edk2-ovmf', 'ceph-common', 'cyrus-sasl-scram'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
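Note the stray trailing spaces in several of the package names above ('libvirt ', 'libvirt-admin ', 'libvirt-client ', 'libvirt-daemon '), presumably a templating artifact in the variable that feeds this task; dnf appears to tolerate them here, but normalizing the list before passing it on is cheap insurance. A sketch:

```python
# Package list as it appears in the dnf invocation above; several entries
# carry trailing whitespace, presumably a templating artifact.
packages = ['libvirt ', 'libvirt-admin ', 'libvirt-client ', 'libvirt-daemon ',
            'qemu-kvm', 'qemu-img', 'libguestfs', 'libseccomp', 'swtpm',
            'swtpm-tools', 'edk2-ovmf', 'ceph-common', 'cyrus-sasl-scram']

cleaned = [p.strip() for p in packages if p.strip()]
assert len(cleaned) == len(set(cleaned)), "duplicate package names after cleanup"
print(cleaned)
```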
Nov 24 09:39:08 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:39:08 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:39:08 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:39:08.695 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:39:09 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:39:09 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:39:09 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:39:09.102 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:39:10 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:39:10 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 09:39:10 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:39:10 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 09:39:10 compute-1 ceph-mon[80009]: pgmap v358: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Nov 24 09:39:10 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:39:10 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 09:39:10 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:39:10.698 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 09:39:11 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:39:11 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:39:11 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:39:11.106 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:39:11 compute-1 podman[149057]: 2025-11-24 09:39:11.329314169 +0000 UTC m=+0.071792464 container health_status c4a7fece2a8abbd773f184380ebf0443df5298e1c3e7a580495db5c46749acd2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 24 09:39:12 compute-1 ceph-mon[80009]: pgmap v359: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 938 B/s wr, 3 op/s
Nov 24 09:39:12 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:39:12 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:39:12 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:39:12.701 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:39:12 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:39:13 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:39:13 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:39:13 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:39:13.110 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:39:14 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:39:14 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:39:14 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:39:14.704 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:39:14 compute-1 ceph-mon[80009]: pgmap v360: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Nov 24 09:39:15 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:39:15 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:39:15 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:39:15.114 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:39:15 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 09:39:15 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:39:15 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:39:16 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:39:16 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Nov 24 09:39:16 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:39:16 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Nov 24 09:39:16 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:39:16 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Nov 24 09:39:16 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:39:16 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Nov 24 09:39:16 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:39:16 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Nov 24 09:39:16 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:39:16 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Nov 24 09:39:16 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:39:16 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Nov 24 09:39:16 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:39:16 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 24 09:39:16 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:39:16 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 24 09:39:16 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:39:16 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 24 09:39:16 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:39:16 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Nov 24 09:39:16 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:39:16 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 24 09:39:16 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:39:16 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Nov 24 09:39:16 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:39:16 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Nov 24 09:39:16 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:39:16 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Nov 24 09:39:16 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:39:16 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Nov 24 09:39:16 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:39:16 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Nov 24 09:39:16 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:39:16 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Nov 24 09:39:16 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:39:16 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Nov 24 09:39:16 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:39:16 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Nov 24 09:39:16 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:39:16 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Nov 24 09:39:16 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:39:16 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Nov 24 09:39:16 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:39:16 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Nov 24 09:39:16 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:39:16 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Nov 24 09:39:16 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:39:16 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Nov 24 09:39:16 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:39:16 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Nov 24 09:39:16 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:39:16 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Nov 24 09:39:16 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:39:16 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 09:39:16 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:39:16.707 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 09:39:16 compute-1 ceph-mon[80009]: pgmap v361: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Nov 24 09:39:17 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:39:17 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:39:17 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:39:17.117 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:39:17 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:39:17 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9104000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:39:17 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:39:17 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90fc001ac0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:39:17 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:39:18 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:39:18 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90e0000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:39:18 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:39:18 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:39:18 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:39:18.711 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:39:18 compute-1 ceph-mon[80009]: pgmap v362: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 1023 B/s wr, 3 op/s
Nov 24 09:39:19 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:39:19 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:39:19 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:39:19.121 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:39:19 compute-1 sudo[149276]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 09:39:19 compute-1 sudo[149276]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:39:19 compute-1 sudo[149276]: pam_unix(sudo:session): session closed for user root
Nov 24 09:39:19 compute-1 podman[149300]: 2025-11-24 09:39:19.313365325 +0000 UTC m=+0.052122213 container health_status 6794d9d2f16f865643977633ea2bfec0506af6c4f15262c278b71a4c0754ea0b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Nov 24 09:39:19 compute-1 sudo[149307]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Nov 24 09:39:19 compute-1 sudo[149307]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:39:19 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:39:19 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90d8000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:39:19 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-haproxy-nfs-cephfs-compute-1-rsdpvy[85975]: [WARNING] 327/093919 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Nov 24 09:39:19 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:39:19 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90d8000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:39:19 compute-1 sudo[149307]: pam_unix(sudo:session): session closed for user root
Nov 24 09:39:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:39:20.037 142336 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 09:39:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:39:20.038 142336 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 09:39:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:39:20.038 142336 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 09:39:20 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 24 09:39:20 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 09:39:20 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Nov 24 09:39:20 compute-1 ceph-mon[80009]: log_channel(audit) log [INF] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 09:39:20 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Nov 24 09:39:20 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Nov 24 09:39:20 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:39:20 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90fc0025c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:39:20 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Nov 24 09:39:20 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 09:39:20 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Nov 24 09:39:20 compute-1 ceph-mon[80009]: log_channel(audit) log [INF] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 09:39:20 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 24 09:39:20 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 09:39:20 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:39:20 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:39:20 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:39:20.715 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:39:20 compute-1 ceph-mon[80009]: pgmap v363: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 938 B/s wr, 2 op/s
Nov 24 09:39:20 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 09:39:20 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 09:39:20 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:39:20 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:39:20 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 09:39:20 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 09:39:20 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 09:39:21 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:39:21 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:39:21 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:39:21.125 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:39:21 compute-1 sudo[149375]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 09:39:21 compute-1 sudo[149375]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:39:21 compute-1 sudo[149375]: pam_unix(sudo:session): session closed for user root
Nov 24 09:39:21 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:39:21 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90e00016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:39:21 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:39:21 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90d8000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:39:22 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:39:22 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90f00016c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:39:22 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-haproxy-nfs-cephfs-compute-1-rsdpvy[85975]: [WARNING] 327/093922 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 1ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Nov 24 09:39:22 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:39:22 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:39:22 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:39:22.717 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:39:22 compute-1 ceph-mon[80009]: pgmap v364: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 938 B/s wr, 3 op/s
Nov 24 09:39:22 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:39:23 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-haproxy-nfs-cephfs-compute-1-rsdpvy[85975]: [WARNING] 327/093923 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Nov 24 09:39:23 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:39:23 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:39:23 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:39:23.129 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:39:23 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:39:23 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90fc0025c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:39:23 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:39:23 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90e00016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:39:24 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:39:24 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90d8001fc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:39:24 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:39:24 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:39:24 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:39:24.720 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:39:24 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 24 09:39:24 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 24 09:39:24 compute-1 ceph-mon[80009]: pgmap v365: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Nov 24 09:39:24 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:39:24 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:39:24 compute-1 sudo[149408]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 24 09:39:24 compute-1 sudo[149408]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:39:24 compute-1 sudo[149408]: pam_unix(sudo:session): session closed for user root
Nov 24 09:39:25 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:39:25 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:39:25 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:39:25.133 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:39:25 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:39:25 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90f00021c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:39:25 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:39:25 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90fc0025c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:39:26 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:39:26 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90e00016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:39:26 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:39:26 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:39:26 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:39:26.723 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:39:26 compute-1 ceph-mon[80009]: pgmap v366: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Nov 24 09:39:27 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:39:27 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:39:27 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:39:27.137 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:39:27 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:39:27 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90d8001fc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:39:27 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:39:27 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90f00021c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:39:27 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:39:28 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:39:28 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90fc0025c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:39:28 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:39:28 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 24 09:39:28 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:39:28.726 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 24 09:39:28 compute-1 ceph-mon[80009]: pgmap v367: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Nov 24 09:39:29 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:39:29 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:39:29 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:39:29.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:39:29 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:39:29 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90e0002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:39:29 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:39:29 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90d8001fc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:39:30 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:39:30 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90f0002ed0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:39:30 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 09:39:30 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:39:30 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:39:30 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 24 09:39:30 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:39:30.729 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 24 09:39:30 compute-1 ceph-mon[80009]: pgmap v368: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 0 B/s wr, 0 op/s
Nov 24 09:39:30 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:39:31 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:39:31 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:39:31 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:39:31.143 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:39:31 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:39:31 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90fc0025c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:39:31 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:39:31 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90e0002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:39:32 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:39:32 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90d80032f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:39:32 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:39:32 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 24 09:39:32 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:39:32.732 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 24 09:39:32 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:39:33 compute-1 kernel: SELinux:  Converting 2772 SID table entries...
Nov 24 09:39:33 compute-1 kernel: SELinux:  policy capability network_peer_controls=1
Nov 24 09:39:33 compute-1 kernel: SELinux:  policy capability open_perms=1
Nov 24 09:39:33 compute-1 kernel: SELinux:  policy capability extended_socket_class=1
Nov 24 09:39:33 compute-1 kernel: SELinux:  policy capability always_check_network=0
Nov 24 09:39:33 compute-1 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 24 09:39:33 compute-1 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 24 09:39:33 compute-1 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 24 09:39:33 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:39:33 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:39:33 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:39:33.147 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:39:33 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:39:33 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90f0002ed0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:39:33 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:39:33 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90fc0025c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:39:34 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:39:34 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90e0002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:39:34 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:39:34 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 09:39:34 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:39:34 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:39:34 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:39:34.735 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:39:34 compute-1 ceph-mon[80009]: pgmap v369: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 852 B/s rd, 170 B/s wr, 1 op/s
Nov 24 09:39:35 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:39:35 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:39:35 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:39:35.151 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:39:35 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:39:35 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90d80032f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:39:35 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:39:35 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90f0002ed0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:39:35 compute-1 ceph-mon[80009]: pgmap v370: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 170 B/s wr, 1 op/s
Nov 24 09:39:36 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:39:36 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90fc0025c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:39:36 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:39:36 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:39:36 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:39:36.738 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:39:37 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:39:37 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:39:37 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:39:37.154 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:39:37 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:39:37 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90e0003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:39:37 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:39:37 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90d8003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:39:37 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:39:37 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 09:39:37 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:39:37 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 09:39:37 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:39:38 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:39:38 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90f0002ed0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:39:38 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:39:38 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:39:38 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:39:38.740 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:39:39 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:39:39 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:39:39 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:39:39.157 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:39:39 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:39:39 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 09:39:39 compute-1 ceph-mon[80009]: pgmap v371: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 170 B/s wr, 1 op/s
Nov 24 09:39:39 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:39:39 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90fc0025c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:39:39 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:39:39 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90e0003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:39:40 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:39:40 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90d8003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:39:40 compute-1 ceph-mon[80009]: pgmap v372: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 597 B/s wr, 2 op/s
Nov 24 09:39:40 compute-1 ceph-mon[80009]: pgmap v373: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 597 B/s wr, 2 op/s
Nov 24 09:39:40 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:39:40 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:39:40 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:39:40.744 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:39:41 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:39:41 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:39:41 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:39:41.160 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:39:41 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:39:41 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90f0002ed0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:39:41 compute-1 dbus-broker-launch[809]: avc:  op=load_policy lsm=selinux seqno=12 res=1
Nov 24 09:39:41 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:39:41 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90fc0025c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:39:41 compute-1 sudo[149449]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 09:39:41 compute-1 sudo[149449]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:39:41 compute-1 sudo[149449]: pam_unix(sudo:session): session closed for user root
Nov 24 09:39:41 compute-1 podman[149473]: 2025-11-24 09:39:41.74250881 +0000 UTC m=+0.103712026 container health_status c4a7fece2a8abbd773f184380ebf0443df5298e1c3e7a580495db5c46749acd2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3)
Nov 24 09:39:42 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:39:42 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90fc0025c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:39:42 compute-1 ceph-mon[80009]: pgmap v374: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 938 B/s wr, 3 op/s
Nov 24 09:39:42 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:39:42 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:39:42 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:39:42.747 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:39:42 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:39:42 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Nov 24 09:39:42 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:39:43 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:39:43 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:39:43 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:39:43.163 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:39:43 compute-1 kernel: SELinux:  Converting 2772 SID table entries...
Nov 24 09:39:43 compute-1 kernel: SELinux:  policy capability network_peer_controls=1
Nov 24 09:39:43 compute-1 kernel: SELinux:  policy capability open_perms=1
Nov 24 09:39:43 compute-1 kernel: SELinux:  policy capability extended_socket_class=1
Nov 24 09:39:43 compute-1 kernel: SELinux:  policy capability always_check_network=0
Nov 24 09:39:43 compute-1 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 24 09:39:43 compute-1 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 24 09:39:43 compute-1 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 24 09:39:43 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:39:43 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90e0003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:39:43 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:39:43 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90e4000fa0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:39:44 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:39:44 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90d8003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:39:44 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:39:44 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:39:44 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:39:44.749 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:39:44 compute-1 ceph-mon[80009]: pgmap v375: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.6 KiB/s rd, 767 B/s wr, 2 op/s
Nov 24 09:39:45 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:39:45 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:39:45 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:39:45.166 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:39:45 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 09:39:45 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:39:45 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:39:45 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90fc0025c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:39:45 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:39:45 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90e0003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:39:45 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:39:46 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:39:46 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90e4001ac0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:39:46 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:39:46 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:39:46 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:39:46.752 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:39:46 compute-1 ceph-mon[80009]: pgmap v376: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.6 KiB/s rd, 767 B/s wr, 2 op/s
Nov 24 09:39:47 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:39:47 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 24 09:39:47 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:39:47.170 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 24 09:39:47 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:39:47 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90d8003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:39:47 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:39:47 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90fc0025c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:39:47 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:39:48 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:39:48 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90e0003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:39:48 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-haproxy-nfs-cephfs-compute-1-rsdpvy[85975]: [WARNING] 327/093948 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 0ms. 2 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Nov 24 09:39:48 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:39:48 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:39:48 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:39:48.756 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:39:48 compute-1 ceph-mon[80009]: pgmap v377: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 938 B/s wr, 3 op/s
Nov 24 09:39:49 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-haproxy-nfs-cephfs-compute-1-rsdpvy[85975]: [WARNING] 327/093949 (4) : Server backend/nfs.cephfs.2 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Nov 24 09:39:49 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:39:49 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:39:49 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:39:49.174 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:39:49 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:39:49 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90e4001ac0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:39:49 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:39:49 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90d8003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:39:50 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:39:50 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90d4000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:39:50 compute-1 dbus-broker-launch[809]: avc:  op=load_policy lsm=selinux seqno=13 res=1
Nov 24 09:39:50 compute-1 podman[149513]: 2025-11-24 09:39:50.343948757 +0000 UTC m=+0.065752629 container health_status 6794d9d2f16f865643977633ea2bfec0506af6c4f15262c278b71a4c0754ea0b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 24 09:39:50 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:39:50 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:39:50 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:39:50.759 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:39:51 compute-1 ceph-mon[80009]: pgmap v378: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 511 B/s wr, 2 op/s
Nov 24 09:39:51 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:39:51 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:39:51 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:39:51.179 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:39:51 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:39:51 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90e0003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:39:51 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:39:51 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90e4001ac0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:39:52 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:39:52 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90d8003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:39:52 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:39:52 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 24 09:39:52 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:39:52.762 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 24 09:39:52 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:39:53 compute-1 ceph-mon[80009]: pgmap v379: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 511 B/s wr, 2 op/s
Nov 24 09:39:53 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:39:53 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:39:53 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:39:53.182 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:39:53 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:39:53 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90d40016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:39:53 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:39:53 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90e0003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:39:54 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:39:54 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90e4002f50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:39:54 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:39:54 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 24 09:39:54 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:39:54.765 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 24 09:39:55 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:39:55 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:39:55 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:39:55.186 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:39:55 compute-1 ceph-mon[80009]: pgmap v380: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 170 B/s wr, 1 op/s
Nov 24 09:39:55 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:39:55 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90d8003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:39:55 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:39:55 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90d40016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:39:56 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:39:56 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90e0003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:39:56 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:39:56 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:39:56 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:39:56.768 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:39:57 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:39:57 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:39:57 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:39:57.190 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:39:57 compute-1 ceph-mon[80009]: pgmap v381: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 170 B/s wr, 1 op/s
Nov 24 09:39:57 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:39:57 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90e4002f50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:39:57 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:39:57 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90d8003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:39:58 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:39:58 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90d40016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:39:58 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:39:58 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:39:58 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:39:58 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:39:58.770 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:39:59 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:39:59 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:39:59 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:39:59.193 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:39:59 compute-1 ceph-mon[80009]: pgmap v382: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 170 B/s wr, 1 op/s
Nov 24 09:39:59 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:39:59 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90e0003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:39:59 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:39:59 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90e4002f50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:40:00 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:40:00 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90d8003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:40:00 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 09:40:00 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:40:00 compute-1 ceph-mon[80009]: pgmap v383: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:40:00 compute-1 ceph-mon[80009]: overall HEALTH_OK
Nov 24 09:40:00 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:40:00 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:40:00 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 24 09:40:00 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:40:00.773 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 24 09:40:01 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:40:01 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 24 09:40:01 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:40:01.197 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 24 09:40:01 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:40:01 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90d4002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:40:01 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:40:01 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90e0003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:40:01 compute-1 sudo[153697]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 09:40:01 compute-1 sudo[153697]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:40:01 compute-1 sudo[153697]: pam_unix(sudo:session): session closed for user root
Nov 24 09:40:02 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:40:02 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90e4004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:40:02 compute-1 ceph-mon[80009]: pgmap v384: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:40:02 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:40:02 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 24 09:40:02 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:40:02.777 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 24 09:40:03 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:40:03 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 24 09:40:03 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:40:03.200 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 24 09:40:03 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:40:03 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:40:03 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90d8003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:40:03 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:40:03 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90d4002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:40:04 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:40:04 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90d8003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:40:04 compute-1 ceph-mon[80009]: pgmap v385: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:40:04 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:40:04 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:40:04 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:40:04.781 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:40:05 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:40:05 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:40:05 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:40:05.204 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:40:05 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:40:05 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90e4004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:40:05 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:40:05 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90e0003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:40:06 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:40:06 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90e0003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:40:06 compute-1 ceph-mon[80009]: pgmap v386: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:40:06 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:40:06 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:40:06 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:40:06.784 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:40:07 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:40:07 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:40:07 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:40:07.208 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:40:07 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:40:07 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90d8003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:40:07 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:40:07 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90e4004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:40:08 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:40:08 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90e4004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:40:08 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:40:08 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:40:08 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:40:08 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:40:08.788 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:40:08 compute-1 ceph-mon[80009]: pgmap v387: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 24 09:40:09 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:40:09 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:40:09 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:40:09.212 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:40:09 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:40:09 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90e4004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:40:09 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:40:09 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90e4004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:40:10 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:40:10 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90e0003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:40:10 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:40:10 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:40:10 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:40:10.790 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:40:10 compute-1 ceph-mon[80009]: pgmap v388: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:40:11 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:40:11 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:40:11 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:40:11.216 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:40:11 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:40:11 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90e4004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:40:11 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:40:11 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90e4004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:40:12 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:40:12 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90e4004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:40:12 compute-1 podman[160190]: 2025-11-24 09:40:12.358933253 +0000 UTC m=+0.092535794 container health_status c4a7fece2a8abbd773f184380ebf0443df5298e1c3e7a580495db5c46749acd2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_controller, managed_by=edpm_ansible, container_name=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3)
Nov 24 09:40:12 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:40:12 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:40:12 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:40:12.794 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:40:12 compute-1 ceph-mon[80009]: pgmap v389: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:40:13 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:40:13 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:40:13 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:40:13.219 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:40:13 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:40:13 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:40:13 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90d8003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:40:13 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:40:13 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90d4003820 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:40:14 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:40:14 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90f0001bf0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:40:14 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:40:14 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:40:14 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:40:14.796 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:40:15 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:40:15 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:40:15 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:40:15.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:40:15 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 09:40:15 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:40:15 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:40:15 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90e4004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:40:15 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:40:15 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90d8003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:40:16 compute-1 ceph-mon[80009]: pgmap v390: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:40:16 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:40:16 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90d4003820 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:40:16 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:40:16 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:40:16 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:40:16.799 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:40:17 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:40:17 compute-1 ceph-mon[80009]: pgmap v391: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:40:17 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:40:17 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:40:17 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:40:17.225 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:40:17 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:40:17 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90f0001bf0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:40:17 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:40:17 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90e4004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:40:18 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:40:18 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90d8003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:40:18 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:40:18 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:40:18 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:40:18 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:40:18.801 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:40:19 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:40:19 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 24 09:40:19 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:40:19.228 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 24 09:40:19 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:40:19 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90d4003820 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:40:19 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:40:19 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90f0002ab0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:40:19 compute-1 ceph-mon[80009]: pgmap v392: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 24 09:40:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:40:20.038 142336 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 09:40:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:40:20.038 142336 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 09:40:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:40:20.038 142336 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 09:40:20 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:40:20 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90e4004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:40:20 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:40:20 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 24 09:40:20 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:40:20.804 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 24 09:40:21 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:40:21 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:40:21 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:40:21.231 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:40:21 compute-1 podman[165957]: 2025-11-24 09:40:21.301782425 +0000 UTC m=+0.047114749 container health_status 6794d9d2f16f865643977633ea2bfec0506af6c4f15262c278b71a4c0754ea0b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent)
Nov 24 09:40:21 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:40:21 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90d8003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:40:21 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:40:21 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90d4003820 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:40:21 compute-1 sudo[166332]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 09:40:21 compute-1 sudo[166332]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:40:21 compute-1 sudo[166332]: pam_unix(sudo:session): session closed for user root
Nov 24 09:40:22 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:40:22 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90fc001f00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:40:22 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:40:22 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:40:22 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:40:22.808 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:40:23 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:40:23 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:40:23 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:40:23.234 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:40:23 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:40:23 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90f0002ab0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:40:23 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:40:23 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90d8003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:40:24 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:40:24 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:40:24 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90d4003820 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:40:24 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:40:24 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:40:24 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:40:24.811 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:40:25 compute-1 sudo[166426]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 09:40:25 compute-1 sudo[166426]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:40:25 compute-1 sudo[166426]: pam_unix(sudo:session): session closed for user root
Nov 24 09:40:25 compute-1 sudo[166451]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ls
Nov 24 09:40:25 compute-1 sudo[166451]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:40:25 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:40:25 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:40:25 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:40:25.237 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:40:25 compute-1 ceph-mon[80009]: pgmap v393: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:40:25 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:40:25 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90fc001f00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:40:25 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:40:25 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90fc001f00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:40:25 compute-1 podman[166549]: 2025-11-24 09:40:25.791687537 +0000 UTC m=+0.083138017 container exec fca3d6a645ca50145f34396c21cf8798c75622ec7e27bb7d7b9d2df471762abc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-crash-compute-1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 24 09:40:25 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Nov 24 09:40:25 compute-1 ceph-mon[80009]: log_channel(audit) log [INF] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 24 09:40:25 compute-1 podman[166549]: 2025-11-24 09:40:25.92189452 +0000 UTC m=+0.213344990 container exec_died fca3d6a645ca50145f34396c21cf8798c75622ec7e27bb7d7b9d2df471762abc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-crash-compute-1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 09:40:26 compute-1 ceph-mon[80009]: pgmap v394: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:40:26 compute-1 ceph-mon[80009]: pgmap v395: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:40:26 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 24 09:40:26 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 24 09:40:26 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:40:26 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90d8003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:40:26 compute-1 podman[166688]: 2025-11-24 09:40:26.376653494 +0000 UTC m=+0.048568305 container exec 8385dba62896146966763f0bcd6866f05f5474182998a6b8c2dabcbf77545f8c (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-node-exporter-compute-1, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 24 09:40:26 compute-1 podman[166688]: 2025-11-24 09:40:26.412768074 +0000 UTC m=+0.084682885 container exec_died 8385dba62896146966763f0bcd6866f05f5474182998a6b8c2dabcbf77545f8c (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-node-exporter-compute-1, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 24 09:40:26 compute-1 podman[166763]: 2025-11-24 09:40:26.658265374 +0000 UTC m=+0.059258345 container exec 72f08d8220aebf8a177859741b959495fc0d990644c83dbaa6c96d6a6ae331e3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default)
Nov 24 09:40:26 compute-1 podman[166763]: 2025-11-24 09:40:26.670708128 +0000 UTC m=+0.071701089 container exec_died 72f08d8220aebf8a177859741b959495fc0d990644c83dbaa6c96d6a6ae331e3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 09:40:26 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:40:26 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:40:26 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:40:26.813 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:40:26 compute-1 podman[166829]: 2025-11-24 09:40:26.950013049 +0000 UTC m=+0.145304855 container exec 5e659f329edd66b319b97f09144add025da99dc20b0b6d44046c2f8d632eb914 (image=quay.io/ceph/haproxy:2.3, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-haproxy-nfs-cephfs-compute-1-rsdpvy)
Nov 24 09:40:27 compute-1 podman[166851]: 2025-11-24 09:40:27.02662491 +0000 UTC m=+0.055668214 container exec_died 5e659f329edd66b319b97f09144add025da99dc20b0b6d44046c2f8d632eb914 (image=quay.io/ceph/haproxy:2.3, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-haproxy-nfs-cephfs-compute-1-rsdpvy)
Nov 24 09:40:27 compute-1 podman[166829]: 2025-11-24 09:40:27.125358119 +0000 UTC m=+0.320649905 container exec_died 5e659f329edd66b319b97f09144add025da99dc20b0b6d44046c2f8d632eb914 (image=quay.io/ceph/haproxy:2.3, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-haproxy-nfs-cephfs-compute-1-rsdpvy)
Nov 24 09:40:27 compute-1 ceph-mon[80009]: pgmap v396: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:40:27 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:40:27 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:40:27 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:40:27.240 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:40:27 compute-1 podman[166898]: 2025-11-24 09:40:27.313921563 +0000 UTC m=+0.050408742 container exec b150f4574d15a215dc003733c271f0cef75e4de7b269181ad25614a88f483866 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-keepalived-nfs-cephfs-compute-1-vrgskq, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.buildah.version=1.28.2, release=1793, build-date=2023-02-22T09:23:20, description=keepalived for Ceph, summary=Provides keepalived on RHEL 9 for Ceph., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.display-name=Keepalived on RHEL 9, distribution-scope=public, architecture=x86_64, com.redhat.component=keepalived-container, name=keepalived, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=Ceph keepalived, vendor=Red Hat, Inc., version=2.2.4, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.openshift.expose-services=)
Nov 24 09:40:27 compute-1 podman[166898]: 2025-11-24 09:40:27.326775197 +0000 UTC m=+0.063262366 container exec_died b150f4574d15a215dc003733c271f0cef75e4de7b269181ad25614a88f483866 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-keepalived-nfs-cephfs-compute-1-vrgskq, vendor=Red Hat, Inc., version=2.2.4, summary=Provides keepalived on RHEL 9 for Ceph., architecture=x86_64, io.k8s.display-name=Keepalived on RHEL 9, io.openshift.expose-services=, io.buildah.version=1.28.2, description=keepalived for Ceph, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, com.redhat.component=keepalived-container, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, distribution-scope=public, release=1793, name=keepalived, vcs-type=git, io.openshift.tags=Ceph keepalived, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, build-date=2023-02-22T09:23:20, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Nov 24 09:40:27 compute-1 sudo[166451]: pam_unix(sudo:session): session closed for user root
Nov 24 09:40:27 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Nov 24 09:40:27 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Nov 24 09:40:27 compute-1 sudo[166928]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 09:40:27 compute-1 sudo[166928]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:40:27 compute-1 sudo[166928]: pam_unix(sudo:session): session closed for user root
Nov 24 09:40:27 compute-1 sudo[166953]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Nov 24 09:40:27 compute-1 sudo[166953]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:40:27 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:40:27 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90d4003820 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:40:27 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:40:27 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90fc001f00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:40:28 compute-1 sudo[166953]: pam_unix(sudo:session): session closed for user root
Nov 24 09:40:28 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0)
Nov 24 09:40:28 compute-1 ceph-mon[80009]: log_channel(audit) log [INF] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Nov 24 09:40:28 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:40:28 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90fc001f00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:40:28 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Nov 24 09:40:28 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:40:28 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:40:28 compute-1 ceph-mon[80009]: pgmap v397: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 24 09:40:28 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Nov 24 09:40:28 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Nov 24 09:40:28 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:40:28 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:40:28 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:40:28.824 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:40:28 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Nov 24 09:40:29 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:40:29 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:40:29 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:40:29 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:40:29.243 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:40:29 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:40:29 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90d8003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:40:29 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:40:29 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90d4003820 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:40:29 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0)
Nov 24 09:40:29 compute-1 ceph-mon[80009]: log_channel(audit) log [INF] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Nov 24 09:40:29 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:40:29 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:40:29 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 24 09:40:29 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 09:40:29 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Nov 24 09:40:29 compute-1 ceph-mon[80009]: log_channel(audit) log [INF] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 09:40:29 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Nov 24 09:40:29 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Nov 24 09:40:29 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Nov 24 09:40:29 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 09:40:29 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Nov 24 09:40:29 compute-1 ceph-mon[80009]: log_channel(audit) log [INF] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 09:40:29 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 24 09:40:29 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 09:40:30 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:40:30 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90fc001f00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:40:30 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 09:40:30 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:40:30 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:40:30 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:40:30 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:40:30.827 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:40:30 compute-1 ceph-mon[80009]: pgmap v398: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:40:30 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Nov 24 09:40:30 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Nov 24 09:40:30 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 09:40:30 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 09:40:30 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:40:30 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:40:30 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 09:40:30 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 09:40:30 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 09:40:30 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:40:31 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:40:31 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:40:31 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:40:31.246 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:40:31 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:40:31 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90fc001f00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:40:31 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:40:31 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90d8003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:40:32 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:40:32 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90d4003820 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:40:32 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:40:32 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 09:40:32 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:40:32.829 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 09:40:32 compute-1 ceph-mon[80009]: pgmap v399: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:40:33 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:40:33 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 09:40:33 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:40:33.249 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 09:40:33 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:40:33 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90fc001f00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:40:33 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:40:33 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90fc001f00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:40:34 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:40:34 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:40:34 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90d8003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:40:34 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 24 09:40:34 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 24 09:40:34 compute-1 sudo[167021]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 24 09:40:34 compute-1 sudo[167021]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:40:34 compute-1 sudo[167021]: pam_unix(sudo:session): session closed for user root
Nov 24 09:40:34 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:40:34 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:40:34 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:40:34.832 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:40:35 compute-1 ceph-mon[80009]: pgmap v400: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:40:35 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:40:35 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:40:35 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:40:35 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:40:35 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:40:35.252 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:40:35 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:40:35 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90d4003820 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:40:35 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:40:35 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90fc001f00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:40:36 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:40:36 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90fc001f00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:40:36 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:40:36 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:40:36 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:40:36.835 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:40:37 compute-1 ceph-mon[80009]: pgmap v401: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:40:37 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:40:37 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 09:40:37 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:40:37.255 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 09:40:37 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:40:37 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90d8003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:40:37 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:40:37 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90d4003820 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:40:38 compute-1 kernel: SELinux:  Converting 2773 SID table entries...
Nov 24 09:40:38 compute-1 kernel: SELinux:  policy capability network_peer_controls=1
Nov 24 09:40:38 compute-1 kernel: SELinux:  policy capability open_perms=1
Nov 24 09:40:38 compute-1 kernel: SELinux:  policy capability extended_socket_class=1
Nov 24 09:40:38 compute-1 kernel: SELinux:  policy capability always_check_network=0
Nov 24 09:40:38 compute-1 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 24 09:40:38 compute-1 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 24 09:40:38 compute-1 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 24 09:40:38 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:40:38 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90fc001f00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:40:38 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:40:38 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:40:38 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:40:38.837 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:40:39 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:40:39 compute-1 groupadd[167060]: group added to /etc/group: name=dnsmasq, GID=992
Nov 24 09:40:39 compute-1 groupadd[167060]: group added to /etc/gshadow: name=dnsmasq
Nov 24 09:40:39 compute-1 ceph-mon[80009]: pgmap v402: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 24 09:40:39 compute-1 groupadd[167060]: new group: name=dnsmasq, GID=992
Nov 24 09:40:39 compute-1 useradd[167067]: new user: name=dnsmasq, UID=992, GID=992, home=/var/lib/dnsmasq, shell=/usr/sbin/nologin, from=none
Nov 24 09:40:39 compute-1 dbus-broker-launch[791]: Noticed file-system modification, trigger reload.
Nov 24 09:40:39 compute-1 dbus-broker-launch[809]: avc:  op=load_policy lsm=selinux seqno=14 res=1
Nov 24 09:40:39 compute-1 dbus-broker-launch[791]: Noticed file-system modification, trigger reload.
Nov 24 09:40:39 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:40:39 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:40:39 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:40:39.258 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:40:39 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:40:39 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90f0003940 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:40:39 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:40:39 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90d8003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:40:40 compute-1 groupadd[167081]: group added to /etc/group: name=clevis, GID=991
Nov 24 09:40:40 compute-1 groupadd[167081]: group added to /etc/gshadow: name=clevis
Nov 24 09:40:40 compute-1 groupadd[167081]: new group: name=clevis, GID=991
Nov 24 09:40:40 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:40:40 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90d4003820 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:40:40 compute-1 useradd[167088]: new user: name=clevis, UID=991, GID=991, home=/var/cache/clevis, shell=/usr/sbin/nologin, from=none
Nov 24 09:40:40 compute-1 usermod[167098]: add 'clevis' to group 'tss'
Nov 24 09:40:40 compute-1 usermod[167098]: add 'clevis' to shadow group 'tss'
Nov 24 09:40:40 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:40:40 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:40:40 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:40:40.840 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:40:41 compute-1 ceph-mon[80009]: pgmap v403: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:40:41 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:40:41 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:40:41 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:40:41.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:40:41 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:40:41 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90fc003fc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:40:41 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:40:41 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90f0004260 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:40:41 compute-1 sudo[167119]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 09:40:41 compute-1 sudo[167119]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:40:41 compute-1 sudo[167119]: pam_unix(sudo:session): session closed for user root
Nov 24 09:40:42 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:40:42 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90d8003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:40:42 compute-1 polkitd[43353]: Reloading rules
Nov 24 09:40:42 compute-1 polkitd[43353]: Collecting garbage unconditionally...
Nov 24 09:40:42 compute-1 polkitd[43353]: Loading rules from directory /etc/polkit-1/rules.d
Nov 24 09:40:42 compute-1 polkitd[43353]: Loading rules from directory /usr/share/polkit-1/rules.d
Nov 24 09:40:42 compute-1 polkitd[43353]: Finished loading, compiling and executing 3 rules
Nov 24 09:40:42 compute-1 polkitd[43353]: Reloading rules
Nov 24 09:40:42 compute-1 polkitd[43353]: Collecting garbage unconditionally...
Nov 24 09:40:42 compute-1 polkitd[43353]: Loading rules from directory /etc/polkit-1/rules.d
Nov 24 09:40:42 compute-1 polkitd[43353]: Loading rules from directory /usr/share/polkit-1/rules.d
Nov 24 09:40:42 compute-1 polkitd[43353]: Finished loading, compiling and executing 3 rules
Nov 24 09:40:42 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:40:42 compute-1 podman[167149]: 2025-11-24 09:40:42.84565527 +0000 UTC m=+0.213938221 container health_status c4a7fece2a8abbd773f184380ebf0443df5298e1c3e7a580495db5c46749acd2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3)
Nov 24 09:40:42 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:40:42 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:40:42.844 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:40:43 compute-1 ceph-mon[80009]: pgmap v404: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:40:43 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:40:43 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.002000047s ======
Nov 24 09:40:43 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:40:43.265 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000047s
Nov 24 09:40:43 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:40:43 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90d4004140 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:40:43 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:40:43 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90fc003fc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:40:44 compute-1 groupadd[167338]: group added to /etc/group: name=ceph, GID=167
Nov 24 09:40:44 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:40:44 compute-1 groupadd[167338]: group added to /etc/gshadow: name=ceph
Nov 24 09:40:44 compute-1 groupadd[167338]: new group: name=ceph, GID=167
Nov 24 09:40:44 compute-1 useradd[167344]: new user: name=ceph, UID=167, GID=167, home=/var/lib/ceph, shell=/sbin/nologin, from=none
Nov 24 09:40:44 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:40:44 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90f0004260 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:40:44 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:40:44 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:40:44 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:40:44.846 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:40:45 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:40:45 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:40:45 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:40:45.269 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:40:45 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 09:40:45 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:40:45 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:40:45 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90d8003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:40:45 compute-1 ceph-mon[80009]: pgmap v405: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:40:45 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:40:45 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90d4004140 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:40:46 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:40:46 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90e0001090 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:40:46 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:40:46 compute-1 ceph-mon[80009]: pgmap v406: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:40:46 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:40:46 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:40:46 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:40:46.848 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:40:46 compute-1 systemd[1]: Stopping OpenSSH server daemon...
Nov 24 09:40:46 compute-1 sshd[1006]: Received signal 15; terminating.
Nov 24 09:40:46 compute-1 systemd[1]: sshd.service: Deactivated successfully.
Nov 24 09:40:46 compute-1 systemd[1]: Stopped OpenSSH server daemon.
Nov 24 09:40:46 compute-1 systemd[1]: sshd.service: Consumed 2.103s CPU time, read 32.0K from disk, written 0B to disk.
Nov 24 09:40:46 compute-1 systemd[1]: Stopped target sshd-keygen.target.
Nov 24 09:40:46 compute-1 systemd[1]: Stopping sshd-keygen.target...
Nov 24 09:40:46 compute-1 systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Nov 24 09:40:46 compute-1 systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Nov 24 09:40:46 compute-1 systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Nov 24 09:40:46 compute-1 systemd[1]: Reached target sshd-keygen.target.
Nov 24 09:40:46 compute-1 systemd[1]: Starting OpenSSH server daemon...
Nov 24 09:40:46 compute-1 sshd[167991]: Server listening on 0.0.0.0 port 22.
Nov 24 09:40:46 compute-1 sshd[167991]: Server listening on :: port 22.
Nov 24 09:40:46 compute-1 systemd[1]: Started OpenSSH server daemon.
Nov 24 09:40:47 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:40:47 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 09:40:47 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:40:47.272 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 09:40:47 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:40:47 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90f0004260 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:40:47 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:40:47 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90d8003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:40:48 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:40:48 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90d4004140 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:40:48 compute-1 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 24 09:40:48 compute-1 systemd[1]: Starting man-db-cache-update.service...
Nov 24 09:40:48 compute-1 ceph-mon[80009]: pgmap v407: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Nov 24 09:40:48 compute-1 systemd[1]: Reloading.
Nov 24 09:40:48 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:40:48 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:40:48 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:40:48.856 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:40:48 compute-1 systemd-rc-local-generator[168246]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 09:40:48 compute-1 systemd-sysv-generator[168250]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 09:40:49 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:40:49 compute-1 systemd[1]: Queuing reload/restart jobs for marked units…
Nov 24 09:40:49 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:40:49 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:40:49 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:40:49.276 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:40:49 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:40:49 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90d4004140 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:40:49 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:40:49 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90f0004260 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:40:50 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:40:50 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90f0004260 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:40:50 compute-1 ceph-mon[80009]: pgmap v408: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Nov 24 09:40:50 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:40:50 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:40:50 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:40:50.858 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:40:51 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-haproxy-nfs-cephfs-compute-1-rsdpvy[85975]: [WARNING] 327/094051 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Nov 24 09:40:51 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:40:51 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:40:51 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:40:51.279 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:40:51 compute-1 sudo[149044]: pam_unix(sudo:session): session closed for user root
Nov 24 09:40:51 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:40:51 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90d8003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:40:51 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:40:51 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90d4004140 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:40:52 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:40:52 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90f0004260 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:40:52 compute-1 podman[172376]: 2025-11-24 09:40:52.318734814 +0000 UTC m=+0.056734971 container health_status 6794d9d2f16f865643977633ea2bfec0506af6c4f15262c278b71a4c0754ea0b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118)
Nov 24 09:40:52 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:40:52 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:40:52 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:40:52.860 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:40:52 compute-1 ceph-mon[80009]: pgmap v409: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Nov 24 09:40:53 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:40:53 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:40:53 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:40:53.282 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:40:53 compute-1 sshd-session[173527]: Connection closed by 159.65.46.209 port 54844
Nov 24 09:40:53 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:40:53 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90e0001090 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:40:53 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:40:53 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90d8003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:40:54 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:40:54 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:40:54 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90e40014d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:40:54 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:40:54 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:40:54 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:40:54.865 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:40:54 compute-1 ceph-mon[80009]: pgmap v410: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Nov 24 09:40:55 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:40:55 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:40:55 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:40:55.286 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:40:55 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:40:55 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90f0004260 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:40:55 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:40:55 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90e0001fc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:40:56 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:40:56 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90d8003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:40:56 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:40:56 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:40:56 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:40:56.868 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:40:57 compute-1 ceph-mon[80009]: pgmap v411: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Nov 24 09:40:57 compute-1 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 24 09:40:57 compute-1 systemd[1]: Finished man-db-cache-update.service.
Nov 24 09:40:57 compute-1 systemd[1]: man-db-cache-update.service: Consumed 10.367s CPU time.
Nov 24 09:40:57 compute-1 systemd[1]: run-r73e53c4d84894e42b340f1be3eb04c6a.service: Deactivated successfully.
Nov 24 09:40:57 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:40:57 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 09:40:57 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:40:57.289 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 09:40:57 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:40:57 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90e40014d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:40:57 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:40:57 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90f0004260 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:40:58 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:40:58 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90e0001fc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:40:58 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:40:58 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 09:40:58 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:40:58.870 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 09:40:59 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:40:59 compute-1 ceph-mon[80009]: pgmap v412: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Nov 24 09:40:59 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:40:59 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:40:59 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:40:59.292 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:40:59 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:40:59 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90d8003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:40:59 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:40:59 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90e40014d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:41:00 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:41:00 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 09:41:00 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:41:00 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90f0004260 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:41:00 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 09:41:00 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:41:00 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:41:00 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:41:00 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:41:00.871 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:41:01 compute-1 ceph-mon[80009]: pgmap v413: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Nov 24 09:41:01 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:41:01 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:41:01 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:41:01 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:41:01.295 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:41:01 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:41:01 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90e0001fc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:41:01 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:41:01 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90d8003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:41:01 compute-1 sudo[176681]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 09:41:01 compute-1 sudo[176681]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:41:01 compute-1 sudo[176681]: pam_unix(sudo:session): session closed for user root
Nov 24 09:41:02 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:41:02 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90e4001670 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:41:02 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:41:02 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:41:02 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:41:02.873 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:41:03 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:41:03 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 09:41:03 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:41:03 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 09:41:03 compute-1 ceph-mon[80009]: pgmap v414: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 596 B/s wr, 1 op/s
Nov 24 09:41:03 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:41:03 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:41:03 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:41:03.298 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:41:03 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:41:03 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90f0004260 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:41:03 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:41:03 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90e0001fc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:41:04 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:41:04 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:41:04 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90d8003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:41:04 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:41:04 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:41:04 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:41:04.876 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:41:05 compute-1 ceph-mon[80009]: pgmap v415: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 596 B/s wr, 1 op/s
Nov 24 09:41:05 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:41:05 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:41:05 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:41:05.301 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:41:05 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:41:05 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90e4003710 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:41:05 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:41:05 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90f0004260 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:41:06 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:41:06 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Nov 24 09:41:06 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:41:06 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90e0001fc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:41:06 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:41:06 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:41:06 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:41:06.877 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:41:07 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:41:07 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:41:07 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:41:07.305 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:41:07 compute-1 ceph-mon[80009]: pgmap v416: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 596 B/s wr, 1 op/s
Nov 24 09:41:07 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:41:07 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90d8003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:41:07 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:41:07 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90e4003710 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:41:08 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:41:08 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90f0004260 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:41:08 compute-1 ceph-mon[80009]: pgmap v417: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 1022 B/s wr, 3 op/s
Nov 24 09:41:08 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:41:08 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:41:08 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:41:08.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:41:09 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:41:09 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:41:09 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:41:09 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:41:09.308 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:41:09 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:41:09 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90e00043b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:41:09 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:41:09 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90d8003c30 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:41:10 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:41:10 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90e4004030 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:41:10 compute-1 ceph-mon[80009]: pgmap v418: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 1022 B/s wr, 3 op/s
Nov 24 09:41:10 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:41:10 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:41:10 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:41:10.880 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:41:11 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-haproxy-nfs-cephfs-compute-1-rsdpvy[85975]: [WARNING] 327/094111 (4) : Server backend/nfs.cephfs.2 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Nov 24 09:41:11 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:41:11 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:41:11 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:41:11.310 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:41:11 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:41:11 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90f0004260 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:41:11 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:41:11 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90e00043b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:41:12 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:41:12 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90d8003c50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:41:12 compute-1 ceph-mon[80009]: pgmap v419: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 1023 B/s wr, 3 op/s
Nov 24 09:41:12 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:41:12 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:41:12 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:41:12.882 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:41:13 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:41:13 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:41:13 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:41:13.313 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:41:13 compute-1 podman[176711]: 2025-11-24 09:41:13.360180099 +0000 UTC m=+0.101554041 container health_status c4a7fece2a8abbd773f184380ebf0443df5298e1c3e7a580495db5c46749acd2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251118, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller)
Nov 24 09:41:13 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:41:13 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90e4004030 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:41:13 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:41:13 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90f0004260 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:41:14 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:41:14 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:41:14 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90e00043b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:41:14 compute-1 ceph-mon[80009]: pgmap v420: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 426 B/s wr, 1 op/s
Nov 24 09:41:14 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:41:14 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:41:14 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:41:14.885 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:41:15 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:41:15 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:41:15 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:41:15.318 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:41:15 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 09:41:15 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:41:15 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:41:15 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90e00043b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:41:15 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:41:15 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90e4004030 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:41:15 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
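[Editor's note] The "osd blocklist ls" dispatches are the cephadm mgr (mgr.compute-0.mauvni on 192.168.122.100) polling on its regular serve loop; the peon records each dispatch both on the audit channel and in the cluster log, which is why the entries appear twice. The same query from the CLI:

    # list current client blocklist entries, as the mgr does periodically
    ceph osd blocklist ls --format json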
Nov 24 09:41:16 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:41:16 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90f0004260 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:41:16 compute-1 ceph-mon[80009]: pgmap v421: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 426 B/s wr, 1 op/s
Nov 24 09:41:16 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:41:16 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:41:16 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:41:16.887 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:41:17 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:41:17 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:41:17 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:41:17.321 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:41:17 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:41:17 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90d8003c90 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:41:17 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:41:17 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90e00043b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:41:18 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:41:18 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90fc002220 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:41:18 compute-1 ceph-mon[80009]: pgmap v422: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 426 B/s wr, 2 op/s
Nov 24 09:41:18 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:41:18 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:41:18 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:41:18.889 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:41:19 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:41:19 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:41:19 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:41:19 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:41:19.325 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:41:19 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:41:19 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90f0004260 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:41:19 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:41:19 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90f0004260 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:41:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:41:20.039 142336 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 09:41:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:41:20.039 142336 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 09:41:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:41:20.039 142336 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 09:41:20 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:41:20 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90f0004260 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:41:20 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:41:20 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:41:20 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:41:20.889 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:41:21 compute-1 ceph-mon[80009]: pgmap v423: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Nov 24 09:41:21 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:41:21 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:41:21 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:41:21.328 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:41:21 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:41:21 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90fc002220 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:41:21 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:41:21 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90d8003cd0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:41:22 compute-1 sudo[176743]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 09:41:22 compute-1 sudo[176743]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:41:22 compute-1 sudo[176743]: pam_unix(sudo:session): session closed for user root
Nov 24 09:41:22 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:41:22 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90e00043b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:41:22 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:41:22 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:41:22 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:41:22.891 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:41:23 compute-1 ceph-mon[80009]: pgmap v424: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Nov 24 09:41:23 compute-1 podman[176768]: 2025-11-24 09:41:23.306514124 +0000 UTC m=+0.049372111 container health_status 6794d9d2f16f865643977633ea2bfec0506af6c4f15262c278b71a4c0754ea0b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20251118)
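[Editor's note] The podman health_status events come from the per-container healthcheck timer, which runs the configured test ('/openstack/healthcheck', bind-mounted into the container) and tracks the failing streak. The same check can be run on demand:

    # run the configured healthcheck once; exit status 0 means healthy
    podman healthcheck run ovn_metadata_agent && echo healthy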
Nov 24 09:41:23 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:41:23 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:41:23 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:41:23.331 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:41:23 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:41:23 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90f0004260 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:41:23 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:41:23 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90fc002220 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:41:24 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:41:24 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:41:24 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90d8003cf0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:41:24 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:41:24 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:41:24 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:41:24.893 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:41:25 compute-1 ceph-mon[80009]: pgmap v425: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:41:25 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:41:25 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:41:25 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:41:25.335 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:41:25 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:41:25 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90e00043b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:41:25 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:41:25 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90f0004260 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:41:26 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:41:26 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90d4001090 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:41:26 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:41:26 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:41:26 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:41:26.895 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:41:27 compute-1 ceph-mon[80009]: pgmap v426: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:41:27 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:41:27 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:41:27 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:41:27.339 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:41:27 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:41:27 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90d8003d80 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:41:27 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:41:27 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90e00043b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:41:28 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:41:28 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90f0004260 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:41:28 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:41:28 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:41:28 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:41:28.897 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:41:29 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:41:29 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:41:29 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:41:29 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:41:29.342 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:41:29 compute-1 ceph-mon[80009]: pgmap v427: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Nov 24 09:41:29 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:41:29 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90d4001090 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:41:29 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:41:29 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90d8003da0 fd 38 proxy ignored for local
Nov 24 09:41:29 compute-1 kernel: ganesha.nfsd[149236]: segfault at 50 ip 00007f91affde32e sp 00007f916cff8210 error 4 in libntirpc.so.5.8[7f91affc3000+2c000] likely on CPU 6 (core 0, socket 6)
Nov 24 09:41:29 compute-1 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Nov 24 09:41:29 compute-1 systemd[1]: Started Process Core Dump (PID 176791/UID 0).
Nov 24 09:41:30 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 09:41:30 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:41:30 compute-1 ceph-mon[80009]: pgmap v428: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:41:30 compute-1 systemd-coredump[176792]: Process 148437 (ganesha.nfsd) of user 0 dumped core.
                                                    
                                                    Stack trace of thread 53:
                                                    #0  0x00007f91affde32e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)
                                                    ELF object binary architecture: AMD x86-64
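[Editor's note] The kernel line and the core dump describe one crash from two angles: the kernel names the faulting thread (TID 149236) while systemd-coredump files the dump under the thread-group leader (PID 148437), and the +0x2232e in the stack frame is the file-relative offset inside libntirpc.so.5.8, which is exactly what addr2line wants. The fault address 0x50, read with the bytes after the <45> marker in the Code: dump (45 8b 65 50, i.e. mov 0x50(%r13),%r12d), is consistent with reading a field at offset 0x50 through a NULL %r13 — a freed or never-initialized handle, plausibly on the same connection-teardown path all the "will set dead" events exercise. To dig further, assuming matching debuginfo packages are installable:

    # locate and inspect the dump systemd-coredump captured
    coredumpctl list ganesha.nfsd
    coredumpctl info 148437
    # resolve the library offset to a function/source line (needs libntirpc debuginfo)
    addr2line -f -e /usr/lib64/libntirpc.so.5.8 0x2232e
    # or attach gdb directly to the core
    coredumpctl debug 148437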
Nov 24 09:41:30 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:41:30 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:41:30 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:41:30.898 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:41:30 compute-1 systemd[1]: systemd-coredump@5-176791-0.service: Deactivated successfully.
Nov 24 09:41:30 compute-1 systemd[1]: systemd-coredump@5-176791-0.service: Consumed 1.151s CPU time.
Nov 24 09:41:30 compute-1 podman[176850]: 2025-11-24 09:41:30.985813757 +0000 UTC m=+0.024365173 container died 72f08d8220aebf8a177859741b959495fc0d990644c83dbaa6c96d6a6ae331e3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 09:41:31 compute-1 systemd[1]: var-lib-containers-storage-overlay-ca848b0992b5514535531e6a96c6a662d50b4b69145fd2e2f67a3d9272fea15d-merged.mount: Deactivated successfully.
Nov 24 09:41:31 compute-1 podman[176850]: 2025-11-24 09:41:31.029864728 +0000 UTC m=+0.068416124 container remove 72f08d8220aebf8a177859741b959495fc0d990644c83dbaa6c96d6a6ae331e3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=squid)
Nov 24 09:41:31 compute-1 systemd[1]: ceph-84a084c3-61a7-5de7-8207-1f88efa59a64@nfs.cephfs.0.0.compute-1.vvoanr.service: Main process exited, code=exited, status=139/n/a
Nov 24 09:41:31 compute-1 systemd[1]: ceph-84a084c3-61a7-5de7-8207-1f88efa59a64@nfs.cephfs.0.0.compute-1.vvoanr.service: Failed with result 'exit-code'.
Nov 24 09:41:31 compute-1 systemd[1]: ceph-84a084c3-61a7-5de7-8207-1f88efa59a64@nfs.cephfs.0.0.compute-1.vvoanr.service: Consumed 1.622s CPU time.
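[Editor's note] status=139 is the shell-style encoding of a signal death, 128 + 11 (SIGSEGV): the unit's main process is the container wrapper, so the segfault inside the container surfaces as an exit code rather than as code=killed. systemd marks the unit failed and, per its Restart= policy, schedules the restart seen at 09:41:41 (counter 6). The same facts can be read back from the unit:

    # Result, restart counter and main-process status for the crashed unit
    systemctl show -p Result,NRestarts,ExecMainStatus \
        'ceph-84a084c3-61a7-5de7-8207-1f88efa59a64@nfs.cephfs.0.0.compute-1.vvoanr.service'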
Nov 24 09:41:31 compute-1 sudo[176966]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xsjyslzfshwpcsgzjtzmyohkaneknvcc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977290.7015622-969-105654588928954/AnsiballZ_systemd.py'
Nov 24 09:41:31 compute-1 sudo[176966]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:41:31 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:41:31 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:41:31 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:41:31.345 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:41:31 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:41:31 compute-1 python3.9[176968]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
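[Editor's note] From here the zuul user's Ansible run (the AnsiballZ_systemd.py payloads) reconfigures libvirt: the monolithic libvirtd and its TCP/TLS sockets are stopped and masked, then the modular virt*d services and sockets are enabled further down. The logged parameters map one-to-one onto module arguments; an ad-hoc equivalent of this first task would be:

    # same module call the log records for libvirtd (run with become)
    ansible localhost -b -m ansible.builtin.systemd \
        -a 'name=libvirtd state=stopped enabled=false masked=true'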
Nov 24 09:41:31 compute-1 systemd[1]: Reloading.
Nov 24 09:41:31 compute-1 systemd-rc-local-generator[177001]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 09:41:31 compute-1 systemd-sysv-generator[177004]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
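[Editor's note] Both generator warnings repeat on every daemon-reload below and are benign. The rc-local one can be silenced, if rc.local is actually meant to run, by making the script executable; the network one persists until the initscripts package ships a native unit:

    # make the generator pick up rc.local on subsequent reloads
    chmod +x /etc/rc.d/rc.local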
Nov 24 09:41:31 compute-1 sudo[176966]: pam_unix(sudo:session): session closed for user root
Nov 24 09:41:32 compute-1 sudo[177157]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pnrirchzxosufgojnzsvarvmzvfanvcu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977292.0585134-969-108139180489065/AnsiballZ_systemd.py'
Nov 24 09:41:32 compute-1 sudo[177157]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:41:32 compute-1 ceph-mon[80009]: pgmap v429: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Nov 24 09:41:32 compute-1 python3.9[177159]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 24 09:41:32 compute-1 systemd[1]: Reloading.
Nov 24 09:41:32 compute-1 systemd-rc-local-generator[177188]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 09:41:32 compute-1 systemd-sysv-generator[177191]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 09:41:32 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:41:32 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:41:32 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:41:32.899 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:41:32 compute-1 sudo[177157]: pam_unix(sudo:session): session closed for user root
Nov 24 09:41:33 compute-1 sudo[177346]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vbqrqruhinavzysgcmwmqmpbqtrjaerb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977293.0954018-969-56587498920682/AnsiballZ_systemd.py'
Nov 24 09:41:33 compute-1 sudo[177346]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:41:33 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:41:33 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:41:33 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:41:33.348 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:41:33 compute-1 python3.9[177348]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tls.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 24 09:41:33 compute-1 systemd[1]: Reloading.
Nov 24 09:41:33 compute-1 systemd-sysv-generator[177380]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 09:41:33 compute-1 systemd-rc-local-generator[177377]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 09:41:34 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:41:34 compute-1 sudo[177346]: pam_unix(sudo:session): session closed for user root
Nov 24 09:41:34 compute-1 sudo[177537]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dcimcbsbkwhjdjdwcmmwevpctkpgdpww ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977294.168399-969-41230436622194/AnsiballZ_systemd.py'
Nov 24 09:41:34 compute-1 sudo[177537]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:41:34 compute-1 ceph-mon[80009]: pgmap v430: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:41:34 compute-1 python3.9[177539]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=virtproxyd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 24 09:41:34 compute-1 sudo[177540]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 09:41:34 compute-1 sudo[177540]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:41:34 compute-1 sudo[177540]: pam_unix(sudo:session): session closed for user root
Nov 24 09:41:34 compute-1 systemd[1]: Reloading.
Nov 24 09:41:34 compute-1 systemd-rc-local-generator[177617]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 09:41:34 compute-1 systemd-sysv-generator[177621]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 09:41:34 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:41:34 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:41:34 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:41:34.900 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:41:35 compute-1 sudo[177567]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Nov 24 09:41:35 compute-1 sudo[177567]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:41:35 compute-1 sudo[177537]: pam_unix(sudo:session): session closed for user root
Nov 24 09:41:35 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:41:35 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:41:35 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:41:35.351 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:41:35 compute-1 sudo[177567]: pam_unix(sudo:session): session closed for user root
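[Editor's note] The ceph-admin sudo bursts are the cephadm orchestrator (driven by mgr.compute-0.mauvni) executing its copied binary over SSH; gather-facts returns the host inventory as JSON. The same subcommand can be run locally:

    # host inventory as the orchestrator collects it (JSON on stdout)
    sudo cephadm gather-facts | head -c 400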
Nov 24 09:41:35 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-haproxy-nfs-cephfs-compute-1-rsdpvy[85975]: [WARNING] 327/094135 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
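[Editor's note] The ingress haproxy notices the crash about four seconds later: its Layer4 check to nfs.cephfs.0 is refused because ganesha is gone (nfs.cephfs.1 follows at 09:41:38, leaving one active server). Per-server state can be read from haproxy's runtime socket; the socket path here is an assumption:

    # dump per-server state for the backend named "backend"
    echo 'show servers state backend' | socat stdio /var/run/haproxy.sock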
Nov 24 09:41:35 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 24 09:41:35 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 09:41:35 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Nov 24 09:41:35 compute-1 ceph-mon[80009]: log_channel(audit) log [INF] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 09:41:35 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Nov 24 09:41:35 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Nov 24 09:41:35 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Nov 24 09:41:35 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 09:41:35 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Nov 24 09:41:35 compute-1 ceph-mon[80009]: log_channel(audit) log [INF] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 09:41:35 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 24 09:41:35 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 09:41:36 compute-1 ceph-mon[80009]: pgmap v431: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:41:36 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 09:41:36 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 09:41:36 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:41:36 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:41:36 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 09:41:36 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 09:41:36 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 09:41:36 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:41:36 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:41:36 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:41:36.905 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:41:37 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:41:37 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:41:37 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:41:37.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:41:38 compute-1 sudo[177811]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mvpskzgdviasqlppecvsfzjjkcxgxguk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977297.9277196-1056-41015809466859/AnsiballZ_systemd.py'
Nov 24 09:41:38 compute-1 sudo[177811]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:41:38 compute-1 python3.9[177813]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 24 09:41:38 compute-1 systemd[1]: Reloading.
Nov 24 09:41:38 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-haproxy-nfs-cephfs-compute-1-rsdpvy[85975]: [WARNING] 327/094138 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 1ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Nov 24 09:41:38 compute-1 systemd-rc-local-generator[177843]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 09:41:38 compute-1 systemd-sysv-generator[177847]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 09:41:38 compute-1 ceph-mon[80009]: pgmap v432: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:41:38 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:41:38 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:41:38 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:41:38.907 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:41:38 compute-1 sudo[177811]: pam_unix(sudo:session): session closed for user root
Nov 24 09:41:39 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:41:39 compute-1 sudo[178001]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lqkynkrblhaocaumdyylqbnvkorvoevc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977299.0537107-1056-258009037414221/AnsiballZ_systemd.py'
Nov 24 09:41:39 compute-1 sudo[178001]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:41:39 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:41:39 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:41:39 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:41:39.357 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:41:39 compute-1 python3.9[178003]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 24 09:41:39 compute-1 systemd[1]: Reloading.
Nov 24 09:41:39 compute-1 systemd-rc-local-generator[178033]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 09:41:39 compute-1 systemd-sysv-generator[178036]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 09:41:39 compute-1 sudo[178001]: pam_unix(sudo:session): session closed for user root
Nov 24 09:41:40 compute-1 sudo[178191]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iullwxfanuoflgagyyczedjkumbsyxdj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977300.1204312-1056-56223343064350/AnsiballZ_systemd.py'
Nov 24 09:41:40 compute-1 sudo[178191]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:41:40 compute-1 python3.9[178193]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 24 09:41:40 compute-1 ceph-mon[80009]: pgmap v433: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 0 op/s
Nov 24 09:41:40 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 24 09:41:40 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 24 09:41:40 compute-1 systemd[1]: Reloading.
Nov 24 09:41:40 compute-1 systemd-rc-local-generator[178225]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 09:41:40 compute-1 systemd-sysv-generator[178230]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 09:41:40 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:41:40 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:41:40 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:41:40.908 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:41:41 compute-1 sudo[178227]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 24 09:41:41 compute-1 sudo[178227]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:41:41 compute-1 sudo[178227]: pam_unix(sudo:session): session closed for user root
Nov 24 09:41:41 compute-1 sudo[178191]: pam_unix(sudo:session): session closed for user root
Nov 24 09:41:41 compute-1 systemd[1]: ceph-84a084c3-61a7-5de7-8207-1f88efa59a64@nfs.cephfs.0.0.compute-1.vvoanr.service: Scheduled restart job, restart counter is at 6.
Nov 24 09:41:41 compute-1 systemd[1]: Stopped Ceph nfs.cephfs.0.0.compute-1.vvoanr for 84a084c3-61a7-5de7-8207-1f88efa59a64.
Nov 24 09:41:41 compute-1 systemd[1]: ceph-84a084c3-61a7-5de7-8207-1f88efa59a64@nfs.cephfs.0.0.compute-1.vvoanr.service: Consumed 1.622s CPU time.
Nov 24 09:41:41 compute-1 systemd[1]: Starting Ceph nfs.cephfs.0.0.compute-1.vvoanr for 84a084c3-61a7-5de7-8207-1f88efa59a64...
Nov 24 09:41:41 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:41:41 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:41:41 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:41:41.361 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:41:41 compute-1 sudo[178461]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-chhqgtvwylysvjbcnjrpiitgumodshkv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977301.1808524-1056-131193897761816/AnsiballZ_systemd.py'
Nov 24 09:41:41 compute-1 sudo[178461]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:41:41 compute-1 podman[178431]: 2025-11-24 09:41:41.447186094 +0000 UTC m=+0.039058379 container create e3c4baedaffcfce23f2147ef6f93604f265d93ee34d2b6a77a0ede860308372e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 24 09:41:41 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d26831df6c5a0f65cfaba17b14bc54bef5a769ed56fa20fd9c93db15c5a4a386/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Nov 24 09:41:41 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d26831df6c5a0f65cfaba17b14bc54bef5a769ed56fa20fd9c93db15c5a4a386/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 09:41:41 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d26831df6c5a0f65cfaba17b14bc54bef5a769ed56fa20fd9c93db15c5a4a386/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 09:41:41 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d26831df6c5a0f65cfaba17b14bc54bef5a769ed56fa20fd9c93db15c5a4a386/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.0.0.compute-1.vvoanr-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 09:41:41 compute-1 podman[178431]: 2025-11-24 09:41:41.51761534 +0000 UTC m=+0.109487645 container init e3c4baedaffcfce23f2147ef6f93604f265d93ee34d2b6a77a0ede860308372e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325)
Nov 24 09:41:41 compute-1 podman[178431]: 2025-11-24 09:41:41.522676925 +0000 UTC m=+0.114549210 container start e3c4baedaffcfce23f2147ef6f93604f265d93ee34d2b6a77a0ede860308372e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 09:41:41 compute-1 podman[178431]: 2025-11-24 09:41:41.428867135 +0000 UTC m=+0.020739430 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 09:41:41 compute-1 bash[178431]: e3c4baedaffcfce23f2147ef6f93604f265d93ee34d2b6a77a0ede860308372e
Nov 24 09:41:41 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[178466]: 24/11/2025 09:41:41 : epoch 69242855 : compute-1 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Nov 24 09:41:41 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[178466]: 24/11/2025 09:41:41 : epoch 69242855 : compute-1 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Nov 24 09:41:41 compute-1 systemd[1]: Started Ceph nfs.cephfs.0.0.compute-1.vvoanr for 84a084c3-61a7-5de7-8207-1f88efa59a64.
Nov 24 09:41:41 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[178466]: 24/11/2025 09:41:41 : epoch 69242855 : compute-1 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Nov 24 09:41:41 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[178466]: 24/11/2025 09:41:41 : epoch 69242855 : compute-1 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Nov 24 09:41:41 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[178466]: 24/11/2025 09:41:41 : epoch 69242855 : compute-1 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Nov 24 09:41:41 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[178466]: 24/11/2025 09:41:41 : epoch 69242855 : compute-1 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Nov 24 09:41:41 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[178466]: 24/11/2025 09:41:41 : epoch 69242855 : compute-1 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Nov 24 09:41:41 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[178466]: 24/11/2025 09:41:41 : epoch 69242855 : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
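[Editor's note] The replacement ganesha (epoch 69242855, versus 692427b8 for the crashed instance) starts into a 90-second grace window, during which clients may reclaim locks and new state is refused. A quick view of the NFS cluster's deployment, assuming the cluster id is "cephfs" as the nfs.cephfs.* unit names suggest:

    # cephadm's summary of the NFS cluster and its backends
    ceph nfs cluster info cephfs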
Nov 24 09:41:41 compute-1 python3.9[178463]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 24 09:41:41 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:41:41 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:41:41 compute-1 sudo[178461]: pam_unix(sudo:session): session closed for user root
Nov 24 09:41:42 compute-1 sudo[178681]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zjorjyamtxrdnbrayeojtvelltftwavj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977301.9405222-1056-226992564200875/AnsiballZ_systemd.py'
Nov 24 09:41:42 compute-1 sudo[178681]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:41:42 compute-1 sudo[178643]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 09:41:42 compute-1 sudo[178643]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:41:42 compute-1 sudo[178643]: pam_unix(sudo:session): session closed for user root
Nov 24 09:41:42 compute-1 python3.9[178686]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 24 09:41:42 compute-1 systemd[1]: Reloading.
Nov 24 09:41:42 compute-1 systemd-rc-local-generator[178716]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 09:41:42 compute-1 systemd-sysv-generator[178720]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 09:41:42 compute-1 ceph-mon[80009]: pgmap v434: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Nov 24 09:41:42 compute-1 sudo[178681]: pam_unix(sudo:session): session closed for user root
Nov 24 09:41:42 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:41:42 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:41:42 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:41:42.908 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:41:43 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:41:43 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:41:43 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:41:43.364 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:41:44 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:41:44 compute-1 podman[178804]: 2025-11-24 09:41:44.355196715 +0000 UTC m=+0.085173239 container health_status c4a7fece2a8abbd773f184380ebf0443df5298e1c3e7a580495db5c46749acd2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.schema-version=1.0)
Nov 24 09:41:44 compute-1 sudo[178903]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cmhutekonkfccdnhkgvzvgvxodbkiwbc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977304.1721256-1164-12066017327730/AnsiballZ_systemd.py'
Nov 24 09:41:44 compute-1 sudo[178903]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:41:44 compute-1 ceph-mon[80009]: pgmap v435: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 0 op/s
Nov 24 09:41:44 compute-1 python3.9[178905]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-tls.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 24 09:41:44 compute-1 systemd[1]: Reloading.
Nov 24 09:41:44 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:41:44 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:41:44 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:41:44.909 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:41:44 compute-1 systemd-rc-local-generator[178935]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 09:41:44 compute-1 systemd-sysv-generator[178938]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 09:41:45 compute-1 systemd[1]: Listening on libvirt proxy daemon socket.
Nov 24 09:41:45 compute-1 systemd[1]: Listening on libvirt proxy daemon TLS IP socket.
Nov 24 09:41:45 compute-1 sudo[178903]: pam_unix(sudo:session): session closed for user root
Nov 24 09:41:45 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:41:45 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:41:45 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:41:45.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:41:45 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 09:41:45 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:41:45 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:41:45 compute-1 sudo[179096]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cxkcfcfnpipgqqlnzsrefefkwbwiyvea ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977305.5750697-1188-177332758341534/AnsiballZ_systemd.py'
Nov 24 09:41:45 compute-1 sudo[179096]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:41:46 compute-1 python3.9[179098]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 24 09:41:46 compute-1 sudo[179096]: pam_unix(sudo:session): session closed for user root
Nov 24 09:41:46 compute-1 sudo[179252]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tuwlzgpheagsjowvpwhkvycciwjoiuir ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977306.315947-1188-5168657988358/AnsiballZ_systemd.py'
Nov 24 09:41:46 compute-1 sudo[179252]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:41:46 compute-1 ceph-mon[80009]: pgmap v436: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 0 op/s
Nov 24 09:41:46 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:41:46 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:41:46 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:41:46.911 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:41:46 compute-1 python3.9[179254]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 24 09:41:47 compute-1 sudo[179252]: pam_unix(sudo:session): session closed for user root
Nov 24 09:41:47 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:41:47 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:41:47 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:41:47.369 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:41:47 compute-1 sudo[179407]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vekstambhkkwztoktebluiibqyybmjee ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977307.148721-1188-129449033913315/AnsiballZ_systemd.py'
Nov 24 09:41:47 compute-1 sudo[179407]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:41:47 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[178466]: 24/11/2025 09:41:47 : epoch 69242855 : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 09:41:47 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[178466]: 24/11/2025 09:41:47 : epoch 69242855 : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 09:41:47 compute-1 python3.9[179409]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 24 09:41:47 compute-1 sudo[179407]: pam_unix(sudo:session): session closed for user root
Nov 24 09:41:48 compute-1 sudo[179563]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-avulcegriespteossrocdugdcjprwzxe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977308.0880024-1188-222930398642240/AnsiballZ_systemd.py'
Nov 24 09:41:48 compute-1 sudo[179563]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:41:48 compute-1 python3.9[179565]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 24 09:41:48 compute-1 sudo[179563]: pam_unix(sudo:session): session closed for user root
Nov 24 09:41:48 compute-1 ceph-mon[80009]: pgmap v437: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 511 B/s wr, 1 op/s
Nov 24 09:41:48 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:41:48 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:41:48 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:41:48.912 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:41:49 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:41:49 compute-1 sudo[179718]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gicgqbyrxzwcrlzomhavkssfwcsdzjos ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977308.8670626-1188-25657415351854/AnsiballZ_systemd.py'
Nov 24 09:41:49 compute-1 sudo[179718]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:41:49 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:41:49 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:41:49 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:41:49.372 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:41:49 compute-1 python3.9[179720]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 24 09:41:49 compute-1 sudo[179718]: pam_unix(sudo:session): session closed for user root
Nov 24 09:41:49 compute-1 sudo[179873]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kktpzwdhyiwbakpjctrrpssdzenlfmsw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977309.6049604-1188-79733175124133/AnsiballZ_systemd.py'
Nov 24 09:41:49 compute-1 sudo[179873]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:41:50 compute-1 python3.9[179875]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 24 09:41:50 compute-1 sudo[179873]: pam_unix(sudo:session): session closed for user root
Nov 24 09:41:50 compute-1 sudo[180029]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tvpzkiqgbfsnmhuzxkzvihwzwifsjbzz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977310.3773558-1188-6365753839495/AnsiballZ_systemd.py'
Nov 24 09:41:50 compute-1 sudo[180029]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:41:50 compute-1 ceph-mon[80009]: pgmap v438: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 511 B/s wr, 1 op/s
Nov 24 09:41:50 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:41:50 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:41:50 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:41:50.914 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:41:50 compute-1 python3.9[180031]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 24 09:41:50 compute-1 sudo[180029]: pam_unix(sudo:session): session closed for user root
Nov 24 09:41:51 compute-1 sudo[180184]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-snqefyittbtyhqjwakrprutkgiddzzgq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977311.0936015-1188-162061210872823/AnsiballZ_systemd.py'
Nov 24 09:41:51 compute-1 sudo[180184]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:41:51 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:41:51 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:41:51 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:41:51.376 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:41:51 compute-1 python3.9[180186]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 24 09:41:51 compute-1 sudo[180184]: pam_unix(sudo:session): session closed for user root
Nov 24 09:41:52 compute-1 sudo[180340]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fzzovtqiiatapslgbqfnobzfviritndi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977311.8637972-1188-274846882355617/AnsiballZ_systemd.py'
Nov 24 09:41:52 compute-1 sudo[180340]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:41:52 compute-1 python3.9[180342]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 24 09:41:52 compute-1 sudo[180340]: pam_unix(sudo:session): session closed for user root
Nov 24 09:41:52 compute-1 ceph-mon[80009]: pgmap v439: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 1023 B/s wr, 3 op/s
Nov 24 09:41:52 compute-1 sudo[180495]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xczgjtzekxdgjhhscpzdkkodziapwtfk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977312.600093-1188-258572418656676/AnsiballZ_systemd.py'
Nov 24 09:41:52 compute-1 sudo[180495]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:41:52 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:41:52 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:41:52 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:41:52.915 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:41:53 compute-1 python3.9[180497]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 24 09:41:53 compute-1 sudo[180495]: pam_unix(sudo:session): session closed for user root
Nov 24 09:41:53 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:41:53 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:41:53 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:41:53.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:41:53 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[178466]: 24/11/2025 09:41:53 : epoch 69242855 : compute-1 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Nov 24 09:41:53 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[178466]: 24/11/2025 09:41:53 : epoch 69242855 : compute-1 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Nov 24 09:41:53 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[178466]: 24/11/2025 09:41:53 : epoch 69242855 : compute-1 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Nov 24 09:41:53 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[178466]: 24/11/2025 09:41:53 : epoch 69242855 : compute-1 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Nov 24 09:41:53 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[178466]: 24/11/2025 09:41:53 : epoch 69242855 : compute-1 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Nov 24 09:41:53 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[178466]: 24/11/2025 09:41:53 : epoch 69242855 : compute-1 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Nov 24 09:41:53 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[178466]: 24/11/2025 09:41:53 : epoch 69242855 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Nov 24 09:41:53 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[178466]: 24/11/2025 09:41:53 : epoch 69242855 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 24 09:41:53 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[178466]: 24/11/2025 09:41:53 : epoch 69242855 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 24 09:41:53 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[178466]: 24/11/2025 09:41:53 : epoch 69242855 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 24 09:41:53 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[178466]: 24/11/2025 09:41:53 : epoch 69242855 : compute-1 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Nov 24 09:41:53 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[178466]: 24/11/2025 09:41:53 : epoch 69242855 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 24 09:41:53 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[178466]: 24/11/2025 09:41:53 : epoch 69242855 : compute-1 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Nov 24 09:41:53 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[178466]: 24/11/2025 09:41:53 : epoch 69242855 : compute-1 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Nov 24 09:41:53 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[178466]: 24/11/2025 09:41:53 : epoch 69242855 : compute-1 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Nov 24 09:41:53 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[178466]: 24/11/2025 09:41:53 : epoch 69242855 : compute-1 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Nov 24 09:41:53 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[178466]: 24/11/2025 09:41:53 : epoch 69242855 : compute-1 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Nov 24 09:41:53 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[178466]: 24/11/2025 09:41:53 : epoch 69242855 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Nov 24 09:41:53 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[178466]: 24/11/2025 09:41:53 : epoch 69242855 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Nov 24 09:41:53 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[178466]: 24/11/2025 09:41:53 : epoch 69242855 : compute-1 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Nov 24 09:41:53 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[178466]: 24/11/2025 09:41:53 : epoch 69242855 : compute-1 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Nov 24 09:41:53 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[178466]: 24/11/2025 09:41:53 : epoch 69242855 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Nov 24 09:41:53 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[178466]: 24/11/2025 09:41:53 : epoch 69242855 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Nov 24 09:41:53 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[178466]: 24/11/2025 09:41:53 : epoch 69242855 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Nov 24 09:41:53 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[178466]: 24/11/2025 09:41:53 : epoch 69242855 : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Nov 24 09:41:53 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[178466]: 24/11/2025 09:41:53 : epoch 69242855 : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Nov 24 09:41:53 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[178466]: 24/11/2025 09:41:53 : epoch 69242855 : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Nov 24 09:41:53 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[178466]: 24/11/2025 09:41:53 : epoch 69242855 : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 09:41:53 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[178466]: 24/11/2025 09:41:53 : epoch 69242855 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0004000df0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:41:53 compute-1 sudo[180676]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hkxivikkvzexpidshgxkhdvaeugkwjxq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977313.4285924-1188-273134956122394/AnsiballZ_systemd.py'
Nov 24 09:41:53 compute-1 sudo[180676]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:41:53 compute-1 podman[180636]: 2025-11-24 09:41:53.814684659 +0000 UTC m=+0.098476515 container health_status 6794d9d2f16f865643977633ea2bfec0506af6c4f15262c278b71a4c0754ea0b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Nov 24 09:41:54 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:41:54 compute-1 python3.9[180683]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 24 09:41:54 compute-1 sudo[180676]: pam_unix(sudo:session): session closed for user root
Nov 24 09:41:54 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[178466]: 24/11/2025 09:41:54 : epoch 69242855 : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efff8001c00 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:41:54 compute-1 sudo[180841]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dlfydpkkfhegmcvgfvqvuvwxwyljwdpp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977314.301389-1188-93106582662488/AnsiballZ_systemd.py'
Nov 24 09:41:54 compute-1 sudo[180841]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:41:54 compute-1 python3.9[180843]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 24 09:41:54 compute-1 ceph-mon[80009]: pgmap v440: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 1023 B/s wr, 3 op/s
Nov 24 09:41:54 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:41:54 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:41:54 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:41:54.917 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:41:54 compute-1 sudo[180841]: pam_unix(sudo:session): session closed for user root
Nov 24 09:41:55 compute-1 sudo[180996]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-babeflgdsjpudrxjuobgykbyblrhznse ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977315.0751913-1188-12888684469069/AnsiballZ_systemd.py'
Nov 24 09:41:55 compute-1 sudo[180996]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:41:55 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:41:55 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:41:55 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:41:55.382 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:41:55 compute-1 python3.9[180998]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 24 09:41:55 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[178466]: 24/11/2025 09:41:55 : epoch 69242855 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effd8000b60 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:41:55 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-haproxy-nfs-cephfs-compute-1-rsdpvy[85975]: [WARNING] 327/094155 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 2 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Nov 24 09:41:55 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[178466]: 24/11/2025 09:41:55 : epoch 69242855 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0004000df0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:41:55 compute-1 sudo[180996]: pam_unix(sudo:session): session closed for user root
Nov 24 09:41:56 compute-1 sudo[181152]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lertppzwgshrfwbpqehewjuvukldtdii ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977315.82843-1188-200930119679423/AnsiballZ_systemd.py'
Nov 24 09:41:56 compute-1 sudo[181152]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:41:56 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[178466]: 24/11/2025 09:41:56 : epoch 69242855 : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0004000df0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:41:56 compute-1 python3.9[181154]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 24 09:41:56 compute-1 sudo[181152]: pam_unix(sudo:session): session closed for user root
Nov 24 09:41:56 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[178466]: 24/11/2025 09:41:56 : epoch 69242855 : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 09:41:56 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[178466]: 24/11/2025 09:41:56 : epoch 69242855 : compute-1 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 09:41:56 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:41:56 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:41:56 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:41:56.920 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:41:57 compute-1 ceph-mon[80009]: pgmap v441: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 1023 B/s wr, 3 op/s
Nov 24 09:41:57 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:41:57 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:41:57 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:41:57.385 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:41:57 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[178466]: 24/11/2025 09:41:57 : epoch 69242855 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0004000df0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:41:57 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[178466]: 24/11/2025 09:41:57 : epoch 69242855 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effd80016a0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:41:58 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[178466]: 24/11/2025 09:41:58 : epoch 69242855 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effe0001140 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:41:58 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:41:58 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:41:58 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:41:58.922 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:41:59 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:41:59 compute-1 ceph-mon[80009]: pgmap v442: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 3.1 KiB/s rd, 1.5 KiB/s wr, 4 op/s
Nov 24 09:41:59 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[178466]: 24/11/2025 09:41:59 : epoch 69242855 : compute-1 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Nov 24 09:41:59 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:41:59 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:41:59 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:41:59.388 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:41:59 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[178466]: 24/11/2025 09:41:59 : epoch 69242855 : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0004000df0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:41:59 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[178466]: 24/11/2025 09:41:59 : epoch 69242855 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0004000df0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:42:00 compute-1 sudo[181309]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-auuodvkvbukjdmobrdocawgnrsjnwtrk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977319.9044406-1494-129948633887957/AnsiballZ_file.py'
Nov 24 09:42:00 compute-1 sudo[181309]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:42:00 compute-1 python3.9[181311]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/etc/tmpfiles.d/ setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 24 09:42:00 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[178466]: 24/11/2025 09:42:00 : epoch 69242855 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effd80016a0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:42:00 compute-1 sudo[181309]: pam_unix(sudo:session): session closed for user root
Nov 24 09:42:00 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 09:42:00 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:42:00 compute-1 sudo[181461]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pburfgwkoqtcuiwrxdhxzmjjvbjgxqig ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977320.4641838-1494-197413894896745/AnsiballZ_file.py'
Nov 24 09:42:00 compute-1 sudo[181461]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:42:00 compute-1 python3.9[181463]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 24 09:42:00 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:42:00 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:42:00 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:42:00.924 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:42:00 compute-1 sudo[181461]: pam_unix(sudo:session): session closed for user root
Nov 24 09:42:01 compute-1 ceph-mon[80009]: pgmap v443: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 1023 B/s wr, 3 op/s
Nov 24 09:42:01 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:42:01 compute-1 sudo[181613]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nnfsrjrkxqpnuqgstmvqsrdicxfnjleo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977321.0674932-1494-137713940968962/AnsiballZ_file.py'
Nov 24 09:42:01 compute-1 sudo[181613]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:42:01 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:42:01 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:42:01 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:42:01.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:42:01 compute-1 python3.9[181615]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 09:42:01 compute-1 sudo[181613]: pam_unix(sudo:session): session closed for user root
Nov 24 09:42:01 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[178466]: 24/11/2025 09:42:01 : epoch 69242855 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effe0001c60 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:42:01 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[178466]: 24/11/2025 09:42:01 : epoch 69242855 : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0004000df0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:42:01 compute-1 sudo[181766]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ivhgblesbpetidqyzowgnwbjtnthpqzy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977321.6676497-1494-66037607895587/AnsiballZ_file.py'
Nov 24 09:42:01 compute-1 sudo[181766]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:42:02 compute-1 python3.9[181768]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt/private setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 09:42:02 compute-1 sudo[181766]: pam_unix(sudo:session): session closed for user root
Nov 24 09:42:02 compute-1 sudo[181811]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 09:42:02 compute-1 sudo[181811]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:42:02 compute-1 sudo[181811]: pam_unix(sudo:session): session closed for user root
Nov 24 09:42:02 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[178466]: 24/11/2025 09:42:02 : epoch 69242855 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efff8002520 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:42:02 compute-1 sudo[181943]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yurkxwkxpmwzexzvlipninljucjwymsq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977322.242099-1494-36331653154752/AnsiballZ_file.py'
Nov 24 09:42:02 compute-1 sudo[181943]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:42:02 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-haproxy-nfs-cephfs-compute-1-rsdpvy[85975]: [WARNING] 327/094202 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Nov 24 09:42:02 compute-1 python3.9[181945]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/CA setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 09:42:02 compute-1 sudo[181943]: pam_unix(sudo:session): session closed for user root
Nov 24 09:42:02 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:42:02 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:42:02 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:42:02.926 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:42:03 compute-1 ceph-mon[80009]: pgmap v444: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 3.0 KiB/s rd, 1.2 KiB/s wr, 4 op/s
Nov 24 09:42:03 compute-1 sudo[182095]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pafjrwmunggpvlnwvtfqvvfhuvldglwz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977322.840925-1494-11518881918124/AnsiballZ_file.py'
Nov 24 09:42:03 compute-1 sudo[182095]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:42:03 compute-1 python3.9[182097]: ansible-ansible.builtin.file Invoked with group=qemu owner=root path=/etc/pki/qemu setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 24 09:42:03 compute-1 sudo[182095]: pam_unix(sudo:session): session closed for user root
Nov 24 09:42:03 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:42:03 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:42:03 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:42:03.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:42:03 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[178466]: 24/11/2025 09:42:03 : epoch 69242855 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effd80016a0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:42:03 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[178466]: 24/11/2025 09:42:03 : epoch 69242855 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effe0001c60 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:42:04 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:42:04 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[178466]: 24/11/2025 09:42:04 : epoch 69242855 : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0004009990 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:42:04 compute-1 sudo[182248]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mzwvonaqyqglnlhhlthfjpnwqkeaxreb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977324.2128937-1623-48127278510338/AnsiballZ_stat.py'
Nov 24 09:42:04 compute-1 sudo[182248]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:42:04 compute-1 python3.9[182250]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtlogd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 09:42:04 compute-1 sudo[182248]: pam_unix(sudo:session): session closed for user root
Nov 24 09:42:04 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:42:04 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:42:04 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:42:04.928 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:42:05 compute-1 ceph-mon[80009]: pgmap v445: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 767 B/s wr, 2 op/s
Nov 24 09:42:05 compute-1 sudo[182373]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wnznxozrbkhojfkbvpxlsllbdlxskbjr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977324.2128937-1623-48127278510338/AnsiballZ_copy.py'
Nov 24 09:42:05 compute-1 sudo[182373]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:42:05 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:42:05 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:42:05 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:42:05.397 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:42:05 compute-1 python3.9[182375]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtlogd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1763977324.2128937-1623-48127278510338/.source.conf follow=False _original_basename=virtlogd.conf checksum=d7a72ae92c2c205983b029473e05a6aa4c58ec24 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:42:05 compute-1 sudo[182373]: pam_unix(sudo:session): session closed for user root
Nov 24 09:42:05 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[178466]: 24/11/2025 09:42:05 : epoch 69242855 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efff8002520 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:42:05 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[178466]: 24/11/2025 09:42:05 : epoch 69242855 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effd8002b10 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:42:05 compute-1 sudo[182525]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rtriesvyxnecjfxejbciuqnmkuqkfhuy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977325.6198177-1623-81990931702662/AnsiballZ_stat.py'
Nov 24 09:42:05 compute-1 sudo[182525]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:42:06 compute-1 python3.9[182528]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtnodedevd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 09:42:06 compute-1 sudo[182525]: pam_unix(sudo:session): session closed for user root
Nov 24 09:42:06 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[178466]: 24/11/2025 09:42:06 : epoch 69242855 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effd8002b10 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:42:06 compute-1 sudo[182651]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fvptatrlicfbtjeropulicdaeqbviwpx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977325.6198177-1623-81990931702662/AnsiballZ_copy.py'
Nov 24 09:42:06 compute-1 sudo[182651]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:42:06 compute-1 python3.9[182653]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtnodedevd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1763977325.6198177-1623-81990931702662/.source.conf follow=False _original_basename=virtnodedevd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:42:06 compute-1 sudo[182651]: pam_unix(sudo:session): session closed for user root
Nov 24 09:42:06 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:42:06 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:42:06 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:42:06.931 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:42:07 compute-1 sudo[182803]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-stpzmcgsxcqfkxyowqlabjtslbwrjovt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977326.7134392-1623-157128733567949/AnsiballZ_stat.py'
Nov 24 09:42:07 compute-1 sudo[182803]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:42:07 compute-1 ceph-mon[80009]: pgmap v446: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 767 B/s wr, 2 op/s
Nov 24 09:42:07 compute-1 python3.9[182805]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtproxyd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 09:42:07 compute-1 sudo[182803]: pam_unix(sudo:session): session closed for user root
Nov 24 09:42:07 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:42:07 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:42:07 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:42:07.401 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:42:07 compute-1 sudo[182928]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-runkjlnsvukevekxsrepchflxuyhdvlu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977326.7134392-1623-157128733567949/AnsiballZ_copy.py'
Nov 24 09:42:07 compute-1 sudo[182928]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:42:07 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[178466]: 24/11/2025 09:42:07 : epoch 69242855 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effd4000b60 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:42:07 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[178466]: 24/11/2025 09:42:07 : epoch 69242855 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effe0001c60 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:42:07 compute-1 python3.9[182930]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtproxyd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1763977326.7134392-1623-157128733567949/.source.conf follow=False _original_basename=virtproxyd.conf checksum=28bc484b7c9988e03de49d4fcc0a088ea975f716 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:42:07 compute-1 sudo[182928]: pam_unix(sudo:session): session closed for user root
Nov 24 09:42:08 compute-1 sudo[183081]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-duaabelslvbajrbvfkuqnphskwvlttio ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977327.9232705-1623-82231872348132/AnsiballZ_stat.py'
Nov 24 09:42:08 compute-1 sudo[183081]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:42:08 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[178466]: 24/11/2025 09:42:08 : epoch 69242855 : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0004009b10 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:42:08 compute-1 python3.9[183083]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtqemud.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 09:42:08 compute-1 sudo[183081]: pam_unix(sudo:session): session closed for user root
Nov 24 09:42:08 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:42:08 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:42:08 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:42:08.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:42:08 compute-1 sudo[183206]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-izauwmvdkxcacdlxbyybukljfgfoerns ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977327.9232705-1623-82231872348132/AnsiballZ_copy.py'
Nov 24 09:42:08 compute-1 sudo[183206]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:42:09 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:42:09 compute-1 ceph-mon[80009]: pgmap v447: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 767 B/s wr, 2 op/s
Nov 24 09:42:09 compute-1 python3.9[183208]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtqemud.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1763977327.9232705-1623-82231872348132/.source.conf follow=False _original_basename=virtqemud.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:42:09 compute-1 sudo[183206]: pam_unix(sudo:session): session closed for user root
Nov 24 09:42:09 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:42:09 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:42:09 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:42:09.404 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:42:09 compute-1 sudo[183358]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eotjjdkczwpunpwwbutriobcpkrcxvxy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977329.353372-1623-206547304598474/AnsiballZ_stat.py'
Nov 24 09:42:09 compute-1 sudo[183358]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:42:09 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[178466]: 24/11/2025 09:42:09 : epoch 69242855 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effd8002b10 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:42:09 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[178466]: 24/11/2025 09:42:09 : epoch 69242855 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effd40016a0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:42:09 compute-1 python3.9[183360]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/qemu.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 09:42:09 compute-1 sudo[183358]: pam_unix(sudo:session): session closed for user root
Nov 24 09:42:10 compute-1 sudo[183484]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zygyufkqqknbrsuivygbgyclhidanatx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977329.353372-1623-206547304598474/AnsiballZ_copy.py'
Nov 24 09:42:10 compute-1 sudo[183484]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:42:10 compute-1 python3.9[183486]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/qemu.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1763977329.353372-1623-206547304598474/.source.conf follow=False _original_basename=qemu.conf.j2 checksum=c44de21af13c90603565570f09ff60c6a41ed8df backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:42:10 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[178466]: 24/11/2025 09:42:10 : epoch 69242855 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effe00030f0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:42:10 compute-1 sudo[183484]: pam_unix(sudo:session): session closed for user root
Nov 24 09:42:10 compute-1 sudo[183636]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ukhifwiypnoqirnczwpiearkdfafnhmn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977330.4801853-1623-236033184449756/AnsiballZ_stat.py'
Nov 24 09:42:10 compute-1 sudo[183636]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:42:10 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:42:10 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:42:10 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:42:10.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:42:11 compute-1 python3.9[183638]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtsecretd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 09:42:11 compute-1 sudo[183636]: pam_unix(sudo:session): session closed for user root
Nov 24 09:42:11 compute-1 ceph-mon[80009]: pgmap v448: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 255 B/s wr, 1 op/s
Nov 24 09:42:11 compute-1 sudo[183761]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sivulvtoqsfuekwqzkuxaukdgoficxyy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977330.4801853-1623-236033184449756/AnsiballZ_copy.py'
Nov 24 09:42:11 compute-1 sudo[183761]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:42:11 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:42:11 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:42:11 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:42:11.407 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:42:11 compute-1 python3.9[183763]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtsecretd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1763977330.4801853-1623-236033184449756/.source.conf follow=False _original_basename=virtsecretd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:42:11 compute-1 sudo[183761]: pam_unix(sudo:session): session closed for user root
Nov 24 09:42:11 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[178466]: 24/11/2025 09:42:11 : epoch 69242855 : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0004009c90 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:42:11 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[178466]: 24/11/2025 09:42:11 : epoch 69242855 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effd8003c10 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:42:11 compute-1 sudo[183914]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eoyosjzmhixxvccqjhzpqxebmcqaxogw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977331.6875618-1623-179535270591674/AnsiballZ_stat.py'
Nov 24 09:42:11 compute-1 sudo[183914]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:42:12 compute-1 python3.9[183916]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/auth.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 09:42:12 compute-1 sudo[183914]: pam_unix(sudo:session): session closed for user root
Nov 24 09:42:12 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[178466]: 24/11/2025 09:42:12 : epoch 69242855 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effd40016a0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:42:12 compute-1 sudo[184037]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rdskbjarzttukyvfwybghqybgghiqumq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977331.6875618-1623-179535270591674/AnsiballZ_copy.py'
Nov 24 09:42:12 compute-1 sudo[184037]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:42:12 compute-1 python3.9[184039]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/auth.conf group=libvirt mode=0600 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1763977331.6875618-1623-179535270591674/.source.conf follow=False _original_basename=auth.conf checksum=a94cd818c374cec2c8425b70d2e0e2f41b743ae4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:42:12 compute-1 sudo[184037]: pam_unix(sudo:session): session closed for user root
Nov 24 09:42:12 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:42:12 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 09:42:12 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:42:12.935 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 09:42:13 compute-1 sudo[184189]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gjjzgubbfwvsyqicegqsaodncxgwppaj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977332.8295412-1623-269588633086473/AnsiballZ_stat.py'
Nov 24 09:42:13 compute-1 sudo[184189]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:42:13 compute-1 ceph-mon[80009]: pgmap v449: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s
Nov 24 09:42:13 compute-1 python3.9[184191]: ansible-ansible.legacy.stat Invoked with path=/etc/sasl2/libvirt.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 09:42:13 compute-1 sudo[184189]: pam_unix(sudo:session): session closed for user root
Nov 24 09:42:13 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:42:13 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:42:13 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:42:13.410 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:42:13 compute-1 sudo[184314]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fpifzkwjxefnutaloyrkgzywkdhhfnre ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977332.8295412-1623-269588633086473/AnsiballZ_copy.py'
Nov 24 09:42:13 compute-1 sudo[184314]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:42:13 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[178466]: 24/11/2025 09:42:13 : epoch 69242855 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effe00030f0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:42:13 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[178466]: 24/11/2025 09:42:13 : epoch 69242855 : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0004009c90 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:42:13 compute-1 python3.9[184316]: ansible-ansible.legacy.copy Invoked with dest=/etc/sasl2/libvirt.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1763977332.8295412-1623-269588633086473/.source.conf follow=False _original_basename=sasl_libvirt.conf checksum=652e4d404bf79253d06956b8e9847c9364979d4a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:42:13 compute-1 sudo[184314]: pam_unix(sudo:session): session closed for user root
Nov 24 09:42:14 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:42:14 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[178466]: 24/11/2025 09:42:14 : epoch 69242855 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effd8003c10 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:42:14 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:42:14 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:42:14 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:42:14.938 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:42:15 compute-1 ceph-mon[80009]: pgmap v450: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:42:15 compute-1 podman[184342]: 2025-11-24 09:42:15.340247559 +0000 UTC m=+0.074481597 container health_status c4a7fece2a8abbd773f184380ebf0443df5298e1c3e7a580495db5c46749acd2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_controller, org.label-schema.build-date=20251118, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 09:42:15 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 09:42:15 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:42:15 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:42:15 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:42:15 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:42:15.413 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:42:15 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[178466]: 24/11/2025 09:42:15 : epoch 69242855 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effd8003c10 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:42:15 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[178466]: 24/11/2025 09:42:15 : epoch 69242855 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effe00030f0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:42:16 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:42:16 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[178466]: 24/11/2025 09:42:16 : epoch 69242855 : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0004009c90 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:42:16 compute-1 sudo[184494]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tpbksawfglukcpvwdlqhfooxxbvzlmsz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977336.1946115-1962-71164620160301/AnsiballZ_command.py'
Nov 24 09:42:16 compute-1 sudo[184494]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:42:16 compute-1 python3.9[184496]: ansible-ansible.legacy.command Invoked with cmd=saslpasswd2 -f /etc/libvirt/passwd.db -p -a libvirt -u openstack migration stdin=12345678 _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None
Nov 24 09:42:16 compute-1 sudo[184494]: pam_unix(sudo:session): session closed for user root
Nov 24 09:42:16 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:42:16 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:42:16 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:42:16.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:42:17 compute-1 ceph-mon[80009]: pgmap v451: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:42:17 compute-1 sudo[184647]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mctzqfierytwacnviltaphbbyjevjcrh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977337.0612059-1989-40408229201179/AnsiballZ_file.py'
Nov 24 09:42:17 compute-1 sudo[184647]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:42:17 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:42:17 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:42:17 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:42:17.416 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:42:17 compute-1 python3.9[184649]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:42:17 compute-1 sudo[184647]: pam_unix(sudo:session): session closed for user root
Nov 24 09:42:17 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[178466]: 24/11/2025 09:42:17 : epoch 69242855 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0004009c90 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:42:17 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[178466]: 24/11/2025 09:42:17 : epoch 69242855 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effd4002720 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:42:17 compute-1 sudo[184799]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nlcpuojlxmzrqpvynzmcrrjtlrpahoxu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977337.634668-1989-158419933636885/AnsiballZ_file.py'
Nov 24 09:42:17 compute-1 sudo[184799]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:42:18 compute-1 python3.9[184802]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:42:18 compute-1 sudo[184799]: pam_unix(sudo:session): session closed for user root
Nov 24 09:42:18 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[178466]: 24/11/2025 09:42:18 : epoch 69242855 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effe00041f0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:42:18 compute-1 ceph-mon[80009]: pgmap v452: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:42:18 compute-1 sudo[184952]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yhftrkfinnneuhpoltirdhmikvgdjxlg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977338.2330701-1989-128534464432656/AnsiballZ_file.py'
Nov 24 09:42:18 compute-1 sudo[184952]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:42:18 compute-1 python3.9[184954]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:42:18 compute-1 sudo[184952]: pam_unix(sudo:session): session closed for user root
Nov 24 09:42:18 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:42:18 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:42:18 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:42:18.941 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:42:19 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:42:19 compute-1 sudo[185104]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fiqdyarzyfnntgfjddhgevrfnvcityim ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977338.8564725-1989-212213549950977/AnsiballZ_file.py'
Nov 24 09:42:19 compute-1 sudo[185104]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:42:19 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:42:19 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 09:42:19 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:42:19.420 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 09:42:19 compute-1 python3.9[185106]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:42:19 compute-1 sudo[185104]: pam_unix(sudo:session): session closed for user root
Nov 24 09:42:19 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[178466]: 24/11/2025 09:42:19 : epoch 69242855 : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effd8003c10 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:42:19 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[178466]: 24/11/2025 09:42:19 : epoch 69242855 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0004009c90 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:42:19 compute-1 sudo[185257]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rqeybluklyqfevzfljvsnocxbjerwazt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977339.6263204-1989-55309679189805/AnsiballZ_file.py'
Nov 24 09:42:19 compute-1 sudo[185257]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:42:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:42:20.041 142336 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 09:42:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:42:20.041 142336 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 09:42:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:42:20.041 142336 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 09:42:20 compute-1 python3.9[185259]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:42:20 compute-1 sudo[185257]: pam_unix(sudo:session): session closed for user root
Nov 24 09:42:20 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[178466]: 24/11/2025 09:42:20 : epoch 69242855 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effd4002720 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:42:20 compute-1 sudo[185409]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vmxczyqrmujgkpljmvlxabspzanomklt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977340.257922-1989-128114981091901/AnsiballZ_file.py'
Nov 24 09:42:20 compute-1 sudo[185409]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:42:20 compute-1 ceph-mon[80009]: pgmap v453: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:42:20 compute-1 python3.9[185411]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:42:20 compute-1 sudo[185409]: pam_unix(sudo:session): session closed for user root
Nov 24 09:42:20 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:42:20 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:42:20 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:42:20.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:42:21 compute-1 sudo[185561]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vapomzljhayebhsnvycafunwbudbbyjd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977340.8434677-1989-188120558841353/AnsiballZ_file.py'
Nov 24 09:42:21 compute-1 sudo[185561]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:42:21 compute-1 python3.9[185563]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:42:21 compute-1 sudo[185561]: pam_unix(sudo:session): session closed for user root
Nov 24 09:42:21 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:42:21 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:42:21 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:42:21.423 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:42:21 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[178466]: 24/11/2025 09:42:21 : epoch 69242855 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effe00041f0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:42:21 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[178466]: 24/11/2025 09:42:21 : epoch 69242855 : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effd8003c10 fd 37 proxy ignored for local
Nov 24 09:42:21 compute-1 kernel: ganesha.nfsd[180639]: segfault at 50 ip 00007f00ad40532e sp 00007f00657f9210 error 4 in libntirpc.so.5.8[7f00ad3ea000+2c000] likely on CPU 7 (core 0, socket 7)
Nov 24 09:42:21 compute-1 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Nov 24 09:42:21 compute-1 sudo[185713]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dycvnukucsqrcapgpynoqyntgdsxjtdk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977341.4429889-1989-120190048940119/AnsiballZ_file.py'
Nov 24 09:42:21 compute-1 sudo[185713]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:42:21 compute-1 systemd[1]: Started Process Core Dump (PID 185715/UID 0).
Nov 24 09:42:21 compute-1 python3.9[185716]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:42:21 compute-1 sudo[185713]: pam_unix(sudo:session): session closed for user root
Nov 24 09:42:22 compute-1 sudo[185882]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-myfwyfyujcfofucftulrtomorizovyrl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977342.0874188-1989-183684913897343/AnsiballZ_file.py'
Nov 24 09:42:22 compute-1 sudo[185882]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:42:22 compute-1 sudo[185857]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 09:42:22 compute-1 sudo[185857]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:42:22 compute-1 sudo[185857]: pam_unix(sudo:session): session closed for user root
Nov 24 09:42:22 compute-1 python3.9[185893]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:42:22 compute-1 sudo[185882]: pam_unix(sudo:session): session closed for user root
Nov 24 09:42:22 compute-1 ceph-mon[80009]: pgmap v454: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 24 09:42:22 compute-1 systemd-coredump[185717]: Process 178470 (ganesha.nfsd) of user 0 dumped core.
                                                    
                                                    Stack trace of thread 54:
                                                    #0  0x00007f00ad40532e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)
                                                    ELF object binary architecture: AMD x86-64
Nov 24 09:42:22 compute-1 systemd[1]: systemd-coredump@6-185715-0.service: Deactivated successfully.
Nov 24 09:42:22 compute-1 systemd[1]: systemd-coredump@6-185715-0.service: Consumed 1.032s CPU time.
Nov 24 09:42:22 compute-1 podman[186029]: 2025-11-24 09:42:22.888196321 +0000 UTC m=+0.022905223 container died e3c4baedaffcfce23f2147ef6f93604f265d93ee34d2b6a77a0ede860308372e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 09:42:22 compute-1 sudo[186059]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vutkabiicicixtsenlcvpxvyyjvuhiuw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977342.6817431-1989-238632455525582/AnsiballZ_file.py'
Nov 24 09:42:22 compute-1 sudo[186059]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:42:22 compute-1 systemd[1]: var-lib-containers-storage-overlay-d26831df6c5a0f65cfaba17b14bc54bef5a769ed56fa20fd9c93db15c5a4a386-merged.mount: Deactivated successfully.
Nov 24 09:42:22 compute-1 podman[186029]: 2025-11-24 09:42:22.920008881 +0000 UTC m=+0.054717763 container remove e3c4baedaffcfce23f2147ef6f93604f265d93ee34d2b6a77a0ede860308372e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr, CEPH_REF=squid, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 09:42:22 compute-1 systemd[1]: ceph-84a084c3-61a7-5de7-8207-1f88efa59a64@nfs.cephfs.0.0.compute-1.vvoanr.service: Main process exited, code=exited, status=139/n/a
Nov 24 09:42:22 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:42:22 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:42:22 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:42:22.945 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:42:23 compute-1 systemd[1]: ceph-84a084c3-61a7-5de7-8207-1f88efa59a64@nfs.cephfs.0.0.compute-1.vvoanr.service: Failed with result 'exit-code'.
Nov 24 09:42:23 compute-1 systemd[1]: ceph-84a084c3-61a7-5de7-8207-1f88efa59a64@nfs.cephfs.0.0.compute-1.vvoanr.service: Consumed 1.180s CPU time.
Nov 24 09:42:23 compute-1 python3.9[186067]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:42:23 compute-1 sudo[186059]: pam_unix(sudo:session): session closed for user root
Nov 24 09:42:23 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:42:23 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:42:23 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:42:23.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:42:23 compute-1 sudo[186244]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aitpalyordqgufpvmtedomfionpywglo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977343.2617476-1989-117375227560049/AnsiballZ_file.py'
Nov 24 09:42:23 compute-1 sudo[186244]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:42:23 compute-1 python3.9[186246]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:42:23 compute-1 sudo[186244]: pam_unix(sudo:session): session closed for user root
Nov 24 09:42:24 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:42:24 compute-1 sudo[186410]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lmrjseziraoswyuomljwaodxxenwlwep ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977343.8415637-1989-21535818279172/AnsiballZ_file.py'
Nov 24 09:42:24 compute-1 sudo[186410]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:42:24 compute-1 podman[186371]: 2025-11-24 09:42:24.156376901 +0000 UTC m=+0.072952299 container health_status 6794d9d2f16f865643977633ea2bfec0506af6c4f15262c278b71a4c0754ea0b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118)
Nov 24 09:42:24 compute-1 python3.9[186416]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:42:24 compute-1 sudo[186410]: pam_unix(sudo:session): session closed for user root
Nov 24 09:42:24 compute-1 ceph-mon[80009]: pgmap v455: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:42:24 compute-1 sudo[186568]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ucrychmuombzkkasocienifvzkadgzkj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977344.4895146-1989-28967769307881/AnsiballZ_file.py'
Nov 24 09:42:24 compute-1 sudo[186568]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:42:24 compute-1 python3.9[186570]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:42:24 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:42:24 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:42:24 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:42:24.945 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:42:24 compute-1 sudo[186568]: pam_unix(sudo:session): session closed for user root
Nov 24 09:42:25 compute-1 sudo[186720]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-igolgvtzmjezvjvxbklowmhmhitxndia ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977345.1097689-1989-224782825468935/AnsiballZ_file.py'
Nov 24 09:42:25 compute-1 sudo[186720]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:42:25 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:42:25 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:42:25 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:42:25.451 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:42:25 compute-1 python3.9[186722]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:42:25 compute-1 sudo[186720]: pam_unix(sudo:session): session closed for user root
Nov 24 09:42:26 compute-1 ceph-mon[80009]: pgmap v456: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:42:26 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:42:26 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:42:26 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:42:26.947 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:42:27 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-haproxy-nfs-cephfs-compute-1-rsdpvy[85975]: [WARNING] 327/094227 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Nov 24 09:42:27 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:42:27 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:42:27 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:42:27.455 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:42:27 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-haproxy-nfs-cephfs-compute-1-rsdpvy[85975]: [WARNING] 327/094227 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Nov 24 09:42:27 compute-1 sudo[186873]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zoegdtcyxotwslnwixgltgkwabkjmfxo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977347.458738-2286-145390563149549/AnsiballZ_stat.py'
Nov 24 09:42:27 compute-1 sudo[186873]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:42:28 compute-1 python3.9[186875]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 09:42:28 compute-1 sudo[186873]: pam_unix(sudo:session): session closed for user root
Nov 24 09:42:28 compute-1 sudo[186997]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jjwngzkzndurkbnnnmmxgwtahftjuwgj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977347.458738-2286-145390563149549/AnsiballZ_copy.py'
Nov 24 09:42:28 compute-1 sudo[186997]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:42:28 compute-1 python3.9[186999]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763977347.458738-2286-145390563149549/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:42:28 compute-1 sudo[186997]: pam_unix(sudo:session): session closed for user root
Nov 24 09:42:28 compute-1 ceph-mon[80009]: pgmap v457: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:42:28 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:42:28 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:42:28 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:42:28.950 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:42:28 compute-1 sudo[187149]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cbhbsuzdafjatsnqjbzfeownucjlonap ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977348.724612-2286-35281415839000/AnsiballZ_stat.py'
Nov 24 09:42:28 compute-1 sudo[187149]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:42:29 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:42:29 compute-1 python3.9[187151]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 09:42:29 compute-1 sudo[187149]: pam_unix(sudo:session): session closed for user root
Nov 24 09:42:29 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:42:29 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:42:29 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:42:29.460 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:42:29 compute-1 sudo[187272]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vgbhizmpbdxqesjkhyhnzuprukjmqqsk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977348.724612-2286-35281415839000/AnsiballZ_copy.py'
Nov 24 09:42:29 compute-1 sudo[187272]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:42:29 compute-1 python3.9[187274]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763977348.724612-2286-35281415839000/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:42:29 compute-1 sudo[187272]: pam_unix(sudo:session): session closed for user root
Nov 24 09:42:30 compute-1 sudo[187425]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yxdpmzjvfhluttonnjjhjlowpdjwwdhi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977350.0146854-2286-133346692089887/AnsiballZ_stat.py'
Nov 24 09:42:30 compute-1 sudo[187425]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:42:30 compute-1 python3.9[187427]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 09:42:30 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 09:42:30 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:42:30 compute-1 sudo[187425]: pam_unix(sudo:session): session closed for user root
Nov 24 09:42:30 compute-1 sudo[187548]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eamrjuunxyblsubrfbahlrcsbngtuqbw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977350.0146854-2286-133346692089887/AnsiballZ_copy.py'
Nov 24 09:42:30 compute-1 sudo[187548]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:42:30 compute-1 python3.9[187550]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763977350.0146854-2286-133346692089887/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:42:30 compute-1 sudo[187548]: pam_unix(sudo:session): session closed for user root
Nov 24 09:42:30 compute-1 ceph-mon[80009]: pgmap v458: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:42:30 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:42:30 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:42:30 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:42:30 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:42:30.951 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:42:31 compute-1 sudo[187700]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qzgcpbscpabydskikklaffifycrhmwkf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977351.0238206-2286-263474250825543/AnsiballZ_stat.py'
Nov 24 09:42:31 compute-1 sudo[187700]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:42:31 compute-1 python3.9[187702]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 09:42:31 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:42:31 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:42:31 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:42:31.463 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:42:31 compute-1 sudo[187700]: pam_unix(sudo:session): session closed for user root
Nov 24 09:42:31 compute-1 sudo[187823]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ggdlpbpmajbacizxhqcptbazjprbqiva ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977351.0238206-2286-263474250825543/AnsiballZ_copy.py'
Nov 24 09:42:31 compute-1 sudo[187823]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:42:31 compute-1 python3.9[187825]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763977351.0238206-2286-263474250825543/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:42:31 compute-1 sudo[187823]: pam_unix(sudo:session): session closed for user root
Nov 24 09:42:32 compute-1 sudo[187976]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-khkkwecwaxnuwdsonqanmajjmjryyyhz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977352.1547716-2286-64856941647534/AnsiballZ_stat.py'
Nov 24 09:42:32 compute-1 sudo[187976]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:42:32 compute-1 python3.9[187978]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 09:42:32 compute-1 sudo[187976]: pam_unix(sudo:session): session closed for user root
Nov 24 09:42:32 compute-1 sudo[188099]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-beetgyosxvbahhxtibuawitbhtvajxcg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977352.1547716-2286-64856941647534/AnsiballZ_copy.py'
Nov 24 09:42:32 compute-1 sudo[188099]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:42:32 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:42:32 compute-1 ceph-mon[80009]: pgmap v459: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Nov 24 09:42:32 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:42:32 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:42:32.953 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:42:33 compute-1 python3.9[188101]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763977352.1547716-2286-64856941647534/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:42:33 compute-1 sudo[188099]: pam_unix(sudo:session): session closed for user root
Nov 24 09:42:33 compute-1 systemd[1]: ceph-84a084c3-61a7-5de7-8207-1f88efa59a64@nfs.cephfs.0.0.compute-1.vvoanr.service: Scheduled restart job, restart counter is at 7.
Nov 24 09:42:33 compute-1 systemd[1]: Stopped Ceph nfs.cephfs.0.0.compute-1.vvoanr for 84a084c3-61a7-5de7-8207-1f88efa59a64.
Nov 24 09:42:33 compute-1 systemd[1]: ceph-84a084c3-61a7-5de7-8207-1f88efa59a64@nfs.cephfs.0.0.compute-1.vvoanr.service: Consumed 1.180s CPU time.
Nov 24 09:42:33 compute-1 systemd[1]: Starting Ceph nfs.cephfs.0.0.compute-1.vvoanr for 84a084c3-61a7-5de7-8207-1f88efa59a64...
Nov 24 09:42:33 compute-1 podman[188272]: 2025-11-24 09:42:33.462694649 +0000 UTC m=+0.041140369 container create 5b49fd5439277bce674ba48f12338b0bfe0b639e80f257cc26609ecccc449494 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, ceph=True, CEPH_REF=squid, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 09:42:33 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:42:33 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:42:33 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:42:33.467 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:42:33 compute-1 sudo[188311]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ksmdmfaiiyikeeddosrioyarowktgmxx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977353.2072027-2286-152026052386695/AnsiballZ_stat.py'
Nov 24 09:42:33 compute-1 sudo[188311]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:42:33 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/09197f8c744ecf42ac0ecb6298ca36537bb06006ed0f17d3b474315923d9a2aa/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Nov 24 09:42:33 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/09197f8c744ecf42ac0ecb6298ca36537bb06006ed0f17d3b474315923d9a2aa/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 09:42:33 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/09197f8c744ecf42ac0ecb6298ca36537bb06006ed0f17d3b474315923d9a2aa/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 09:42:33 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/09197f8c744ecf42ac0ecb6298ca36537bb06006ed0f17d3b474315923d9a2aa/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.0.0.compute-1.vvoanr-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 09:42:33 compute-1 podman[188272]: 2025-11-24 09:42:33.519964693 +0000 UTC m=+0.098410423 container init 5b49fd5439277bce674ba48f12338b0bfe0b639e80f257cc26609ecccc449494 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 09:42:33 compute-1 podman[188272]: 2025-11-24 09:42:33.527054807 +0000 UTC m=+0.105500517 container start 5b49fd5439277bce674ba48f12338b0bfe0b639e80f257cc26609ecccc449494 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 09:42:33 compute-1 bash[188272]: 5b49fd5439277bce674ba48f12338b0bfe0b639e80f257cc26609ecccc449494
Nov 24 09:42:33 compute-1 podman[188272]: 2025-11-24 09:42:33.445705942 +0000 UTC m=+0.024151672 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 09:42:33 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:42:33 : epoch 69242889 : compute-1 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Nov 24 09:42:33 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:42:33 : epoch 69242889 : compute-1 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Nov 24 09:42:33 compute-1 systemd[1]: Started Ceph nfs.cephfs.0.0.compute-1.vvoanr for 84a084c3-61a7-5de7-8207-1f88efa59a64.
Nov 24 09:42:33 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:42:33 : epoch 69242889 : compute-1 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Nov 24 09:42:33 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:42:33 : epoch 69242889 : compute-1 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Nov 24 09:42:33 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:42:33 : epoch 69242889 : compute-1 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Nov 24 09:42:33 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:42:33 : epoch 69242889 : compute-1 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Nov 24 09:42:33 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:42:33 : epoch 69242889 : compute-1 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Nov 24 09:42:33 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:42:33 : epoch 69242889 : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 09:42:33 compute-1 python3.9[188313]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 09:42:33 compute-1 sudo[188311]: pam_unix(sudo:session): session closed for user root
Nov 24 09:42:34 compute-1 sudo[188479]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wuhqydvxirkzfwvcvpbxkffoympivifx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977353.2072027-2286-152026052386695/AnsiballZ_copy.py'
Nov 24 09:42:34 compute-1 sudo[188479]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:42:34 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:42:34 compute-1 python3.9[188481]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763977353.2072027-2286-152026052386695/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:42:34 compute-1 sudo[188479]: pam_unix(sudo:session): session closed for user root
Nov 24 09:42:34 compute-1 sudo[188631]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hgzyfywntpahascpdbqzkevyimrjpgyx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977354.3437433-2286-53731916561259/AnsiballZ_stat.py'
Nov 24 09:42:34 compute-1 sudo[188631]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:42:34 compute-1 python3.9[188633]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 09:42:34 compute-1 sudo[188631]: pam_unix(sudo:session): session closed for user root
Nov 24 09:42:34 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:42:34 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 09:42:34 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:42:34.955 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 09:42:35 compute-1 ceph-mon[80009]: pgmap v460: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 0 op/s
Nov 24 09:42:35 compute-1 ceph-mon[80009]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #28. Immutable memtables: 0.
Nov 24 09:42:35 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:42:35.128868) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 24 09:42:35 compute-1 ceph-mon[80009]: rocksdb: [db/flush_job.cc:856] [default] [JOB 13] Flushing memtable with next log file: 28
Nov 24 09:42:35 compute-1 ceph-mon[80009]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763977355128912, "job": 13, "event": "flush_started", "num_memtables": 1, "num_entries": 4660, "num_deletes": 502, "total_data_size": 12833106, "memory_usage": 12990160, "flush_reason": "Manual Compaction"}
Nov 24 09:42:35 compute-1 ceph-mon[80009]: rocksdb: [db/flush_job.cc:885] [default] [JOB 13] Level-0 flush table #29: started
Nov 24 09:42:35 compute-1 sudo[188754]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rszyvzevokiebvrbveapzgcrokvpxyvy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977354.3437433-2286-53731916561259/AnsiballZ_copy.py'
Nov 24 09:42:35 compute-1 sudo[188754]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:42:35 compute-1 ceph-mon[80009]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763977355202860, "cf_name": "default", "job": 13, "event": "table_file_creation", "file_number": 29, "file_size": 8337447, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13340, "largest_seqno": 17994, "table_properties": {"data_size": 8319676, "index_size": 12025, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 4677, "raw_key_size": 36611, "raw_average_key_size": 19, "raw_value_size": 8283149, "raw_average_value_size": 4458, "num_data_blocks": 525, "num_entries": 1858, "num_filter_entries": 1858, "num_deletions": 502, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763976901, "oldest_key_time": 1763976901, "file_creation_time": 1763977355, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "299c38d0-06ca-4074-b462-97cee3c14bc3", "db_session_id": "IKBI0BILOO7CZC90TSBP", "orig_file_number": 29, "seqno_to_time_mapping": "N/A"}}
Nov 24 09:42:35 compute-1 ceph-mon[80009]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 13] Flush lasted 74164 microseconds, and 14811 cpu microseconds.
Nov 24 09:42:35 compute-1 ceph-mon[80009]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 24 09:42:35 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:42:35.203030) [db/flush_job.cc:967] [default] [JOB 13] Level-0 flush table #29: 8337447 bytes OK
Nov 24 09:42:35 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:42:35.203087) [db/memtable_list.cc:519] [default] Level-0 commit table #29 started
Nov 24 09:42:35 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:42:35.219838) [db/memtable_list.cc:722] [default] Level-0 commit table #29: memtable #1 done
Nov 24 09:42:35 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:42:35.219881) EVENT_LOG_v1 {"time_micros": 1763977355219873, "job": 13, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 24 09:42:35 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:42:35.219903) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 24 09:42:35 compute-1 ceph-mon[80009]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 13] Try to delete WAL files size 12812584, prev total WAL file size 12812584, number of live WAL files 2.
Nov 24 09:42:35 compute-1 ceph-mon[80009]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000025.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 09:42:35 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:42:35.222973) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031303034' seq:72057594037927935, type:22 .. '7061786F730031323536' seq:0, type:0; will stop at (end)
Nov 24 09:42:35 compute-1 ceph-mon[80009]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 14] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 24 09:42:35 compute-1 ceph-mon[80009]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 13 Base level 0, inputs: [29(8142KB)], [27(12MB)]
Nov 24 09:42:35 compute-1 ceph-mon[80009]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763977355223218, "job": 14, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [29], "files_L6": [27], "score": -1, "input_data_size": 21278409, "oldest_snapshot_seqno": -1}
Nov 24 09:42:35 compute-1 python3.9[188756]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763977354.3437433-2286-53731916561259/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:42:35 compute-1 sudo[188754]: pam_unix(sudo:session): session closed for user root
Nov 24 09:42:35 compute-1 ceph-mon[80009]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 14] Generated table #30: 5098 keys, 15470082 bytes, temperature: kUnknown
Nov 24 09:42:35 compute-1 ceph-mon[80009]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763977355432132, "cf_name": "default", "job": 14, "event": "table_file_creation", "file_number": 30, "file_size": 15470082, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 15431230, "index_size": 24982, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 12805, "raw_key_size": 127564, "raw_average_key_size": 25, "raw_value_size": 15334178, "raw_average_value_size": 3007, "num_data_blocks": 1048, "num_entries": 5098, "num_filter_entries": 5098, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763976422, "oldest_key_time": 0, "file_creation_time": 1763977355, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "299c38d0-06ca-4074-b462-97cee3c14bc3", "db_session_id": "IKBI0BILOO7CZC90TSBP", "orig_file_number": 30, "seqno_to_time_mapping": "N/A"}}
Nov 24 09:42:35 compute-1 ceph-mon[80009]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 24 09:42:35 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:42:35.432641) [db/compaction/compaction_job.cc:1663] [default] [JOB 14] Compacted 1@0 + 1@6 files to L6 => 15470082 bytes
Nov 24 09:42:35 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:42:35.468140) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 101.7 rd, 73.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(8.0, 12.3 +0.0 blob) out(14.8 +0.0 blob), read-write-amplify(4.4) write-amplify(1.9) OK, records in: 6120, records dropped: 1022 output_compression: NoCompression
Nov 24 09:42:35 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:42:35.468200) EVENT_LOG_v1 {"time_micros": 1763977355468175, "job": 14, "event": "compaction_finished", "compaction_time_micros": 209212, "compaction_time_cpu_micros": 33785, "output_level": 6, "num_output_files": 1, "total_output_size": 15470082, "num_input_records": 6120, "num_output_records": 5098, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 24 09:42:35 compute-1 ceph-mon[80009]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000029.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 09:42:35 compute-1 ceph-mon[80009]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763977355470268, "job": 14, "event": "table_file_deletion", "file_number": 29}
Nov 24 09:42:35 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:42:35 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:42:35 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:42:35.471 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:42:35 compute-1 ceph-mon[80009]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000027.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 09:42:35 compute-1 ceph-mon[80009]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763977355473315, "job": 14, "event": "table_file_deletion", "file_number": 27}
Nov 24 09:42:35 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:42:35.222843) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 09:42:35 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:42:35.473395) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 09:42:35 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:42:35.473429) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 09:42:35 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:42:35.473432) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 09:42:35 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:42:35.473434) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 09:42:35 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:42:35.473436) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 09:42:35 compute-1 sudo[188906]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zrtuenjqcfcvxaobvjmpzakfnipnnccx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977355.54179-2286-133074154356287/AnsiballZ_stat.py'
Nov 24 09:42:35 compute-1 sudo[188906]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:42:36 compute-1 python3.9[188908]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 09:42:36 compute-1 sudo[188906]: pam_unix(sudo:session): session closed for user root
Nov 24 09:42:36 compute-1 sudo[189030]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dufirelirzadajunmdzfptgzprfddjox ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977355.54179-2286-133074154356287/AnsiballZ_copy.py'
Nov 24 09:42:36 compute-1 sudo[189030]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:42:36 compute-1 python3.9[189032]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763977355.54179-2286-133074154356287/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:42:36 compute-1 sudo[189030]: pam_unix(sudo:session): session closed for user root
Nov 24 09:42:36 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:42:36 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:42:36 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:42:36.957 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:42:36 compute-1 sudo[189182]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sdrllguqeihyxxbsyuoayvtftzfjvwjf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977356.7342687-2286-247192707197602/AnsiballZ_stat.py'
Nov 24 09:42:37 compute-1 sudo[189182]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:42:37 compute-1 ceph-mon[80009]: pgmap v461: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 0 op/s
Nov 24 09:42:37 compute-1 python3.9[189184]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 09:42:37 compute-1 sudo[189182]: pam_unix(sudo:session): session closed for user root
Nov 24 09:42:37 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:42:37 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:42:37 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:42:37.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:42:37 compute-1 sudo[189305]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ykwyowuxhgcgzojftsgmgphhflropvhh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977356.7342687-2286-247192707197602/AnsiballZ_copy.py'
Nov 24 09:42:37 compute-1 sudo[189305]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:42:37 compute-1 python3.9[189307]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763977356.7342687-2286-247192707197602/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:42:37 compute-1 sudo[189305]: pam_unix(sudo:session): session closed for user root
Nov 24 09:42:38 compute-1 sudo[189458]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-czwpwrzmwfgujpgrnrcqnoyalcoezdlf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977357.904956-2286-180587987758880/AnsiballZ_stat.py'
Nov 24 09:42:38 compute-1 sudo[189458]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:42:38 compute-1 python3.9[189460]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 09:42:38 compute-1 sudo[189458]: pam_unix(sudo:session): session closed for user root
Nov 24 09:42:38 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-haproxy-nfs-cephfs-compute-1-rsdpvy[85975]: [WARNING] 327/094238 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 0 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Nov 24 09:42:38 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-haproxy-nfs-cephfs-compute-1-rsdpvy[85975]: [NOTICE] 327/094238 (4) : haproxy version is 2.3.17-d1c9119
Nov 24 09:42:38 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-haproxy-nfs-cephfs-compute-1-rsdpvy[85975]: [NOTICE] 327/094238 (4) : path to executable is /usr/local/sbin/haproxy
Nov 24 09:42:38 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-haproxy-nfs-cephfs-compute-1-rsdpvy[85975]: [ALERT] 327/094238 (4) : backend 'backend' has no server available!
Nov 24 09:42:38 compute-1 sudo[189581]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jllahnirbxacomekgpmewqabxxweouxo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977357.904956-2286-180587987758880/AnsiballZ_copy.py'
Nov 24 09:42:38 compute-1 sudo[189581]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:42:38 compute-1 python3.9[189583]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763977357.904956-2286-180587987758880/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:42:38 compute-1 sudo[189581]: pam_unix(sudo:session): session closed for user root
Nov 24 09:42:38 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:42:38 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:42:38 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:42:38.960 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:42:39 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:42:39 compute-1 ceph-mon[80009]: pgmap v462: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 852 B/s rd, 341 B/s wr, 1 op/s
Nov 24 09:42:39 compute-1 sudo[189733]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vpxavmyjwrlmnlepqhskeyxectwdhcdp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977358.9922838-2286-238558637813457/AnsiballZ_stat.py'
Nov 24 09:42:39 compute-1 sudo[189733]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:42:39 compute-1 python3.9[189735]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 09:42:39 compute-1 sudo[189733]: pam_unix(sudo:session): session closed for user root
Nov 24 09:42:39 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:42:39 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 09:42:39 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:42:39.477 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 09:42:39 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:42:39 : epoch 69242889 : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 09:42:39 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:42:39 : epoch 69242889 : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 09:42:39 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:42:39 : epoch 69242889 : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 24 09:42:39 compute-1 sudo[189856]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dvwbqqpjuniwteiiveqkqvvculohmhhn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977358.9922838-2286-238558637813457/AnsiballZ_copy.py'
Nov 24 09:42:39 compute-1 sudo[189856]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:42:39 compute-1 python3.9[189858]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763977358.9922838-2286-238558637813457/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:42:40 compute-1 sudo[189856]: pam_unix(sudo:session): session closed for user root
Nov 24 09:42:40 compute-1 sudo[190009]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wyvhmnvzickghiaazfogozwafwltyrdd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977360.1352117-2286-96924755206331/AnsiballZ_stat.py'
Nov 24 09:42:40 compute-1 sudo[190009]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:42:40 compute-1 python3.9[190011]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 09:42:40 compute-1 sudo[190009]: pam_unix(sudo:session): session closed for user root
Nov 24 09:42:40 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:42:40 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:42:40 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:42:40.962 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:42:41 compute-1 sudo[190132]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-irxgoxlgqgarxiqwnydlaqldozxwvjbt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977360.1352117-2286-96924755206331/AnsiballZ_copy.py'
Nov 24 09:42:41 compute-1 sudo[190132]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:42:41 compute-1 sudo[190135]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 09:42:41 compute-1 sudo[190135]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:42:41 compute-1 sudo[190135]: pam_unix(sudo:session): session closed for user root
Nov 24 09:42:41 compute-1 ceph-mon[80009]: pgmap v463: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 853 B/s rd, 341 B/s wr, 1 op/s
Nov 24 09:42:41 compute-1 python3.9[190134]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763977360.1352117-2286-96924755206331/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:42:41 compute-1 sudo[190132]: pam_unix(sudo:session): session closed for user root
Nov 24 09:42:41 compute-1 sudo[190160]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Nov 24 09:42:41 compute-1 sudo[190160]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:42:41 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:42:41 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:42:41 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:42:41.480 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:42:41 compute-1 sudo[190160]: pam_unix(sudo:session): session closed for user root
Nov 24 09:42:41 compute-1 sudo[190365]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-czkkdvocfikrmhuzkctgkybeptzthytx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977361.483262-2286-236081785429874/AnsiballZ_stat.py'
Nov 24 09:42:41 compute-1 sudo[190365]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:42:41 compute-1 python3.9[190367]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 09:42:41 compute-1 sudo[190365]: pam_unix(sudo:session): session closed for user root
Nov 24 09:42:42 compute-1 sudo[190489]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yapxxanshfcarpubjzhsupnzhcrlehcw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977361.483262-2286-236081785429874/AnsiballZ_copy.py'
Nov 24 09:42:42 compute-1 sudo[190489]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:42:42 compute-1 sudo[190492]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 09:42:42 compute-1 sudo[190492]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:42:42 compute-1 sudo[190492]: pam_unix(sudo:session): session closed for user root
Nov 24 09:42:42 compute-1 python3.9[190491]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763977361.483262-2286-236081785429874/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:42:42 compute-1 sudo[190489]: pam_unix(sudo:session): session closed for user root
Nov 24 09:42:42 compute-1 sudo[190666]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bqdespjqnnqakgdlxvkudprvbxbhdqve ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977362.6367795-2286-150128124622534/AnsiballZ_stat.py'
Nov 24 09:42:42 compute-1 sudo[190666]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:42:42 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:42:42 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:42:42 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:42:42.963 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:42:43 compute-1 python3.9[190668]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 09:42:43 compute-1 sudo[190666]: pam_unix(sudo:session): session closed for user root
Nov 24 09:42:43 compute-1 ceph-mon[80009]: pgmap v464: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 426 B/s wr, 2 op/s
Nov 24 09:42:43 compute-1 sudo[190789]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ilrzreaokktwuqxrargzuuucfrfenwet ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977362.6367795-2286-150128124622534/AnsiballZ_copy.py'
Nov 24 09:42:43 compute-1 sudo[190789]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:42:43 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:42:43 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:42:43 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:42:43.483 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:42:43 compute-1 python3.9[190791]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763977362.6367795-2286-150128124622534/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:42:43 compute-1 sudo[190789]: pam_unix(sudo:session): session closed for user root
Nov 24 09:42:44 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:42:43 : epoch 69242889 : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 09:42:44 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:42:43 : epoch 69242889 : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 09:42:44 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:42:43 : epoch 69242889 : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 09:42:44 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:42:44 : epoch 69242889 : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 24 09:42:44 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:42:44 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:42:44 : epoch 69242889 : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 09:42:44 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:42:44 : epoch 69242889 : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 09:42:44 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:42:44 : epoch 69242889 : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 09:42:44 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 24 09:42:44 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:42:44 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:42:44 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:42:44.965 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:42:44 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 24 09:42:45 compute-1 ceph-mon[80009]: pgmap v465: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.6 KiB/s rd, 426 B/s wr, 2 op/s
Nov 24 09:42:45 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:42:45 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:42:45 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 09:42:45 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:42:45 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:42:45 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 09:42:45 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:42:45.486 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 09:42:45 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 24 09:42:45 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 09:42:45 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Nov 24 09:42:45 compute-1 ceph-mon[80009]: log_channel(audit) log [INF] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 09:42:45 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Nov 24 09:42:45 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Nov 24 09:42:45 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Nov 24 09:42:45 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 09:42:45 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Nov 24 09:42:45 compute-1 ceph-mon[80009]: log_channel(audit) log [INF] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 09:42:45 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 24 09:42:45 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 09:42:45 compute-1 podman[190916]: 2025-11-24 09:42:45.858922859 +0000 UTC m=+0.077167113 container health_status c4a7fece2a8abbd773f184380ebf0443df5298e1c3e7a580495db5c46749acd2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 24 09:42:45 compute-1 python3.9[190955]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail
                                             ls -lRZ /run/libvirt | grep -E ':container_\S+_t'
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 09:42:46 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:42:46 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 09:42:46 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 09:42:46 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:42:46 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:42:46 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 09:42:46 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 09:42:46 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 09:42:46 compute-1 sudo[191122]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ossxvrdpgdinojwwqtpeinxfsxtpeayf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977366.3463593-2904-196085458775251/AnsiballZ_seboolean.py'
Nov 24 09:42:46 compute-1 sudo[191122]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:42:46 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:42:46 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:42:46 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:42:46.967 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:42:46 compute-1 python3.9[191124]: ansible-ansible.posix.seboolean Invoked with name=os_enable_vtpm persistent=True state=True ignore_selinux_state=False
Nov 24 09:42:47 compute-1 ceph-mon[80009]: pgmap v466: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.6 KiB/s rd, 426 B/s wr, 2 op/s
Nov 24 09:42:47 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:42:47 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:42:47 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:42:47.489 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:42:47 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:42:47 : epoch 69242889 : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=0
Nov 24 09:42:47 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:42:47 : epoch 69242889 : compute-1 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Nov 24 09:42:47 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:42:47 : epoch 69242889 : compute-1 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Nov 24 09:42:47 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:42:47 : epoch 69242889 : compute-1 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Nov 24 09:42:47 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:42:47 : epoch 69242889 : compute-1 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Nov 24 09:42:47 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:42:47 : epoch 69242889 : compute-1 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Nov 24 09:42:47 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:42:47 : epoch 69242889 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Nov 24 09:42:47 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:42:47 : epoch 69242889 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 24 09:42:47 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:42:47 : epoch 69242889 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 24 09:42:47 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:42:47 : epoch 69242889 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 24 09:42:47 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:42:47 : epoch 69242889 : compute-1 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Nov 24 09:42:47 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:42:47 : epoch 69242889 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 24 09:42:47 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:42:47 : epoch 69242889 : compute-1 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Nov 24 09:42:47 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:42:47 : epoch 69242889 : compute-1 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Nov 24 09:42:47 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:42:47 : epoch 69242889 : compute-1 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Nov 24 09:42:47 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:42:47 : epoch 69242889 : compute-1 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Nov 24 09:42:47 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:42:47 : epoch 69242889 : compute-1 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Nov 24 09:42:47 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:42:47 : epoch 69242889 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Nov 24 09:42:47 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:42:47 : epoch 69242889 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Nov 24 09:42:47 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:42:47 : epoch 69242889 : compute-1 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Nov 24 09:42:47 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:42:47 : epoch 69242889 : compute-1 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Nov 24 09:42:47 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:42:47 : epoch 69242889 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Nov 24 09:42:47 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:42:47 : epoch 69242889 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Nov 24 09:42:47 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:42:47 : epoch 69242889 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Nov 24 09:42:47 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:42:47 : epoch 69242889 : compute-1 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 09:42:47 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:42:47 : epoch 69242889 : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Nov 24 09:42:47 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:42:47 : epoch 69242889 : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Nov 24 09:42:47 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:42:47 : epoch 69242889 : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
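Net effect of the block above: ganesha.nfsd initializes NFS successfully, but every DBus registration fails because /run/dbus/system_bus_socket does not exist inside the container, so the dbus service thread exits immediately (the CRIT lines). A hedged check from the host that the socket really is absent; the name filter is an assumption about this deployment's container naming:
    # name=$(podman ps --format '{{.Names}}' | grep nfs-cephfs)
    # podman exec "$name" test -S /run/dbus/system_bus_socket || echo 'no DBus socket in container'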
Nov 24 09:42:48 compute-1 sudo[191122]: pam_unix(sudo:session): session closed for user root
Nov 24 09:42:48 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:42:48 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe40c000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:42:48 compute-1 ceph-mon[80009]: pgmap v467: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s rd, 767 B/s wr, 3 op/s
Nov 24 09:42:48 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:42:48 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:42:48 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:42:48.969 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:42:49 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:42:49 compute-1 sudo[191294]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-adjgrcvebvfmdepbyohohokwrlvqhrkj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977368.7721956-2928-110372971326059/AnsiballZ_copy.py'
Nov 24 09:42:49 compute-1 dbus-broker-launch[809]: avc:  op=load_policy lsm=selinux seqno=15 res=1
Nov 24 09:42:49 compute-1 sudo[191294]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:42:49 compute-1 python3.9[191296]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/servercert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:42:49 compute-1 sudo[191294]: pam_unix(sudo:session): session closed for user root
Nov 24 09:42:49 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:42:49 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:42:49 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:42:49.494 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:42:49 compute-1 sudo[191446]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yhdhyuovxoxsokmrgezhozerulhvylye ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977369.3993936-2928-221517085895587/AnsiballZ_copy.py'
Nov 24 09:42:49 compute-1 sudo[191446]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:42:49 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:42:49 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3f8001950 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:42:49 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:42:49 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3e8000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:42:49 compute-1 python3.9[191448]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/serverkey.pem group=root mode=0600 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:42:49 compute-1 sudo[191446]: pam_unix(sudo:session): session closed for user root
Nov 24 09:42:50 compute-1 sudo[191599]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hilvqziydkyhidylvysciudfkhiwjwjn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977369.9851968-2928-172036650307488/AnsiballZ_copy.py'
Nov 24 09:42:50 compute-1 sudo[191599]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:42:50 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:42:50 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3e0000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:42:50 compute-1 python3.9[191601]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/clientcert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:42:50 compute-1 sudo[191599]: pam_unix(sudo:session): session closed for user root
Nov 24 09:42:50 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 24 09:42:50 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 24 09:42:50 compute-1 ceph-mon[80009]: pgmap v468: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 426 B/s wr, 2 op/s
Nov 24 09:42:50 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:42:50 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:42:50 compute-1 sudo[191684]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 24 09:42:50 compute-1 sudo[191684]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:42:50 compute-1 sudo[191684]: pam_unix(sudo:session): session closed for user root
Nov 24 09:42:50 compute-1 sudo[191776]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uwhuxkrnuradaxtqwumsuvhhqgcuuzrf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977370.618246-2928-275114192530522/AnsiballZ_copy.py'
Nov 24 09:42:50 compute-1 sudo[191776]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:42:50 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:42:50 : epoch 69242889 : compute-1 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 09:42:50 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:42:50 : epoch 69242889 : compute-1 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 09:42:50 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:42:50 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:42:50 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:42:50.972 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:42:51 compute-1 python3.9[191778]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/clientkey.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:42:51 compute-1 sudo[191776]: pam_unix(sudo:session): session closed for user root
Nov 24 09:42:51 compute-1 sudo[191928]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mxovdcshdkdkwvvkefndsvusriaedyxz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977371.1631265-2928-237846654337958/AnsiballZ_copy.py'
Nov 24 09:42:51 compute-1 sudo[191928]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:42:51 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:42:51 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:42:51 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:42:51.497 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:42:51 compute-1 python3.9[191930]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/CA/cacert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:42:51 compute-1 sudo[191928]: pam_unix(sudo:session): session closed for user root
Nov 24 09:42:51 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:42:51 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3ec000fa0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:42:51 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-haproxy-nfs-cephfs-compute-1-rsdpvy[85975]: [WARNING] 327/094251 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 1 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Nov 24 09:42:51 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:42:51 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3f8001950 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:42:52 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:42:52 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3e80016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:42:52 compute-1 ceph-mon[80009]: pgmap v469: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 3.7 KiB/s rd, 1.1 KiB/s wr, 4 op/s
Nov 24 09:42:52 compute-1 sudo[192081]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xxordqmuuelikhnglellrpbzlylkvqwu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977372.6460605-3036-254160407339782/AnsiballZ_copy.py'
Nov 24 09:42:52 compute-1 sudo[192081]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:42:52 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:42:52 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 09:42:52 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:42:52.974 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 09:42:53 compute-1 python3.9[192083]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:42:53 compute-1 sudo[192081]: pam_unix(sudo:session): session closed for user root
Nov 24 09:42:53 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-haproxy-nfs-cephfs-compute-1-rsdpvy[85975]: [WARNING] 327/094253 (4) : Server backend/nfs.cephfs.2 is UP, reason: Layer4 check passed, check duration: 0ms. 2 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Nov 24 09:42:53 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:42:53 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:42:53 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:42:53.500 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:42:53 compute-1 sudo[192233]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rcspghczswfaztylcavyqkrqgwzruarv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977373.2534099-3036-274799300449578/AnsiballZ_copy.py'
Nov 24 09:42:53 compute-1 sudo[192233]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:42:53 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:42:53 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3e00016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:42:53 compute-1 python3.9[192235]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:42:53 compute-1 sudo[192233]: pam_unix(sudo:session): session closed for user root
Nov 24 09:42:53 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:42:53 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3e00016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:42:53 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:42:53 : epoch 69242889 : compute-1 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Nov 24 09:42:54 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:42:54 compute-1 sudo[192386]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vhkznlglfvrdroltniiovembdbrnxxct ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977373.84428-3036-178114913427075/AnsiballZ_copy.py'
Nov 24 09:42:54 compute-1 sudo[192386]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:42:54 compute-1 python3.9[192388]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:42:54 compute-1 sudo[192386]: pam_unix(sudo:session): session closed for user root
Nov 24 09:42:54 compute-1 podman[192389]: 2025-11-24 09:42:54.356209815 +0000 UTC m=+0.079709386 container health_status 6794d9d2f16f865643977633ea2bfec0506af6c4f15262c278b71a4c0754ea0b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
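As with ovn_controller earlier, this is a passing periodic healthcheck, here for ovn_metadata_agent; config_data shows the probe script mounted read-only at /openstack/healthcheck. A minimal manual run using only names from the entry:
    # podman exec ovn_metadata_agent /openstack/healthcheck; echo exit=$?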
Nov 24 09:42:54 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:42:54 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3f8001950 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:42:54 compute-1 sudo[192557]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bxfieuxsyynrpcfpjipgbhqhmptwynyl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977374.4716973-3036-246077151451517/AnsiballZ_copy.py'
Nov 24 09:42:54 compute-1 sudo[192557]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:42:54 compute-1 ceph-mon[80009]: pgmap v470: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.8 KiB/s rd, 1023 B/s wr, 3 op/s
Nov 24 09:42:54 compute-1 python3.9[192559]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:42:54 compute-1 sudo[192557]: pam_unix(sudo:session): session closed for user root
Nov 24 09:42:54 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:42:54 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:42:54 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:42:54.976 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:42:55 compute-1 sudo[192709]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-smcxtegllskcibjpxhnjarjmrdxaqdhh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977375.0684872-3036-245414689550280/AnsiballZ_copy.py'
Nov 24 09:42:55 compute-1 sudo[192709]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:42:55 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:42:55 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:42:55 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:42:55.503 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:42:55 compute-1 python3.9[192711]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/ca-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
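The copy tasks from 09:42:49 through 09:42:55 install one TLS key pair and CA certificate into both the libvirt and QEMU PKI trees, varying only destination, group, and mode. A manual equivalent for two representative copies, with paths, owners, and modes taken from the entries:
    # install -o root -g root -m 0644 /var/lib/openstack/certs/libvirt/default/tls.crt /etc/pki/libvirt/servercert.pem
    # install -o root -g qemu -m 0640 /var/lib/openstack/certs/libvirt/default/ca.crt /etc/pki/qemu/ca-cert.pem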
Nov 24 09:42:55 compute-1 sudo[192709]: pam_unix(sudo:session): session closed for user root
Nov 24 09:42:55 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:42:55 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3e80016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:42:55 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:42:55 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3ec001ac0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:42:56 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:42:56 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3e00016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:42:56 compute-1 sudo[192862]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vdffnpahqytnxsbhopgksuukyodflxoj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977376.277902-3144-26567453050995/AnsiballZ_systemd.py'
Nov 24 09:42:56 compute-1 sudo[192862]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:42:56 compute-1 ceph-mon[80009]: pgmap v471: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.8 KiB/s rd, 1023 B/s wr, 3 op/s
Nov 24 09:42:56 compute-1 python3.9[192864]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtlogd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 24 09:42:56 compute-1 systemd[1]: Reloading.
Nov 24 09:42:56 compute-1 systemd-rc-local-generator[192892]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 09:42:56 compute-1 systemd-sysv-generator[192895]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
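The generator warning above (repeated at each systemd reload below) means systemd synthesized a compatibility unit for the legacy /etc/rc.d/init.d/network initscript; the generated unit can be inspected with:
    # systemctl cat network.service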
Nov 24 09:42:56 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:42:56 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:42:56 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:42:56.977 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:42:57 compute-1 systemd[1]: Starting libvirt logging daemon socket...
Nov 24 09:42:57 compute-1 systemd[1]: Listening on libvirt logging daemon socket.
Nov 24 09:42:57 compute-1 systemd[1]: Starting libvirt logging daemon admin socket...
Nov 24 09:42:57 compute-1 systemd[1]: Listening on libvirt logging daemon admin socket.
Nov 24 09:42:57 compute-1 systemd[1]: Starting libvirt logging daemon...
Nov 24 09:42:57 compute-1 systemd[1]: Started libvirt logging daemon.
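The socket-then-service startup above is the normal activation order for libvirt's modular daemons; the same pattern repeats for virtnodedevd, virtproxyd, virtqemud, and virtsecretd below. The manual equivalent of the ansible systemd task (daemon_reload=True, state=restarted):
    # systemctl daemon-reload
    # systemctl restart virtlogd.service
    # systemctl is-active virtlogd.service virtlogd.socket virtlogd-admin.socket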
Nov 24 09:42:57 compute-1 sudo[192862]: pam_unix(sudo:session): session closed for user root
Nov 24 09:42:57 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:42:57 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:42:57 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:42:57.505 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:42:57 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:42:57 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3f8001950 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:42:57 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:42:57 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3e80016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:42:57 compute-1 sudo[193055]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-agbwkcuccceufqnzjjxrwhixhayygajb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977377.5164163-3144-235032625848482/AnsiballZ_systemd.py'
Nov 24 09:42:57 compute-1 sudo[193055]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:42:58 compute-1 python3.9[193057]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtnodedevd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 24 09:42:58 compute-1 systemd[1]: Reloading.
Nov 24 09:42:58 compute-1 systemd-rc-local-generator[193089]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 09:42:58 compute-1 systemd-sysv-generator[193092]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 09:42:58 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:42:58 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3ec0023e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:42:58 compute-1 systemd[1]: Starting libvirt nodedev daemon socket...
Nov 24 09:42:58 compute-1 systemd[1]: Listening on libvirt nodedev daemon socket.
Nov 24 09:42:58 compute-1 systemd[1]: Starting libvirt nodedev daemon admin socket...
Nov 24 09:42:58 compute-1 systemd[1]: Starting libvirt nodedev daemon read-only socket...
Nov 24 09:42:58 compute-1 systemd[1]: Listening on libvirt nodedev daemon admin socket.
Nov 24 09:42:58 compute-1 systemd[1]: Listening on libvirt nodedev daemon read-only socket.
Nov 24 09:42:58 compute-1 systemd[1]: Starting libvirt nodedev daemon...
Nov 24 09:42:58 compute-1 systemd[1]: Started libvirt nodedev daemon.
Nov 24 09:42:58 compute-1 sudo[193055]: pam_unix(sudo:session): session closed for user root
Nov 24 09:42:58 compute-1 ceph-mon[80009]: pgmap v472: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 3.3 KiB/s rd, 1.2 KiB/s wr, 4 op/s
Nov 24 09:42:58 compute-1 sudo[193272]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ojqcxikobvzcffuaoullcdcxypnpzhon ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977378.6452155-3144-270663465421228/AnsiballZ_systemd.py'
Nov 24 09:42:58 compute-1 sudo[193272]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:42:58 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:42:58 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:42:58 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:42:58.980 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:42:59 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:42:59 compute-1 python3.9[193274]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtproxyd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 24 09:42:59 compute-1 systemd[1]: Reloading.
Nov 24 09:42:59 compute-1 systemd-rc-local-generator[193300]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 09:42:59 compute-1 systemd-sysv-generator[193303]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 09:42:59 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:42:59 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:42:59 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:42:59.509 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:42:59 compute-1 systemd[1]: Starting SETroubleshoot daemon for processing new SELinux denial logs...
Nov 24 09:42:59 compute-1 systemd[1]: Starting libvirt proxy daemon admin socket...
Nov 24 09:42:59 compute-1 systemd[1]: Starting libvirt proxy daemon read-only socket...
Nov 24 09:42:59 compute-1 systemd[1]: Listening on libvirt proxy daemon admin socket.
Nov 24 09:42:59 compute-1 systemd[1]: Listening on libvirt proxy daemon read-only socket.
Nov 24 09:42:59 compute-1 systemd[1]: Starting libvirt proxy daemon...
Nov 24 09:42:59 compute-1 systemd[1]: Started libvirt proxy daemon.
Nov 24 09:42:59 compute-1 sudo[193272]: pam_unix(sudo:session): session closed for user root
Nov 24 09:42:59 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:42:59 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3e00016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:42:59 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:42:59 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3e00016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:42:59 compute-1 systemd[1]: Started SETroubleshoot daemon for processing new SELinux denial logs.
Nov 24 09:43:00 compute-1 systemd[1]: Created slice Slice /system/dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged.
Nov 24 09:43:00 compute-1 systemd[1]: Started dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service.
Nov 24 09:43:00 compute-1 sudo[193488]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lbreylutvmwfizqhtqmxdlerfromodhp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977379.790399-3144-171054312644429/AnsiballZ_systemd.py'
Nov 24 09:43:00 compute-1 sudo[193488]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:43:00 compute-1 python3.9[193494]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtqemud.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 24 09:43:00 compute-1 systemd[1]: Reloading.
Nov 24 09:43:00 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:43:00 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3e8002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:43:00 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 09:43:00 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:43:00 compute-1 systemd-rc-local-generator[193524]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 09:43:00 compute-1 systemd-sysv-generator[193527]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 09:43:00 compute-1 systemd[1]: Listening on libvirt locking daemon socket.
Nov 24 09:43:00 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-haproxy-nfs-cephfs-compute-1-rsdpvy[85975]: [WARNING] 327/094300 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Nov 24 09:43:00 compute-1 systemd[1]: Starting libvirt QEMU daemon socket...
Nov 24 09:43:00 compute-1 systemd[1]: Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Nov 24 09:43:00 compute-1 systemd[1]: Starting Virtual Machine and Container Registration Service...
Nov 24 09:43:00 compute-1 systemd[1]: Listening on libvirt QEMU daemon socket.
Nov 24 09:43:00 compute-1 systemd[1]: Starting libvirt QEMU daemon admin socket...
Nov 24 09:43:00 compute-1 systemd[1]: Starting libvirt QEMU daemon read-only socket...
Nov 24 09:43:00 compute-1 systemd[1]: Listening on libvirt QEMU daemon admin socket.
Nov 24 09:43:00 compute-1 systemd[1]: Listening on libvirt QEMU daemon read-only socket.
Nov 24 09:43:00 compute-1 systemd[1]: Started Virtual Machine and Container Registration Service.
Nov 24 09:43:00 compute-1 systemd[1]: Starting libvirt QEMU daemon...
Nov 24 09:43:00 compute-1 systemd[1]: Started libvirt QEMU daemon.
Nov 24 09:43:00 compute-1 sudo[193488]: pam_unix(sudo:session): session closed for user root
Nov 24 09:43:00 compute-1 ceph-mon[80009]: pgmap v473: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 852 B/s wr, 3 op/s
Nov 24 09:43:00 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:43:00 compute-1 setroubleshoot[193311]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability. For complete SELinux messages run: sealert -l 374d308f-c9cc-4cff-9bc3-7134ee063b95
Nov 24 09:43:00 compute-1 setroubleshoot[193311]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability.
                                                  
                                                  *****  Plugin dac_override (91.4 confidence) suggests   **********************
                                                  
                                                  If you want to help identify if domain needs this access or you have a file with the wrong permissions on your system
                                                  Then turn on full auditing to get path information about the offending file and generate the error again.
                                                  Do
                                                  
                                                  Turn on full auditing
                                                  # auditctl -w /etc/shadow -p w
                                                  Try to recreate AVC. Then execute
                                                  # ausearch -m avc -ts recent
                                                  If you see PATH record check ownership/permissions on file, and fix it,
                                                  otherwise report as a bugzilla.
                                                  
                                                  *****  Plugin catchall (9.59 confidence) suggests   **************************
                                                  
                                                  If you believe that virtlogd should have the dac_read_search capability by default.
                                                  Then you should report this as a bug.
                                                  You can generate a local policy module to allow this access.
                                                  Do
                                                  allow this access for now by executing:
                                                  # ausearch -c 'virtlogd' --raw | audit2allow -M my-virtlogd
                                                  # semodule -X 300 -i my-virtlogd.pp
                                                  
Nov 24 09:43:00 compute-1 setroubleshoot[193311]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability. For complete SELinux messages run: sealert -l 374d308f-c9cc-4cff-9bc3-7134ee063b95
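Consolidated, the dac_override remediation proposed above is a two-phase procedure: first audit with full path information to see which file virtlogd is actually touching, then, only if the access is judged legitimate, generate and install a local policy module. Commands exactly as given in the alert, run as root:
    # auditctl -w /etc/shadow -p w
    # ausearch -m avc -ts recent
    # ausearch -c 'virtlogd' --raw | audit2allow -M my-virtlogd
    # semodule -X 300 -i my-virtlogd.pp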
Nov 24 09:43:00 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:43:00 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:43:00 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:43:00.982 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:43:00 compute-1 setroubleshoot[193311]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability.
                                                  
                                                  *****  Plugin dac_override (91.4 confidence) suggests   **********************
                                                  
                                                  If you want to help identify if domain needs this access or you have a file with the wrong permissions on your system
                                                  Then turn on full auditing to get path information about the offending file and generate the error again.
                                                  Do
                                                  
                                                  Turn on full auditing
                                                  # auditctl -w /etc/shadow -p w
                                                  Try to recreate AVC. Then execute
                                                  # ausearch -m avc -ts recent
                                                  If you see PATH record check ownership/permissions on file, and fix it,
                                                  otherwise report as a bugzilla.
                                                  
                                                  *****  Plugin catchall (9.59 confidence) suggests   **************************
                                                  
                                                  If you believe that virtlogd should have the dac_read_search capability by default.
                                                  Then you should report this as a bug.
                                                  You can generate a local policy module to allow this access.
                                                  Do
                                                  allow this access for now by executing:
                                                  # ausearch -c 'virtlogd' --raw | audit2allow -M my-virtlogd
                                                  # semodule -X 300 -i my-virtlogd.pp
                                                  
Nov 24 09:43:01 compute-1 sudo[193711]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uyrrxexsjqvpepmesnaqxsnpcgaygupl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977380.9092147-3144-271874947937485/AnsiballZ_systemd.py'
Nov 24 09:43:01 compute-1 sudo[193711]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:43:01 compute-1 python3.9[193713]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtsecretd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 24 09:43:01 compute-1 systemd[1]: Reloading.
Nov 24 09:43:01 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:43:01 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:43:01 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:43:01.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:43:01 compute-1 systemd-rc-local-generator[193740]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 09:43:01 compute-1 systemd-sysv-generator[193744]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 09:43:01 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:43:01 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3ec0023e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:43:01 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:43:01 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3e00016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:43:01 compute-1 systemd[1]: Starting libvirt secret daemon socket...
Nov 24 09:43:01 compute-1 systemd[1]: Listening on libvirt secret daemon socket.
Nov 24 09:43:01 compute-1 systemd[1]: Starting libvirt secret daemon admin socket...
Nov 24 09:43:01 compute-1 systemd[1]: Starting libvirt secret daemon read-only socket...
Nov 24 09:43:01 compute-1 systemd[1]: Listening on libvirt secret daemon admin socket.
Nov 24 09:43:01 compute-1 systemd[1]: Listening on libvirt secret daemon read-only socket.
Nov 24 09:43:01 compute-1 systemd[1]: Starting libvirt secret daemon...
Nov 24 09:43:01 compute-1 systemd[1]: Started libvirt secret daemon.
Nov 24 09:43:01 compute-1 sudo[193711]: pam_unix(sudo:session): session closed for user root
Nov 24 09:43:02 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:43:02 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3e00016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:43:02 compute-1 sudo[193798]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 09:43:02 compute-1 sudo[193798]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:43:02 compute-1 sudo[193798]: pam_unix(sudo:session): session closed for user root
Nov 24 09:43:02 compute-1 ceph-mon[80009]: pgmap v474: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 853 B/s wr, 3 op/s
Nov 24 09:43:02 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:43:02 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:43:02 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:43:02.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:43:03 compute-1 sudo[193948]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-agjnyidifnjiestdzqybmokgrjozdwok ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977383.0930197-3255-172750452643013/AnsiballZ_file.py'
Nov 24 09:43:03 compute-1 sudo[193948]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:43:03 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:43:03 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:43:03 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:43:03.515 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:43:03 compute-1 python3.9[193950]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:43:03 compute-1 sudo[193948]: pam_unix(sudo:session): session closed for user root
Nov 24 09:43:03 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:43:03 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3e8002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:43:03 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:43:03 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3ec0023e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:43:04 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:43:04 compute-1 sudo[194101]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ymcpbwdpbecpbhjrpbpcpldkrzibdlal ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977383.9265654-3279-122896793656672/AnsiballZ_find.py'
Nov 24 09:43:04 compute-1 sudo[194101]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:43:04 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:43:04 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3e00016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:43:04 compute-1 python3.9[194103]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.conf'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Nov 24 09:43:04 compute-1 sudo[194101]: pam_unix(sudo:session): session closed for user root
Nov 24 09:43:04 compute-1 ceph-mon[80009]: pgmap v475: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 170 B/s wr, 0 op/s
Nov 24 09:43:04 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:43:04 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 09:43:04 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:43:04.986 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 09:43:05 compute-1 sudo[194253]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bllymkvitgyjrdnifghjpzhljqbbvxlt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977384.777482-3303-18739245543265/AnsiballZ_command.py'
Nov 24 09:43:05 compute-1 sudo[194253]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:43:05 compute-1 python3.9[194255]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail;
                                             echo ceph
                                             awk -F '=' '/fsid/ {print $2}' /var/lib/openstack/config/ceph/ceph.conf | xargs
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
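The pipeline in that command echoes the cluster name ("ceph") and pulls the fsid out of the freshly staged ceph.conf; xargs only strips the surrounding whitespace. Since ceph.conf is INI-style, a Python equivalent can use configparser (assuming fsid sits in the [global] section, which is standard but not shown in the log):

    import configparser

    cfg = configparser.ConfigParser()
    cfg.read("/var/lib/openstack/config/ceph/ceph.conf")
    fsid = cfg["global"]["fsid"].strip()  # the awk/xargs pair does the same trimming
    print("ceph", fsid)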
Nov 24 09:43:05 compute-1 sudo[194253]: pam_unix(sudo:session): session closed for user root
Nov 24 09:43:05 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:43:05 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:43:05 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:43:05.518 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:43:05 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:43:05 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3e00016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:43:05 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:43:05 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3e8002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:43:06 compute-1 python3.9[194410]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.keyring'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Nov 24 09:43:06 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:43:06 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3ec003730 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:43:06 compute-1 ceph-mon[80009]: pgmap v476: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 170 B/s wr, 0 op/s
Nov 24 09:43:06 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:43:06 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:43:06 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:43:06.987 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:43:07 compute-1 python3.9[194560]: ansible-ansible.legacy.stat Invoked with path=/tmp/secret.xml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 09:43:07 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:43:07 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:43:07 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:43:07.520 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:43:07 compute-1 python3.9[194681]: ansible-ansible.legacy.copy Invoked with dest=/tmp/secret.xml mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1763977386.6900094-3360-160950336240288/.source.xml follow=False _original_basename=secret.xml.j2 checksum=50e2d7af60e90224d932c14cb656694b42455a32 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:43:07 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:43:07 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3e00016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:43:07 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:43:07 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3e00016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:43:08 compute-1 sudo[194832]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uggxclhwzuhdwogsevwlnrukhmptlqxy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977388.0183308-3405-224780705489904/AnsiballZ_command.py'
Nov 24 09:43:08 compute-1 sudo[194832]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:43:08 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:43:08 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3e8003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:43:08 compute-1 python3.9[194834]: ansible-ansible.legacy.command Invoked with _raw_params=virsh secret-undefine 84a084c3-61a7-5de7-8207-1f88efa59a64
                                             virsh secret-define --file /tmp/secret.xml
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
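Those two virsh commands refresh the libvirt secret for the Ceph cluster: any stale definition under the fsid UUID is dropped, then the secret is re-defined from the template rendered to /tmp/secret.xml at 09:43:07 (the temp file is removed again at 09:43:09). A condensed sketch of the same sequence:

    import subprocess

    uuid = "84a084c3-61a7-5de7-8207-1f88efa59a64"
    # check=False: the undefine may fail harmlessly if the secret doesn't exist yet.
    subprocess.run(["virsh", "secret-undefine", uuid], check=False)
    subprocess.run(["virsh", "secret-define", "--file", "/tmp/secret.xml"], check=True)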
Nov 24 09:43:08 compute-1 polkitd[43353]: Registered Authentication Agent for unix-process:194836:333203 (system bus name :1.1861 [pkttyagent --process 194836 --notify-fd 4 --fallback], object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8)
Nov 24 09:43:08 compute-1 polkitd[43353]: Unregistered Authentication Agent for unix-process:194836:333203 (system bus name :1.1861, object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8) (disconnected from bus)
Nov 24 09:43:08 compute-1 polkitd[43353]: Registered Authentication Agent for unix-process:194835:333203 (system bus name :1.1862 [pkttyagent --process 194835 --notify-fd 4 --fallback], object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8)
Nov 24 09:43:08 compute-1 polkitd[43353]: Unregistered Authentication Agent for unix-process:194835:333203 (system bus name :1.1862, object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8) (disconnected from bus)
Nov 24 09:43:08 compute-1 sudo[194832]: pam_unix(sudo:session): session closed for user root
Nov 24 09:43:08 compute-1 ceph-mon[80009]: pgmap v477: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 170 B/s wr, 0 op/s
Nov 24 09:43:08 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:43:08 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:43:08 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:43:08.989 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:43:09 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:43:09 compute-1 python3.9[194996]: ansible-ansible.builtin.file Invoked with path=/tmp/secret.xml state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:43:09 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:43:09 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:43:09 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:43:09.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:43:09 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:43:09 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3ec003730 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:43:09 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:43:09 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3e00016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:43:09 compute-1 sudo[195147]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vnbgqsifmpyerrwdhadpfvadsetsyscf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977389.7184846-3453-159116760654046/AnsiballZ_command.py'
Nov 24 09:43:09 compute-1 sudo[195147]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:43:10 compute-1 sudo[195147]: pam_unix(sudo:session): session closed for user root
Nov 24 09:43:10 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:43:10 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3f8002e40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:43:10 compute-1 ceph-mon[80009]: pgmap v478: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 B/s wr, 0 op/s
Nov 24 09:43:10 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:43:10 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:43:10 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:43:10.991 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:43:11 compute-1 systemd[1]: dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service: Deactivated successfully.
Nov 24 09:43:11 compute-1 systemd[1]: setroubleshootd.service: Deactivated successfully.
Nov 24 09:43:11 compute-1 sudo[195300]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rpwhhemrvnueipxpjvdegxklylejjlla ; FSID=84a084c3-61a7-5de7-8207-1f88efa59a64 KEY=AQBLJCRpAAAAABAAXAzKB5itq82KD4bRedT2Ig== /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977390.8154604-3477-122112979139199/AnsiballZ_command.py'
Nov 24 09:43:11 compute-1 sudo[195300]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:43:11 compute-1 polkitd[43353]: Registered Authentication Agent for unix-process:195303:333487 (system bus name :1.1865 [pkttyagent --process 195303 --notify-fd 4 --fallback], object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8)
Nov 24 09:43:11 compute-1 polkitd[43353]: Unregistered Authentication Agent for unix-process:195303:333487 (system bus name :1.1865, object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8) (disconnected from bus)
Nov 24 09:43:11 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:43:11 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:43:11 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:43:11.525 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:43:11 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:43:11 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3f8002e40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:43:11 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:43:11 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3ec003730 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:43:12 compute-1 sudo[195300]: pam_unix(sudo:session): session closed for user root
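The sudo invocation that just closed (opened at 09:43:11 with FSID and KEY exported in its environment) wraps another AnsiballZ command module. The log does not show the wrapped command, but passing the CephX key via the environment rather than argv is the usual pattern for setting the secret's value, presumably something like:

    import os, subprocess

    # Assumed continuation of the secret workflow; the actual command is not
    # visible in the log, only the FSID/KEY environment on the sudo line.
    fsid = os.environ["FSID"]
    key = os.environ["KEY"]   # kept out of argv so it never shows in process listings
    subprocess.run(["virsh", "secret-set-value", "--secret", fsid, "--base64", key],
                   check=True)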
Nov 24 09:43:12 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:43:12 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3e00016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:43:12 compute-1 sudo[195459]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vbmfbcijkgsqvesfuaqumswzgxckkoac ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977392.523601-3501-5041754942049/AnsiballZ_copy.py'
Nov 24 09:43:12 compute-1 sudo[195459]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:43:12 compute-1 ceph-mon[80009]: pgmap v479: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Nov 24 09:43:12 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:43:12 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:43:12 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:43:12.993 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:43:12 compute-1 python3.9[195461]: ansible-ansible.legacy.copy Invoked with dest=/etc/ceph/ceph.conf group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/config/ceph/ceph.conf backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
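With the secret in place, the staged ceph.conf is promoted to /etc/ceph/ceph.conf; remote_src=True means both paths are on compute-1, so the task reduces to a local copy plus ownership/mode enforcement, roughly:

    import os, shutil

    src = "/var/lib/openstack/config/ceph/ceph.conf"
    dst = "/etc/ceph/ceph.conf"
    shutil.copy2(src, dst)              # contents + timestamps
    shutil.chown(dst, "root", "root")   # owner=root group=root
    os.chmod(dst, 0o644)                # mode=0644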
Nov 24 09:43:13 compute-1 sudo[195459]: pam_unix(sudo:session): session closed for user root
Nov 24 09:43:13 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:43:13 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:43:13 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:43:13.528 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:43:13 compute-1 sudo[195611]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gafkkxgelusfpirmptcrhgtapzzybwrn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977393.3419552-3525-228444245890567/AnsiballZ_stat.py'
Nov 24 09:43:13 compute-1 sudo[195611]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:43:13 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:43:13 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3e8003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:43:13 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:43:13 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3f8003b50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:43:13 compute-1 python3.9[195613]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/libvirt.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 09:43:13 compute-1 sudo[195611]: pam_unix(sudo:session): session closed for user root
Nov 24 09:43:14 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:43:14 compute-1 sudo[195735]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-utxdypbmcvrdkcarhzobtarfclqywxbt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977393.3419552-3525-228444245890567/AnsiballZ_copy.py'
Nov 24 09:43:14 compute-1 sudo[195735]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:43:14 compute-1 python3.9[195737]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/libvirt.yaml mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1763977393.3419552-3525-228444245890567/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=5ca83b1310a74c5e48c4c3d4640e1cb8fdac1061 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:43:14 compute-1 sudo[195735]: pam_unix(sudo:session): session closed for user root
Nov 24 09:43:14 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:43:14 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3f8003b50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:43:14 compute-1 ceph-mon[80009]: pgmap v480: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:43:14 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:43:14 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:43:14 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:43:14.995 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:43:15 compute-1 sudo[195887]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-syhbitzztxgtvgpjkxvnkooniokzcxsl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977394.7935176-3573-160878360748797/AnsiballZ_file.py'
Nov 24 09:43:15 compute-1 sudo[195887]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:43:15 compute-1 python3.9[195889]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:43:15 compute-1 sudo[195887]: pam_unix(sudo:session): session closed for user root
Nov 24 09:43:15 compute-1 auditd[703]: Audit daemon rotating log files
Nov 24 09:43:15 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 09:43:15 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:43:15 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:43:15 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:43:15 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:43:15.531 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:43:15 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:43:15 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3e0003700 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:43:15 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:43:15 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3e8003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:43:15 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
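The "osd blocklist ls" dispatch above originates from the active mgr (mgr.compute-0.mauvni); several mgr modules poll the blocklist periodically, so these audit entries are routine rather than a sign of evicted clients. The CLI equivalent of the dispatched command:

    import json, subprocess

    out = subprocess.run(["ceph", "osd", "blocklist", "ls", "--format", "json"],
                         capture_output=True, text=True, check=True).stdout
    print(json.loads(out))  # empty list on a healthy, idle cluster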
Nov 24 09:43:16 compute-1 sudo[196055]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-asknsodkbtzflhtvzjbpqotachkwqlzg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977395.6102972-3597-265817749104205/AnsiballZ_stat.py'
Nov 24 09:43:16 compute-1 sudo[196055]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:43:16 compute-1 podman[196014]: 2025-11-24 09:43:16.169953926 +0000 UTC m=+0.076068875 container health_status c4a7fece2a8abbd773f184380ebf0443df5298e1c3e7a580495db5c46749acd2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
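This podman event is a periodic container healthcheck for ovn_controller: per the embedded config_data, /var/lib/openstack/healthchecks/ovn_controller is mounted into the container and /openstack/healthcheck is executed as the test, here returning healthy with a zero failing streak. Triggering the same check by hand would look like:

    import subprocess

    # Manual run of the configured healthcheck; exit code 0 means healthy.
    subprocess.run(["podman", "healthcheck", "run", "ovn_controller"], check=True)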
Nov 24 09:43:16 compute-1 python3.9[196062]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 09:43:16 compute-1 sudo[196055]: pam_unix(sudo:session): session closed for user root
Nov 24 09:43:16 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:43:16 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3ec003730 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:43:16 compute-1 sudo[196144]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nzifqfebmjkkcbykdblhzyudbrewtkhs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977395.6102972-3597-265817749104205/AnsiballZ_file.py'
Nov 24 09:43:16 compute-1 sudo[196144]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:43:16 compute-1 python3.9[196146]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:43:16 compute-1 sudo[196144]: pam_unix(sudo:session): session closed for user root
Nov 24 09:43:17 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:43:17 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:43:17 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:43:16.998 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:43:17 compute-1 ceph-mon[80009]: pgmap v481: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:43:17 compute-1 sudo[196296]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sjvrqbyzwkzvogisgsgrgjnsmutfjiao ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977397.15411-3633-99465437577210/AnsiballZ_stat.py'
Nov 24 09:43:17 compute-1 sudo[196296]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:43:17 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:43:17 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:43:17 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:43:17.534 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:43:17 compute-1 python3.9[196298]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 09:43:17 compute-1 sudo[196296]: pam_unix(sudo:session): session closed for user root
Nov 24 09:43:17 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:43:17 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3f8003b50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:43:17 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:43:17 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3e0003700 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:43:17 compute-1 sudo[196374]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zrxpvrfnjtugniwxoembbbvqydknkuex ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977397.15411-3633-99465437577210/AnsiballZ_file.py'
Nov 24 09:43:17 compute-1 sudo[196374]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:43:18 compute-1 python3.9[196377]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.9hfc3gfd recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:43:18 compute-1 sudo[196374]: pam_unix(sudo:session): session closed for user root
Nov 24 09:43:18 compute-1 ceph-mon[80009]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #31. Immutable memtables: 0.
Nov 24 09:43:18 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:43:18.370814) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 24 09:43:18 compute-1 ceph-mon[80009]: rocksdb: [db/flush_job.cc:856] [default] [JOB 15] Flushing memtable with next log file: 31
Nov 24 09:43:18 compute-1 ceph-mon[80009]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763977398370903, "job": 15, "event": "flush_started", "num_memtables": 1, "num_entries": 682, "num_deletes": 252, "total_data_size": 1399932, "memory_usage": 1419952, "flush_reason": "Manual Compaction"}
Nov 24 09:43:18 compute-1 ceph-mon[80009]: rocksdb: [db/flush_job.cc:885] [default] [JOB 15] Level-0 flush table #32: started
Nov 24 09:43:18 compute-1 ceph-mon[80009]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763977398379776, "cf_name": "default", "job": 15, "event": "table_file_creation", "file_number": 32, "file_size": 646954, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 17999, "largest_seqno": 18676, "table_properties": {"data_size": 643929, "index_size": 933, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1029, "raw_key_size": 7731, "raw_average_key_size": 19, "raw_value_size": 637731, "raw_average_value_size": 1647, "num_data_blocks": 41, "num_entries": 387, "num_filter_entries": 387, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763977356, "oldest_key_time": 1763977356, "file_creation_time": 1763977398, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "299c38d0-06ca-4074-b462-97cee3c14bc3", "db_session_id": "IKBI0BILOO7CZC90TSBP", "orig_file_number": 32, "seqno_to_time_mapping": "N/A"}}
Nov 24 09:43:18 compute-1 ceph-mon[80009]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 15] Flush lasted 9037 microseconds, and 5007 cpu microseconds.
Nov 24 09:43:18 compute-1 ceph-mon[80009]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 24 09:43:18 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:43:18.379850) [db/flush_job.cc:967] [default] [JOB 15] Level-0 flush table #32: 646954 bytes OK
Nov 24 09:43:18 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:43:18.379873) [db/memtable_list.cc:519] [default] Level-0 commit table #32 started
Nov 24 09:43:18 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:43:18.381200) [db/memtable_list.cc:722] [default] Level-0 commit table #32: memtable #1 done
Nov 24 09:43:18 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:43:18.381213) EVENT_LOG_v1 {"time_micros": 1763977398381209, "job": 15, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 24 09:43:18 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:43:18.381227) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 24 09:43:18 compute-1 ceph-mon[80009]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 15] Try to delete WAL files size 1396202, prev total WAL file size 1396202, number of live WAL files 2.
Nov 24 09:43:18 compute-1 ceph-mon[80009]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000028.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 09:43:18 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:43:18.381783) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D67727374617400323532' seq:72057594037927935, type:22 .. '6D67727374617400353035' seq:0, type:0; will stop at (end)
Nov 24 09:43:18 compute-1 ceph-mon[80009]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 16] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 24 09:43:18 compute-1 ceph-mon[80009]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 15 Base level 0, inputs: [32(631KB)], [30(14MB)]
Nov 24 09:43:18 compute-1 ceph-mon[80009]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763977398381832, "job": 16, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [32], "files_L6": [30], "score": -1, "input_data_size": 16117036, "oldest_snapshot_seqno": -1}
Nov 24 09:43:18 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:43:18 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3ec003730 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:43:18 compute-1 ceph-mon[80009]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 16] Generated table #33: 4984 keys, 12264381 bytes, temperature: kUnknown
Nov 24 09:43:18 compute-1 ceph-mon[80009]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763977398450012, "cf_name": "default", "job": 16, "event": "table_file_creation", "file_number": 33, "file_size": 12264381, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12230374, "index_size": 20457, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 12485, "raw_key_size": 125569, "raw_average_key_size": 25, "raw_value_size": 12139400, "raw_average_value_size": 2435, "num_data_blocks": 850, "num_entries": 4984, "num_filter_entries": 4984, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763976422, "oldest_key_time": 0, "file_creation_time": 1763977398, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "299c38d0-06ca-4074-b462-97cee3c14bc3", "db_session_id": "IKBI0BILOO7CZC90TSBP", "orig_file_number": 33, "seqno_to_time_mapping": "N/A"}}
Nov 24 09:43:18 compute-1 ceph-mon[80009]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 24 09:43:18 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:43:18.450295) [db/compaction/compaction_job.cc:1663] [default] [JOB 16] Compacted 1@0 + 1@6 files to L6 => 12264381 bytes
Nov 24 09:43:18 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:43:18.451488) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 236.0 rd, 179.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.6, 14.8 +0.0 blob) out(11.7 +0.0 blob), read-write-amplify(43.9) write-amplify(19.0) OK, records in: 5485, records dropped: 501 output_compression: NoCompression
Nov 24 09:43:18 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:43:18.451513) EVENT_LOG_v1 {"time_micros": 1763977398451502, "job": 16, "event": "compaction_finished", "compaction_time_micros": 68287, "compaction_time_cpu_micros": 45190, "output_level": 6, "num_output_files": 1, "total_output_size": 12264381, "num_input_records": 5485, "num_output_records": 4984, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
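The amplification figures in the JOB 16 summary follow directly from the byte counts logged earlier in this flush/compaction cycle: the flushed L0 table #32 is 646,954 bytes, the total compaction input is 16,117,036 bytes (so the pre-existing L6 table #30 contributes the remainder), and the new L6 table #33 is 12,264,381 bytes:

    l0_in = 646_954                 # flushed L0 table #32
    l6_in = 16_117_036 - l0_in      # existing L6 table #30
    out   = 12_264_381              # compacted L6 table #33

    print(round(out / l0_in, 1))                    # 19.0 -> write-amplify
    print(round((l0_in + l6_in + out) / l0_in, 1))  # 43.9 -> read-write-amplify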
Nov 24 09:43:18 compute-1 ceph-mon[80009]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000032.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 09:43:18 compute-1 ceph-mon[80009]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763977398452211, "job": 16, "event": "table_file_deletion", "file_number": 32}
Nov 24 09:43:18 compute-1 ceph-mon[80009]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000030.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 09:43:18 compute-1 ceph-mon[80009]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763977398456630, "job": 16, "event": "table_file_deletion", "file_number": 30}
Nov 24 09:43:18 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:43:18.381679) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 09:43:18 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:43:18.456732) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 09:43:18 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:43:18.456737) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 09:43:18 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:43:18.456739) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 09:43:18 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:43:18.456742) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 09:43:18 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:43:18.456744) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 09:43:18 compute-1 sudo[196527]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hlsbqurlzojwpujdkvubqkwtfryuwcaa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977398.5092914-3669-268300252909614/AnsiballZ_stat.py'
Nov 24 09:43:18 compute-1 sudo[196527]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:43:18 compute-1 python3.9[196529]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 09:43:18 compute-1 sudo[196527]: pam_unix(sudo:session): session closed for user root
Nov 24 09:43:19 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:43:19 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:43:19 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:43:18.999 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:43:19 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:43:19 compute-1 ceph-mon[80009]: pgmap v482: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Nov 24 09:43:19 compute-1 sudo[196605]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-esbdhsndghutpfktugpyolgjqqsckumr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977398.5092914-3669-268300252909614/AnsiballZ_file.py'
Nov 24 09:43:19 compute-1 sudo[196605]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:43:19 compute-1 python3.9[196607]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:43:19 compute-1 sudo[196605]: pam_unix(sudo:session): session closed for user root
Nov 24 09:43:19 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:43:19 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:43:19 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:43:19.536 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:43:19 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:43:19 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3ec003730 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:43:19 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:43:19 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3f8003b50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:43:20 compute-1 sudo[196761]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xykfbqhzvqhihdgaeiduqpvazxnpojyd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977399.781187-3708-252047498006572/AnsiballZ_command.py'
Nov 24 09:43:20 compute-1 sudo[196761]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:43:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:43:20.042 142336 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 09:43:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:43:20.043 142336 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 09:43:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:43:20.043 142336 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 09:43:20 compute-1 python3.9[196763]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
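"nft -j list ruleset" dumps the current ruleset as JSON, which the following firewall tasks can inspect programmatically instead of scraping text output. Consuming it looks like:

    import json, subprocess

    out = subprocess.run(["nft", "-j", "list", "ruleset"],
                         capture_output=True, text=True, check=True).stdout
    objects = json.loads(out)["nftables"]  # mix of metainfo/table/chain/rule objects
    tables = [o["table"]["name"] for o in objects if "table" in o]
    print(tables)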
Nov 24 09:43:20 compute-1 sudo[196761]: pam_unix(sudo:session): session closed for user root
Nov 24 09:43:20 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:43:20 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3e0003700 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:43:21 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:43:21 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:43:21 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:43:21.000 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:43:21 compute-1 ceph-mon[80009]: pgmap v483: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:43:21 compute-1 sudo[196914]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fxahjzcfnbiyigyncizzrdlefwhkrnxw ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1763977400.6718893-3732-2320404993224/AnsiballZ_edpm_nftables_from_files.py'
Nov 24 09:43:21 compute-1 sudo[196914]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:43:21 compute-1 python3[196916]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
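edpm_nftables_from_files gathers the rule snippets written to /var/lib/edpm-config/firewall in the preceding tasks (libvirt.yaml, edpm-nftables-base.yaml, edpm-nftables-user-rules.yaml). The module's real logic lives in edpm-ansible; a rough sketch of the visible input shape, assuming each file holds a YAML list of rule entries (PyYAML required):

    import pathlib, yaml

    rules = []
    for f in sorted(pathlib.Path("/var/lib/edpm-config/firewall").glob("*.yaml")):
        rules.extend(yaml.safe_load(f.read_text()) or [])
    print(f"{len(rules)} rule entries loaded")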
Nov 24 09:43:21 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:43:21 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:43:21 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:43:21.540 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:43:21 compute-1 sudo[196914]: pam_unix(sudo:session): session closed for user root
Nov 24 09:43:21 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:43:21 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe40c000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:43:21 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:43:21 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3e8003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:43:22 compute-1 sudo[197067]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nhrdqexqrhvmggbtvypcrtoxdwvpmbxc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977401.899834-3756-19313840641904/AnsiballZ_stat.py'
Nov 24 09:43:22 compute-1 sudo[197067]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:43:22 compute-1 python3.9[197069]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 09:43:22 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:43:22 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3e0003700 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:43:22 compute-1 sudo[197067]: pam_unix(sudo:session): session closed for user root
Nov 24 09:43:22 compute-1 sudo[197145]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-psnkolijxpvrdfbzynaioeadxkvinauk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977401.899834-3756-19313840641904/AnsiballZ_file.py'
Nov 24 09:43:22 compute-1 sudo[197145]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:43:22 compute-1 sudo[197148]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 09:43:22 compute-1 sudo[197148]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:43:22 compute-1 sudo[197148]: pam_unix(sudo:session): session closed for user root
Nov 24 09:43:22 compute-1 python3.9[197147]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:43:22 compute-1 sudo[197145]: pam_unix(sudo:session): session closed for user root
Nov 24 09:43:23 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:43:23 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:43:23 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:43:23.002 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:43:23 compute-1 ceph-mon[80009]: pgmap v484: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Nov 24 09:43:23 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:43:23 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:43:23 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:43:23.542 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:43:23 compute-1 sudo[197322]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yvjgrqumnttyydpqevxkwllkflxzesen ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977403.2598114-3792-4615761747196/AnsiballZ_stat.py'
Nov 24 09:43:23 compute-1 sudo[197322]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:43:23 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:43:23 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3f8003b50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:43:23 compute-1 python3.9[197324]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 09:43:23 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:43:23 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe40c001d70 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:43:23 compute-1 sudo[197322]: pam_unix(sudo:session): session closed for user root
Nov 24 09:43:23 compute-1 sudo[197401]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-abwkudcivyygygybghoefiualfnmjsdu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977403.2598114-3792-4615761747196/AnsiballZ_file.py'
Nov 24 09:43:23 compute-1 sudo[197401]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:43:24 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:43:24 compute-1 python3.9[197403]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-update-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-update-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:43:24 compute-1 sudo[197401]: pam_unix(sudo:session): session closed for user root
Nov 24 09:43:24 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:43:24 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3e8003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:43:24 compute-1 sudo[197568]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yzgjhmusklbjtwdougjozxhuuhahnuqe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977404.6116242-3828-152622017710613/AnsiballZ_stat.py'
Nov 24 09:43:24 compute-1 sudo[197568]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:43:24 compute-1 podman[197527]: 2025-11-24 09:43:24.921457265 +0000 UTC m=+0.056360472 container health_status 6794d9d2f16f865643977633ea2bfec0506af6c4f15262c278b71a4c0754ea0b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Nov 24 09:43:25 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:43:25 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:43:25 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:43:25.004 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:43:25 compute-1 ceph-mon[80009]: pgmap v485: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:43:25 compute-1 python3.9[197574]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 09:43:25 compute-1 sudo[197568]: pam_unix(sudo:session): session closed for user root
Nov 24 09:43:25 compute-1 sudo[197651]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bhaovipsqllddbanvdunuutpgrspotzv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977404.6116242-3828-152622017710613/AnsiballZ_file.py'
Nov 24 09:43:25 compute-1 sudo[197651]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:43:25 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:43:25 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:43:25 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:43:25.546 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:43:25 compute-1 python3.9[197653]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:43:25 compute-1 sudo[197651]: pam_unix(sudo:session): session closed for user root
Nov 24 09:43:25 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:43:25 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3e0003700 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:43:25 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:43:25 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3f8003b50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:43:26 compute-1 sudo[197804]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jmcixmvcgejqovfqefvpajyinwgyzwmg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977405.9989085-3864-259924663321172/AnsiballZ_stat.py'
Nov 24 09:43:26 compute-1 sudo[197804]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:43:26 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:43:26 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe40c001d70 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:43:26 compute-1 python3.9[197806]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 09:43:26 compute-1 sudo[197804]: pam_unix(sudo:session): session closed for user root
Nov 24 09:43:26 compute-1 sudo[197882]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cpibeqqrpsyzmpgzdspzhmzafcnjywrs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977405.9989085-3864-259924663321172/AnsiballZ_file.py'
Nov 24 09:43:26 compute-1 sudo[197882]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:43:26 compute-1 python3.9[197884]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:43:26 compute-1 sudo[197882]: pam_unix(sudo:session): session closed for user root
Nov 24 09:43:27 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:43:27 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:43:27 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:43:27.006 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:43:27 compute-1 ceph-mon[80009]: pgmap v486: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:43:27 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:43:27 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:43:27 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:43:27.549 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:43:27 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:43:27 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3e8003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:43:27 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:43:27 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3e0003700 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:43:27 compute-1 sudo[198034]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fxzdfpxmaojbhapzyyioaksfishgkjqk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977407.3683364-3900-240333734578460/AnsiballZ_stat.py'
Nov 24 09:43:27 compute-1 sudo[198034]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:43:27 compute-1 python3.9[198036]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 09:43:28 compute-1 sudo[198034]: pam_unix(sudo:session): session closed for user root
Nov 24 09:43:28 compute-1 sudo[198160]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gdrcutfhyllhsonrcxjhxhfwbpgnitdu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977407.3683364-3900-240333734578460/AnsiballZ_copy.py'
Nov 24 09:43:28 compute-1 sudo[198160]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:43:28 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:43:28 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3f8003b50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:43:28 compute-1 python3.9[198162]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763977407.3683364-3900-240333734578460/.source.nft follow=False _original_basename=ruleset.j2 checksum=ac3ce8ce2d33fa5fe0a79b0c811c97734ce43fa5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:43:28 compute-1 sudo[198160]: pam_unix(sudo:session): session closed for user root
Nov 24 09:43:29 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:43:29 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:43:29 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:43:29.008 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:43:29 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:43:29 compute-1 ceph-mon[80009]: pgmap v487: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Nov 24 09:43:29 compute-1 sudo[198312]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bpvqxtkucbfrplrgividfeghpuaagncp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977408.9238057-3945-221733837204670/AnsiballZ_file.py'
Nov 24 09:43:29 compute-1 sudo[198312]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:43:29 compute-1 python3.9[198314]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:43:29 compute-1 sudo[198312]: pam_unix(sudo:session): session closed for user root
Nov 24 09:43:29 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:43:29 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:43:29 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:43:29.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:43:29 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:43:29 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe40c001d70 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:43:29 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:43:29 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe40c001d70 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:43:29 compute-1 sudo[198465]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oaahfnyjwckvvchorknvsjtclglfkkvj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977409.686705-3969-221968014100778/AnsiballZ_command.py'
Nov 24 09:43:29 compute-1 sudo[198465]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:43:30 compute-1 python3.9[198467]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 09:43:30 compute-1 sudo[198465]: pam_unix(sudo:session): session closed for user root
Nov 24 09:43:30 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 09:43:30 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:43:30 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3e0003700 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:43:30 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:43:31 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:43:31 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:43:31 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:43:31.011 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:43:31 compute-1 sudo[198620]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-osyweqmigsdormlyfqwghjlgwxqxqzos ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977410.656187-3993-95357982572570/AnsiballZ_blockinfile.py'
Nov 24 09:43:31 compute-1 sudo[198620]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:43:31 compute-1 ceph-mon[80009]: pgmap v488: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:43:31 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:43:31 compute-1 python3.9[198622]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                             include "/etc/nftables/edpm-chains.nft"
                                             include "/etc/nftables/edpm-rules.nft"
                                             include "/etc/nftables/edpm-jumps.nft"
                                              path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:43:31 compute-1 sudo[198620]: pam_unix(sudo:session): session closed for user root
Nov 24 09:43:31 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:43:31 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:43:31 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:43:31.556 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:43:31 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:43:31 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3f8003b50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:43:31 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:43:31 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3e0003700 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:43:31 compute-1 sudo[198773]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lmjfhcfeajjevtqjkuxnvoqmkgdlueqj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977411.698531-4020-112506407010698/AnsiballZ_command.py'
Nov 24 09:43:31 compute-1 sudo[198773]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:43:32 compute-1 python3.9[198775]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 09:43:32 compute-1 sudo[198773]: pam_unix(sudo:session): session closed for user root
Nov 24 09:43:32 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:43:32 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3e8003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:43:32 compute-1 sudo[198926]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-koywsccpgvgkyqjiwoqvrgidtkuyaset ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977412.4725273-4044-173949171337892/AnsiballZ_stat.py'
Nov 24 09:43:32 compute-1 sudo[198926]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:43:32 compute-1 python3.9[198928]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 09:43:32 compute-1 sudo[198926]: pam_unix(sudo:session): session closed for user root
Nov 24 09:43:33 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:43:33 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:43:33 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:43:33.013 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:43:33 compute-1 ceph-mon[80009]: pgmap v489: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Nov 24 09:43:33 compute-1 sudo[199080]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bihnjwppbrgnzzjcmzmzmzmtbfjcqtis ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977413.2752895-4068-142517049100188/AnsiballZ_command.py'
Nov 24 09:43:33 compute-1 sudo[199080]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:43:33 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:43:33 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:43:33 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:43:33.560 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:43:33 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:43:33 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe40c001d70 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:43:33 compute-1 python3.9[199082]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 09:43:33 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:43:33 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3f8003b50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:43:33 compute-1 sudo[199080]: pam_unix(sudo:session): session closed for user root
Nov 24 09:43:34 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:43:34 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:43:34 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3e0003700 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:43:34 compute-1 sudo[199236]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ptzfsgyvjlnakxecdmqblqajfutqdlmu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977414.1572492-4093-159208786210366/AnsiballZ_file.py'
Nov 24 09:43:34 compute-1 sudo[199236]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:43:34 compute-1 python3.9[199238]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:43:34 compute-1 sudo[199236]: pam_unix(sudo:session): session closed for user root
Nov 24 09:43:35 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:43:35 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:43:35 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:43:35.016 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:43:35 compute-1 ceph-mon[80009]: pgmap v490: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:43:35 compute-1 sudo[199388]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ikbwyfyksgldomfzsnawjndnsmptoptj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977414.9819233-4116-252288682421007/AnsiballZ_stat.py'
Nov 24 09:43:35 compute-1 sudo[199388]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:43:35 compute-1 python3.9[199390]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 09:43:35 compute-1 sudo[199388]: pam_unix(sudo:session): session closed for user root
Nov 24 09:43:35 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:43:35 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:43:35 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:43:35.562 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:43:35 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:43:35 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3e8003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:43:35 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:43:35 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3e8003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:43:35 compute-1 sudo[199511]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zypvvuxsxgbklsfsavhnevpdqlezaeox ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977414.9819233-4116-252288682421007/AnsiballZ_copy.py'
Nov 24 09:43:35 compute-1 sudo[199511]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:43:35 compute-1 python3.9[199513]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1763977414.9819233-4116-252288682421007/.source.target follow=False _original_basename=edpm_libvirt.target checksum=13035a1aa0f414c677b14be9a5a363b6623d393c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:43:36 compute-1 sudo[199511]: pam_unix(sudo:session): session closed for user root
Nov 24 09:43:36 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:43:36 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3f8003b50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:43:36 compute-1 sudo[199664]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iegpeyyhdaoqmusdxluorcpnqfkggviz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977416.4636953-4161-130955348161502/AnsiballZ_stat.py'
Nov 24 09:43:36 compute-1 sudo[199664]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:43:36 compute-1 python3.9[199666]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt_guests.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 09:43:36 compute-1 sudo[199664]: pam_unix(sudo:session): session closed for user root
Nov 24 09:43:37 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:43:37 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:43:37 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:43:37.017 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:43:37 compute-1 ceph-mon[80009]: pgmap v491: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:43:37 compute-1 sudo[199787]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xneyaroyuoanrfjuduncgzekhsichjgt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977416.4636953-4161-130955348161502/AnsiballZ_copy.py'
Nov 24 09:43:37 compute-1 sudo[199787]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:43:37 compute-1 python3.9[199789]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt_guests.service mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1763977416.4636953-4161-130955348161502/.source.service follow=False _original_basename=edpm_libvirt_guests.service checksum=db83430a42fc2ccfd6ed8b56ebf04f3dff9cd0cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:43:37 compute-1 sudo[199787]: pam_unix(sudo:session): session closed for user root
Nov 24 09:43:37 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:43:37 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:43:37 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:43:37.565 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:43:37 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:43:37 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3f8003b50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:43:37 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:43:37 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3e0003700 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:43:38 compute-1 sudo[199940]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oriplasailuxrmgguiscetepvbotpzsz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977418.037891-4206-53680428047407/AnsiballZ_stat.py'
Nov 24 09:43:38 compute-1 sudo[199940]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:43:38 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:43:38 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe40c009990 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:43:38 compute-1 python3.9[199942]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virt-guest-shutdown.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 09:43:38 compute-1 sudo[199940]: pam_unix(sudo:session): session closed for user root
Nov 24 09:43:38 compute-1 sudo[200063]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rtrstylunwmceqffbyahwekmnvynagzk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977418.037891-4206-53680428047407/AnsiballZ_copy.py'
Nov 24 09:43:38 compute-1 sudo[200063]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:43:39 compute-1 python3.9[200065]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virt-guest-shutdown.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1763977418.037891-4206-53680428047407/.source.target follow=False _original_basename=virt-guest-shutdown.target checksum=49ca149619c596cbba877418629d2cf8f7b0f5cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:43:39 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:43:39 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:43:39 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:43:39.019 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:43:39 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:43:39 compute-1 sudo[200063]: pam_unix(sudo:session): session closed for user root
Nov 24 09:43:39 compute-1 ceph-mon[80009]: pgmap v492: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Nov 24 09:43:39 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-haproxy-nfs-cephfs-compute-1-rsdpvy[85975]: [WARNING] 327/094339 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Nov 24 09:43:39 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:43:39 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:43:39 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:43:39.568 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:43:39 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:43:39 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3e8003c30 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:43:39 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:43:39 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3f8003b50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:43:39 compute-1 sudo[200215]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jrgcwhxienhafifmkjrexynoxypudoln ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977419.5131366-4251-136762586350112/AnsiballZ_systemd.py'
Nov 24 09:43:39 compute-1 sudo[200215]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:43:40 compute-1 python3.9[200217]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt.target state=restarted daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 24 09:43:40 compute-1 systemd[1]: Reloading.
Nov 24 09:43:40 compute-1 systemd-rc-local-generator[200245]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 09:43:40 compute-1 systemd-sysv-generator[200249]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 09:43:40 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:43:40 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3e0003700 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:43:40 compute-1 systemd[1]: Reached target edpm_libvirt.target.
Nov 24 09:43:40 compute-1 sudo[200215]: pam_unix(sudo:session): session closed for user root
Nov 24 09:43:41 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:43:41 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:43:41 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:43:41.021 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:43:41 compute-1 sudo[200408]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qyhzhwhaqmligtmvglwezxoyuezukfej ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977420.890955-4275-121136416328735/AnsiballZ_systemd.py'
Nov 24 09:43:41 compute-1 sudo[200408]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:43:41 compute-1 ceph-mon[80009]: pgmap v493: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:43:41 compute-1 python3.9[200410]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt_guests daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Nov 24 09:43:41 compute-1 systemd[1]: Reloading.
Nov 24 09:43:41 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:43:41 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:43:41 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:43:41.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:43:41 compute-1 systemd-rc-local-generator[200436]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 09:43:41 compute-1 systemd-sysv-generator[200440]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 09:43:41 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:43:41 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe40c00a2b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:43:41 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:43:41 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe40c00a2b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:43:41 compute-1 systemd[1]: Reloading.
Nov 24 09:43:41 compute-1 systemd-rc-local-generator[200474]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 09:43:41 compute-1 systemd-sysv-generator[200478]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 09:43:42 compute-1 sudo[200408]: pam_unix(sudo:session): session closed for user root
Nov 24 09:43:42 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:43:42 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3f8003b50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:43:42 compute-1 sshd-session[142523]: Connection closed by 192.168.122.30 port 59306
Nov 24 09:43:42 compute-1 sshd-session[142520]: pam_unix(sshd:session): session closed for user zuul
Nov 24 09:43:42 compute-1 systemd[1]: session-52.scope: Deactivated successfully.
Nov 24 09:43:42 compute-1 systemd[1]: session-52.scope: Consumed 3min 19.360s CPU time.
Nov 24 09:43:42 compute-1 systemd-logind[823]: Session 52 logged out. Waiting for processes to exit.
Nov 24 09:43:42 compute-1 systemd-logind[823]: Removed session 52.
Nov 24 09:43:42 compute-1 sudo[200508]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 09:43:42 compute-1 sudo[200508]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:43:42 compute-1 sudo[200508]: pam_unix(sudo:session): session closed for user root
Nov 24 09:43:43 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:43:43 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:43:43 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:43:43.024 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:43:43 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:43:43 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:43:43 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:43:43.574 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:43:43 compute-1 ceph-mon[80009]: pgmap v494: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Nov 24 09:43:43 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:43:43 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3e0003700 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:43:43 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:43:43 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe40c00a2b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:43:44 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:43:44 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:43:44 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3e8003c90 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:43:44 compute-1 ceph-mon[80009]: pgmap v495: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Nov 24 09:43:45 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:43:45 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:43:45 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:43:45.026 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:43:45 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 09:43:45 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:43:45 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:43:45 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:43:45 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:43:45.579 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:43:45 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:43:45 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:43:45 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3f8003b50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:43:45 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:43:45 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3e0003700 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:43:46 compute-1 podman[200535]: 2025-11-24 09:43:46.397284237 +0000 UTC m=+0.125445898 container health_status c4a7fece2a8abbd773f184380ebf0443df5298e1c3e7a580495db5c46749acd2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 09:43:46 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:43:46 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe40c00a2b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:43:46 compute-1 ceph-mon[80009]: pgmap v496: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Nov 24 09:43:47 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:43:47 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:43:47 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:43:47.030 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:43:47 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:43:47 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:43:47 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:43:47.582 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:43:47 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:43:47 : epoch 69242889 : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 09:43:47 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:43:47 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3e8003cb0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:43:47 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:43:47 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3f8003b50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:43:48 compute-1 sshd-session[200562]: Accepted publickey for zuul from 192.168.122.30 port 44340 ssh2: ECDSA SHA256:MeSde0OmmlmFVnLWx/OKNxgeUUFhxUB3MA0eUyH5QEE
Nov 24 09:43:48 compute-1 systemd-logind[823]: New session 53 of user zuul.
Nov 24 09:43:48 compute-1 systemd[1]: Started Session 53 of User zuul.
Nov 24 09:43:48 compute-1 sshd-session[200562]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 24 09:43:48 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:43:48 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3e0003700 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:43:48 compute-1 ceph-mon[80009]: pgmap v497: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 85 B/s wr, 0 op/s
Nov 24 09:43:49 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:43:49 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:43:49 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:43:49.031 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:43:49 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:43:49 compute-1 python3.9[200715]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 24 09:43:49 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:43:49 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:43:49 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:43:49.586 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:43:49 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:43:49 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3e0003700 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:43:49 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:43:49 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3e8003cd0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:43:50 compute-1 python3.9[200870]: ansible-ansible.builtin.service_facts Invoked
Nov 24 09:43:50 compute-1 network[200887]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 24 09:43:50 compute-1 network[200888]: 'network-scripts' will be removed from distribution in near future.
Nov 24 09:43:50 compute-1 network[200889]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 24 09:43:50 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:43:50 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3f8003b50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:43:50 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:43:50 : epoch 69242889 : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 09:43:50 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:43:50 : epoch 69242889 : compute-1 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 09:43:50 compute-1 ceph-mon[80009]: pgmap v498: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Nov 24 09:43:51 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:43:51 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:43:51 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:43:51.034 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:43:51 compute-1 sudo[200895]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 09:43:51 compute-1 sudo[200895]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:43:51 compute-1 sudo[200895]: pam_unix(sudo:session): session closed for user root
Nov 24 09:43:51 compute-1 sudo[200921]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Nov 24 09:43:51 compute-1 sudo[200921]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:43:51 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:43:51 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:43:51 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:43:51.588 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:43:51 compute-1 sudo[200921]: pam_unix(sudo:session): session closed for user root
Nov 24 09:43:51 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:43:51 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe40c00a2b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:43:51 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:43:51 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3e0003700 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:43:51 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 24 09:43:51 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 09:43:51 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Nov 24 09:43:51 compute-1 ceph-mon[80009]: log_channel(audit) log [INF] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 09:43:51 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Nov 24 09:43:52 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Nov 24 09:43:52 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Nov 24 09:43:52 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 09:43:52 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Nov 24 09:43:52 compute-1 ceph-mon[80009]: log_channel(audit) log [INF] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 09:43:52 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 24 09:43:52 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 09:43:52 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:43:52 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3d4000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:43:52 compute-1 ceph-mon[80009]: pgmap v499: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 938 B/s wr, 3 op/s
Nov 24 09:43:52 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 09:43:52 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 09:43:52 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:43:52 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:43:52 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 09:43:52 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 09:43:52 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 09:43:53 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:43:53 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:43:53 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:43:53.038 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:43:53 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:43:53 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:43:53 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:43:53.592 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:43:53 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:43:53 : epoch 69242889 : compute-1 : ganesha.nfsd-2[reaper] rados_cluster_end_grace :CLIENT ID :EVENT :Failed to remove rec-0000000000000015:nfs.cephfs.0: -2
Nov 24 09:43:53 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:43:53 : epoch 69242889 : compute-1 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Nov 24 09:43:53 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:43:53 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3f8003b50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:43:53 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:43:53 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe40c00a2b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:43:54 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:43:54 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:43:54 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3e0003700 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:43:54 compute-1 sudo[201244]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yxrowktlphlqyjpgecryqdgpmoynfvdp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977434.4179232-102-226858670526989/AnsiballZ_setup.py'
Nov 24 09:43:54 compute-1 sudo[201244]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:43:54 compute-1 ceph-mon[80009]: pgmap v500: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Nov 24 09:43:55 compute-1 python3.9[201246]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 24 09:43:55 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:43:55 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:43:55 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:43:55.044 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:43:55 compute-1 sudo[201244]: pam_unix(sudo:session): session closed for user root
Nov 24 09:43:55 compute-1 podman[201255]: 2025-11-24 09:43:55.342708699 +0000 UTC m=+0.063678813 container health_status 6794d9d2f16f865643977633ea2bfec0506af6c4f15262c278b71a4c0754ea0b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Nov 24 09:43:55 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:43:55 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 09:43:55 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:43:55.593 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 09:43:55 compute-1 sudo[201347]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tweqlbsmkaermdkhcnquzfuixygqufpz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977434.4179232-102-226858670526989/AnsiballZ_dnf.py'
Nov 24 09:43:55 compute-1 sudo[201347]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:43:55 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:43:55 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3d40016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:43:55 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:43:55 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3f8003b50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:43:56 compute-1 python3.9[201349]: ansible-ansible.legacy.dnf Invoked with name=['iscsi-initiator-utils'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 24 09:43:56 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:43:56 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe40c00a2b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:43:56 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 24 09:43:56 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 24 09:43:56 compute-1 ceph-mon[80009]: pgmap v501: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Nov 24 09:43:56 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:43:56 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:43:56 compute-1 sudo[201352]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 24 09:43:56 compute-1 sudo[201352]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:43:56 compute-1 sudo[201352]: pam_unix(sudo:session): session closed for user root
Nov 24 09:43:57 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:43:57 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:43:57 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:43:57.047 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:43:57 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:43:57 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:43:57 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:43:57.596 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:43:57 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:43:57 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3e0003700 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:43:57 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:43:57 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3d40016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:43:58 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:43:58 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3f8003b50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:43:58 compute-1 ceph-mon[80009]: pgmap v502: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 1023 B/s wr, 3 op/s
Nov 24 09:43:59 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:43:59 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:43:59 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:43:59 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:43:59.050 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:43:59 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-haproxy-nfs-cephfs-compute-1-rsdpvy[85975]: [WARNING] 327/094359 (4) : Server backend/nfs.cephfs.2 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Nov 24 09:43:59 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:43:59 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:43:59 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:43:59.598 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:43:59 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:43:59 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe40c00a2b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:43:59 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:43:59 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3e0003700 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:44:00 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 09:44:00 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:44:00 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:44:00 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3d40016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:44:00 compute-1 ceph-mon[80009]: pgmap v503: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 938 B/s wr, 2 op/s
Nov 24 09:44:00 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:44:01 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:44:01 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:44:01 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:44:01.052 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:44:01 compute-1 sudo[201347]: pam_unix(sudo:session): session closed for user root
Nov 24 09:44:01 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:44:01 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:44:01 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:44:01.602 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:44:01 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:44:01 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3f8003b50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:44:01 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:44:01 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe40c00a2b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:44:02 compute-1 sudo[201529]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ndwguxgwohismavfbpncbytfenwocvxz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977441.7311022-138-106744560649046/AnsiballZ_stat.py'
Nov 24 09:44:02 compute-1 sudo[201529]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:44:02 compute-1 python3.9[201531]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated/iscsid/etc/iscsi follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 09:44:02 compute-1 sudo[201529]: pam_unix(sudo:session): session closed for user root
Nov 24 09:44:02 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:44:02 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3e0003700 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:44:02 compute-1 sudo[201608]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 09:44:02 compute-1 sudo[201608]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:44:02 compute-1 sudo[201608]: pam_unix(sudo:session): session closed for user root
Nov 24 09:44:02 compute-1 ceph-mon[80009]: pgmap v504: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 938 B/s wr, 3 op/s
Nov 24 09:44:03 compute-1 sudo[201706]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fpwoojxthiiikjaqnotzipgrixjzcxnm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977442.6713347-168-142411696606493/AnsiballZ_command.py'
Nov 24 09:44:03 compute-1 sudo[201706]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:44:03 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:44:03 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:44:03 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:44:03.054 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:44:03 compute-1 python3.9[201708]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/restorecon -nvr /etc/iscsi /var/lib/iscsi _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 09:44:03 compute-1 sudo[201706]: pam_unix(sudo:session): session closed for user root
Nov 24 09:44:03 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:44:03 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:44:03 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:44:03.604 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:44:03 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:44:03 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3d4002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:44:03 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:44:03 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3f8003b50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:44:03 compute-1 sudo[201860]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zdnnjhwwlizrmunrdkzqavdkugcebojs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977443.72947-198-13828358283685/AnsiballZ_stat.py'
Nov 24 09:44:03 compute-1 sudo[201860]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:44:04 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:44:04 compute-1 python3.9[201862]: ansible-ansible.builtin.stat Invoked with path=/etc/iscsi/.initiator_reset follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 09:44:04 compute-1 sudo[201860]: pam_unix(sudo:session): session closed for user root
Nov 24 09:44:04 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:44:04 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe40c00a2b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:44:04 compute-1 sudo[202012]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rqnierpmtxlfxfsztiajwusbflmrzfye ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977444.494425-222-30920684522095/AnsiballZ_command.py'
Nov 24 09:44:04 compute-1 sudo[202012]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:44:04 compute-1 python3.9[202014]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/iscsi-iname _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 09:44:04 compute-1 ceph-mon[80009]: pgmap v505: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 85 B/s wr, 0 op/s
Nov 24 09:44:04 compute-1 sudo[202012]: pam_unix(sudo:session): session closed for user root
Nov 24 09:44:05 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:44:05 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:44:05 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:44:05.056 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:44:05 compute-1 sudo[202165]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jsobipouqakydmegfxooxjggqvtqozal ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977445.3305414-246-187096666658306/AnsiballZ_stat.py'
Nov 24 09:44:05 compute-1 sudo[202165]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:44:05 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:44:05 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:44:05 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:44:05.607 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:44:05 compute-1 python3.9[202167]: ansible-ansible.legacy.stat Invoked with path=/etc/iscsi/initiatorname.iscsi follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 09:44:05 compute-1 sudo[202165]: pam_unix(sudo:session): session closed for user root
Nov 24 09:44:05 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:44:05 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3e0003700 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:44:05 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:44:05 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3d4002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:44:06 compute-1 sudo[202289]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tlkapwvqjaygmtygifzeopgnttcjadqs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977445.3305414-246-187096666658306/AnsiballZ_copy.py'
Nov 24 09:44:06 compute-1 sudo[202289]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:44:06 compute-1 python3.9[202291]: ansible-ansible.legacy.copy Invoked with dest=/etc/iscsi/initiatorname.iscsi mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1763977445.3305414-246-187096666658306/.source.iscsi _original_basename=.t0qswyzp follow=False checksum=cd5378efa417da90db15e5c3bc37bc9ae6376a29 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:44:06 compute-1 sudo[202289]: pam_unix(sudo:session): session closed for user root
Nov 24 09:44:06 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:44:06 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3f8003b50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:44:06 compute-1 ceph-mon[80009]: pgmap v506: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 85 B/s wr, 0 op/s
Nov 24 09:44:07 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:44:07 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:44:07 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:44:07.058 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:44:07 compute-1 sudo[202441]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-usuefourjpfbuiusmeqnfxvaoftbgwoc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977446.83219-291-184618884519465/AnsiballZ_file.py'
Nov 24 09:44:07 compute-1 sudo[202441]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:44:07 compute-1 python3.9[202443]: ansible-ansible.builtin.file Invoked with mode=0600 path=/etc/iscsi/.initiator_reset state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:44:07 compute-1 sudo[202441]: pam_unix(sudo:session): session closed for user root
Nov 24 09:44:07 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:44:07 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:44:07 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:44:07.610 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:44:07 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:44:07 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3f8003b50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:44:07 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:44:07 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3e0003700 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:44:08 compute-1 sudo[202594]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zlittidqwahtlydlzjsaprujoqwyyhms ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977447.6253543-315-110158874366139/AnsiballZ_lineinfile.py'
Nov 24 09:44:08 compute-1 sudo[202594]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:44:08 compute-1 python3.9[202596]: ansible-ansible.builtin.lineinfile Invoked with insertafter=^#node.session.auth.chap.algs line=node.session.auth.chap_algs = SHA3-256,SHA256,SHA1,MD5 path=/etc/iscsi/iscsid.conf regexp=^node.session.auth.chap_algs state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:44:08 compute-1 sudo[202594]: pam_unix(sudo:session): session closed for user root
Nov 24 09:44:08 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:44:08 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3d4002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:44:08 compute-1 ceph-mon[80009]: pgmap v507: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Nov 24 09:44:09 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:44:09 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:44:09 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 24 09:44:09 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:44:09.060 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 24 09:44:09 compute-1 sudo[202746]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bvdatgswetycgsrbdijpmlfrnixwihii ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977448.575518-342-176587491167093/AnsiballZ_systemd_service.py'
Nov 24 09:44:09 compute-1 sudo[202746]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:44:09 compute-1 python3.9[202748]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=iscsid.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 24 09:44:09 compute-1 systemd[1]: Listening on Open-iSCSI iscsid Socket.
Nov 24 09:44:09 compute-1 sudo[202746]: pam_unix(sudo:session): session closed for user root
Nov 24 09:44:09 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:44:09 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:44:09 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:44:09.613 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:44:09 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:44:09 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3f8003b50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:44:09 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:44:09 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3f8003b50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:44:10 compute-1 sudo[202903]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ggpjbgatihqziqkzphgzdnfwiredjqxh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977449.938106-366-203626822269047/AnsiballZ_systemd_service.py'
Nov 24 09:44:10 compute-1 sudo[202903]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:44:10 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:44:10 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3e0003700 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:44:10 compute-1 python3.9[202905]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=iscsid state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 24 09:44:10 compute-1 systemd[1]: Reloading.
Nov 24 09:44:10 compute-1 systemd-sysv-generator[202937]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 09:44:10 compute-1 systemd-rc-local-generator[202934]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 09:44:10 compute-1 systemd[1]: One time configuration for iscsi.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/iscsi/initiatorname.iscsi).
Nov 24 09:44:10 compute-1 systemd[1]: Starting Open-iSCSI...
Nov 24 09:44:10 compute-1 kernel: Loading iSCSI transport class v2.0-870.
Nov 24 09:44:10 compute-1 systemd[1]: Started Open-iSCSI.
Nov 24 09:44:10 compute-1 systemd[1]: Starting Logout off all iSCSI sessions on shutdown...
Nov 24 09:44:10 compute-1 systemd[1]: Finished Logout off all iSCSI sessions on shutdown.
Nov 24 09:44:10 compute-1 sudo[202903]: pam_unix(sudo:session): session closed for user root
Nov 24 09:44:10 compute-1 ceph-mon[80009]: pgmap v508: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Nov 24 09:44:11 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:44:11 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:44:11 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:44:11.062 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:44:11 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:44:11 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:44:11 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:44:11.616 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:44:11 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:44:11 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3d4003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:44:11 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:44:11 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3f8003b50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:44:12 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:44:12 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3f8003b50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:44:12 compute-1 ceph-mon[80009]: pgmap v509: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Nov 24 09:44:13 compute-1 sudo[203106]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-chbvcfanbnwnopaqwcjiuwgqjqjqfhuu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977452.7901714-399-1745225850604/AnsiballZ_service_facts.py'
Nov 24 09:44:13 compute-1 sudo[203106]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:44:13 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:44:13 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:44:13 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:44:13.065 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:44:13 compute-1 python3.9[203108]: ansible-ansible.builtin.service_facts Invoked
Nov 24 09:44:13 compute-1 network[203125]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 24 09:44:13 compute-1 network[203126]: 'network-scripts' will be removed from distribution in near future.
Nov 24 09:44:13 compute-1 network[203127]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 24 09:44:13 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:44:13 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:44:13 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:44:13.620 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:44:13 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:44:13 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3ec001fa0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:44:13 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:44:13 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3d4003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:44:14 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:44:14 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:44:14 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe40c00a2b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:44:15 compute-1 ceph-mon[80009]: pgmap v510: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:44:15 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:44:15 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:44:15 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:44:15.067 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:44:15 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 09:44:15 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:44:15 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:44:15 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:44:15 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:44:15.623 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:44:15 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:44:15 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3f8003b50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:44:15 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:44:15 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3ec001fa0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:44:16 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:44:16 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:44:16 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3d4003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:44:16 compute-1 podman[203205]: 2025-11-24 09:44:16.518830384 +0000 UTC m=+0.077758321 container health_status c4a7fece2a8abbd773f184380ebf0443df5298e1c3e7a580495db5c46749acd2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Nov 24 09:44:17 compute-1 ceph-mon[80009]: pgmap v511: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:44:17 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:44:17 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:44:17 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:44:17.069 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:44:17 compute-1 sudo[203106]: pam_unix(sudo:session): session closed for user root
Nov 24 09:44:17 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:44:17 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:44:17 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:44:17.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:44:17 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:44:17 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe40c00a2b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:44:17 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:44:17 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe40c00a2b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:44:17 compute-1 sudo[203424]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-comsyydjpbxpvtmedxuryksdtvispvkl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977457.7100139-429-197823397716084/AnsiballZ_file.py'
Nov 24 09:44:17 compute-1 sudo[203424]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:44:18 compute-1 python3.9[203426]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Nov 24 09:44:18 compute-1 sudo[203424]: pam_unix(sudo:session): session closed for user root
Nov 24 09:44:18 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:44:18 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3ec001fa0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:44:18 compute-1 sudo[203576]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gdyutmmxtndzvgqsthevhydcsbhaqnkx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977458.5164454-453-27754787770491/AnsiballZ_modprobe.py'
Nov 24 09:44:18 compute-1 sudo[203576]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:44:19 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:44:19 compute-1 ceph-mon[80009]: pgmap v512: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Nov 24 09:44:19 compute-1 python3.9[203578]: ansible-community.general.modprobe Invoked with name=dm-multipath state=present params= persistent=disabled
Nov 24 09:44:19 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:44:19 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:44:19 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:44:19.071 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:44:19 compute-1 sudo[203576]: pam_unix(sudo:session): session closed for user root
Nov 24 09:44:19 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:44:19 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:44:19 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:44:19.628 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:44:19 compute-1 sudo[203732]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cpnziecigyqnagmhgwwdcdmnawprggbt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977459.400968-477-131859846386984/AnsiballZ_stat.py'
Nov 24 09:44:19 compute-1 sudo[203732]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:44:19 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:44:19 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3d4003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:44:19 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:44:19 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe40c00a2b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:44:19 compute-1 python3.9[203734]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/dm-multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 09:44:19 compute-1 sudo[203732]: pam_unix(sudo:session): session closed for user root
Nov 24 09:44:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:44:20.044 142336 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 09:44:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:44:20.044 142336 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 09:44:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:44:20.044 142336 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 09:44:20 compute-1 sudo[203856]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bxvvufnkwnygnhcnixtznkwpvyonyxog ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977459.400968-477-131859846386984/AnsiballZ_copy.py'
Nov 24 09:44:20 compute-1 sudo[203856]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:44:20 compute-1 python3.9[203858]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/dm-multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1763977459.400968-477-131859846386984/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=065061c60917e4f67cecc70d12ce55e42f9d0b3f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:44:20 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:44:20 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe40c00a2b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:44:20 compute-1 sudo[203856]: pam_unix(sudo:session): session closed for user root
Nov 24 09:44:21 compute-1 ceph-mon[80009]: pgmap v513: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:44:21 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:44:21 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:44:21 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:44:21.073 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:44:21 compute-1 sudo[204008]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tinkekkpidgljsprydkepecwikcullij ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977460.9392552-525-140579286144543/AnsiballZ_lineinfile.py'
Nov 24 09:44:21 compute-1 sudo[204008]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:44:21 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-haproxy-nfs-cephfs-compute-1-rsdpvy[85975]: [WARNING] 327/094421 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Nov 24 09:44:21 compute-1 python3.9[204010]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=dm-multipath  mode=0644 state=present path=/etc/modules encoding=utf-8 backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:44:21 compute-1 sudo[204008]: pam_unix(sudo:session): session closed for user root
Nov 24 09:44:21 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:44:21 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:44:21 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:44:21.631 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:44:21 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:44:21 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3ec0030a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:44:21 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:44:21 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3d4003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:44:22 compute-1 sudo[204161]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mgvxtewwnmunpffkyodeitnvbzcsvgir ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977461.7257707-549-42437273446232/AnsiballZ_systemd.py'
Nov 24 09:44:22 compute-1 sudo[204161]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:44:22 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:44:22 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3f8003b50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:44:22 compute-1 python3.9[204163]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 24 09:44:22 compute-1 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Nov 24 09:44:22 compute-1 systemd[1]: Stopped Load Kernel Modules.
Nov 24 09:44:22 compute-1 systemd[1]: Stopping Load Kernel Modules...
Nov 24 09:44:22 compute-1 systemd[1]: Starting Load Kernel Modules...
Nov 24 09:44:22 compute-1 systemd[1]: Finished Load Kernel Modules.
Nov 24 09:44:22 compute-1 sudo[204161]: pam_unix(sudo:session): session closed for user root
Nov 24 09:44:22 compute-1 sudo[204192]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 09:44:22 compute-1 sudo[204192]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:44:22 compute-1 sudo[204192]: pam_unix(sudo:session): session closed for user root
Nov 24 09:44:23 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:44:23 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:44:23 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:44:23.075 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:44:23 compute-1 ceph-mon[80009]: pgmap v514: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:44:23 compute-1 sudo[204342]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jnhxamkbsixumexhcnxtrabfagrlmkga ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977462.9925716-574-187938998203607/AnsiballZ_file.py'
Nov 24 09:44:23 compute-1 sudo[204342]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:44:23 compute-1 python3.9[204344]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 24 09:44:23 compute-1 sudo[204342]: pam_unix(sudo:session): session closed for user root
Nov 24 09:44:23 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:44:23 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:44:23 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:44:23.634 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:44:23 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:44:23 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3f8003b50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:44:23 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:44:23 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3ec0030a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:44:24 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:44:24 compute-1 sudo[204495]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oephqgewkynemgxkupnzfjyicuscgglz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977463.8618424-600-234145323539379/AnsiballZ_stat.py'
Nov 24 09:44:24 compute-1 sudo[204495]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:44:24 compute-1 python3.9[204497]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 09:44:24 compute-1 sudo[204495]: pam_unix(sudo:session): session closed for user root
Nov 24 09:44:24 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:44:24 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3d4003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:44:25 compute-1 sudo[204647]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nvfolorakzhfsjdktbqbumjliouxbqfj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977464.7333567-627-71702211981972/AnsiballZ_stat.py'
Nov 24 09:44:25 compute-1 sudo[204647]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:44:25 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:44:25 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:44:25 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:44:25.078 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:44:25 compute-1 ceph-mon[80009]: pgmap v515: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Nov 24 09:44:25 compute-1 python3.9[204649]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 09:44:25 compute-1 sudo[204647]: pam_unix(sudo:session): session closed for user root
Nov 24 09:44:25 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:44:25 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:44:25 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:44:25.637 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:44:25 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:44:25 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe40c00a2b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:44:25 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:44:25 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3f8003b50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:44:25 compute-1 sudo[204809]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ckfomaweprgwhgkztrpwoduwgwbxbhsn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977465.5503452-651-81210425115099/AnsiballZ_stat.py'
Nov 24 09:44:25 compute-1 sudo[204809]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:44:25 compute-1 podman[204773]: 2025-11-24 09:44:25.915266132 +0000 UTC m=+0.097942659 container health_status 6794d9d2f16f865643977633ea2bfec0506af6c4f15262c278b71a4c0754ea0b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent)
Nov 24 09:44:26 compute-1 python3.9[204817]: ansible-ansible.legacy.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 09:44:26 compute-1 sudo[204809]: pam_unix(sudo:session): session closed for user root
Nov 24 09:44:26 compute-1 sudo[204941]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lihrpcatywbrctqdrsinrejtokxfeolg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977465.5503452-651-81210425115099/AnsiballZ_copy.py'
Nov 24 09:44:26 compute-1 sudo[204941]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:44:26 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:44:26 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3ec0030a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:44:26 compute-1 python3.9[204943]: ansible-ansible.legacy.copy Invoked with dest=/etc/multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1763977465.5503452-651-81210425115099/.source.conf _original_basename=multipath.conf follow=False checksum=bf02ab264d3d648048a81f3bacec8bc58db93162 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:44:26 compute-1 sudo[204941]: pam_unix(sudo:session): session closed for user root
Nov 24 09:44:27 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:44:27 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.003000072s ======
Nov 24 09:44:27 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:44:27.080 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.003000072s
Nov 24 09:44:27 compute-1 ceph-mon[80009]: pgmap v516: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Nov 24 09:44:27 compute-1 sudo[205093]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-enejirlmqwggfejfvizdjpmhtkivztim ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977466.9471145-697-151496269322609/AnsiballZ_command.py'
Nov 24 09:44:27 compute-1 sudo[205093]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:44:27 compute-1 python3.9[205095]: ansible-ansible.legacy.command Invoked with _raw_params=grep -q '^blacklist\s*{' /etc/multipath.conf _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 09:44:27 compute-1 sudo[205093]: pam_unix(sudo:session): session closed for user root
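
The command task above (pid 205095) probes /etc/multipath.conf for an existing blacklist section, and the lineinfile/replace tasks that follow at 09:44:28-09:44:29 act as its fix-up branch. A minimal sketch of that probe-then-edit pattern in playbook form follows; the task names and the registered variable multipath_blacklist are invented for illustration and do not come from the logged playbook.

    # Hypothetical reconstruction of the probe-then-edit pattern visible in
    # the surrounding log lines; names are placeholders, parameters are not.
    - name: Probe for an existing blacklist section
      ansible.builtin.shell: grep -q '^blacklist\s*{' /etc/multipath.conf
      register: multipath_blacklist
      failed_when: false
      changed_when: false

    - name: Open a blacklist section when the probe finds none
      ansible.builtin.lineinfile:
        path: /etc/multipath.conf
        line: 'blacklist {'
      when: multipath_blacklist.rc != 0
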
Nov 24 09:44:27 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:44:27 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:44:27 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:44:27.641 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:44:27 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:44:27 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3ec0030a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:44:27 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:44:27 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe40c00a2b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:44:28 compute-1 sudo[205247]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xbcsowltgiiylujtilawbdgxpssuxpdg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977467.7951643-720-25163161040920/AnsiballZ_lineinfile.py'
Nov 24 09:44:28 compute-1 sudo[205247]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:44:28 compute-1 python3.9[205249]: ansible-ansible.builtin.lineinfile Invoked with line=blacklist { path=/etc/multipath.conf state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:44:28 compute-1 sudo[205247]: pam_unix(sudo:session): session closed for user root
Nov 24 09:44:28 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:44:28 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3f8003b50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:44:28 compute-1 sudo[205399]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wzeunaqdzadjcklkfatasebdlgwyxtqg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977468.491586-744-115690770670995/AnsiballZ_replace.py'
Nov 24 09:44:28 compute-1 sudo[205399]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:44:29 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:44:29 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:44:29 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:44:29 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:44:29.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:44:29 compute-1 python3.9[205401]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^(blacklist {) replace=\1\n} backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:44:29 compute-1 sudo[205399]: pam_unix(sudo:session): session closed for user root
Nov 24 09:44:29 compute-1 ceph-mon[80009]: pgmap v517: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Nov 24 09:44:29 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:44:29 : epoch 69242889 : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 09:44:29 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:44:29 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:44:29 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:44:29.643 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:44:29 compute-1 sudo[205551]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wjscpdavyylsnpeyhimtgiwtgfarhfgc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977469.3844483-769-106013471911656/AnsiballZ_replace.py'
Nov 24 09:44:29 compute-1 sudo[205551]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:44:29 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:44:29 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3ec0030a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:44:29 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:44:29 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3ec0030a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:44:29 compute-1 python3.9[205553]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^blacklist\s*{\n[\s]+devnode \"\.\*\" replace=blacklist { backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:44:29 compute-1 sudo[205551]: pam_unix(sudo:session): session closed for user root
Nov 24 09:44:30 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 09:44:30 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:44:30 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:44:30 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe40c00a2b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:44:30 compute-1 sudo[205704]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uoxdmzggibjkoxjkurdvlywavrggltki ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977470.2417634-795-50877333343209/AnsiballZ_lineinfile.py'
Nov 24 09:44:30 compute-1 sudo[205704]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:44:30 compute-1 python3.9[205706]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        find_multipaths yes path=/etc/multipath.conf regexp=^\s+find_multipaths state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:44:30 compute-1 sudo[205704]: pam_unix(sudo:session): session closed for user root
Nov 24 09:44:31 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:44:31 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:44:31 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:44:31.085 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:44:31 compute-1 sudo[205856]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sfuirumcwkudqyvncjhrwilnafhykjgg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977470.8936727-795-167380463418195/AnsiballZ_lineinfile.py'
Nov 24 09:44:31 compute-1 sudo[205856]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:44:31 compute-1 ceph-mon[80009]: pgmap v518: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Nov 24 09:44:31 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:44:31 compute-1 python3.9[205858]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        recheck_wwid yes path=/etc/multipath.conf regexp=^\s+recheck_wwid state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:44:31 compute-1 sudo[205856]: pam_unix(sudo:session): session closed for user root
Nov 24 09:44:31 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:44:31 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:44:31 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:44:31.644 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:44:31 compute-1 sudo[206008]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kyrpfvfcgiwruheaxqzspbpontlgqmby ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977471.4636214-795-202703943423866/AnsiballZ_lineinfile.py'
Nov 24 09:44:31 compute-1 sudo[206008]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:44:31 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:44:31 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3f8003b50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:44:31 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:44:31 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3ec0030a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:44:31 compute-1 python3.9[206010]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        skip_kpartx yes path=/etc/multipath.conf regexp=^\s+skip_kpartx state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:44:31 compute-1 sudo[206008]: pam_unix(sudo:session): session closed for user root
Nov 24 09:44:32 compute-1 sudo[206161]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ecjxyxycjfdvmpywxeevsirxsakixhuv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977472.098198-795-70208351321030/AnsiballZ_lineinfile.py'
Nov 24 09:44:32 compute-1 sudo[206161]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:44:32 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:44:32 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3ec0030a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:44:32 compute-1 python3.9[206163]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        user_friendly_names no path=/etc/multipath.conf regexp=^\s+user_friendly_names state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:44:32 compute-1 sudo[206161]: pam_unix(sudo:session): session closed for user root
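
Taken together, the multipath.conf edits logged between 09:44:26 and 09:44:32 (copy, blacklist probe and fix-up, then four defaults settings) converge on a file shaped like the sketch below. Only the lines the tasks themselves assert are shown, with their logged eight-space indentation preserved; the copied template may carry additional stanzas, and the relative order of the defaults entries depends on what that template already contained.

    # Plausible end state of /etc/multipath.conf after the logged edits.
    defaults {
            find_multipaths yes
            recheck_wwid yes
            skip_kpartx yes
            user_friendly_names no
    }

    # Opened by the lineinfile task and closed by the first replace task;
    # the second replace strips a 'devnode ".*"' catch-all if one existed.
    blacklist {
    }
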
Nov 24 09:44:32 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:44:32 : epoch 69242889 : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 09:44:32 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:44:32 : epoch 69242889 : compute-1 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 09:44:33 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:44:33 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:44:33 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:44:33.087 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:44:33 compute-1 ceph-mon[80009]: pgmap v519: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Nov 24 09:44:33 compute-1 sudo[206313]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ukyhwbbecwysojucevhgdmkplzvtlfdw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977473.1357849-882-205538745503867/AnsiballZ_stat.py'
Nov 24 09:44:33 compute-1 sudo[206313]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:44:33 compute-1 python3.9[206315]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 09:44:33 compute-1 sudo[206313]: pam_unix(sudo:session): session closed for user root
Nov 24 09:44:33 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:44:33 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:44:33 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:44:33.647 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:44:33 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:44:33 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe40c00a2b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:44:33 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:44:33 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3f8003b50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:44:34 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:44:34 compute-1 sudo[206468]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tzqfqjqnmofhksxhucaefcpvkvbfmdya ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977473.928882-906-157002950932798/AnsiballZ_file.py'
Nov 24 09:44:34 compute-1 sudo[206468]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:44:34 compute-1 python3.9[206470]: ansible-ansible.builtin.file Invoked with mode=0644 path=/etc/multipath/.multipath_restart_required state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:44:34 compute-1 sudo[206468]: pam_unix(sudo:session): session closed for user root
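
The file task at 09:44:34 (pid 206470) touches a hidden flag file rather than restarting anything itself; presumably a later task or handler checks for it before bouncing multipathd. Reassembled from the Invoked line, with only the task name invented:

    - name: Flag that multipathd needs a restart
      ansible.builtin.file:
        path: /etc/multipath/.multipath_restart_required
        state: touch
        mode: '0644'
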
Nov 24 09:44:34 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:44:34 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3ec0030a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:44:35 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:44:35 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:44:35 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:44:35.089 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:44:35 compute-1 sudo[206620]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hentwmndndmhhomttxxijixwwgzxykyn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977474.899262-933-63666481142552/AnsiballZ_file.py'
Nov 24 09:44:35 compute-1 sudo[206620]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:44:35 compute-1 ceph-mon[80009]: pgmap v520: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Nov 24 09:44:35 compute-1 python3.9[206622]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 24 09:44:35 compute-1 sudo[206620]: pam_unix(sudo:session): session closed for user root
Nov 24 09:44:35 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:44:35 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:44:35 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:44:35.650 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:44:35 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:44:35 : epoch 69242889 : compute-1 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Nov 24 09:44:35 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:44:35 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3d4003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:44:35 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:44:35 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe40c00a2b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:44:35 compute-1 sudo[206773]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ydihpiflqbhcikseejspoooiebnfcbhm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977475.6409512-957-626904655095/AnsiballZ_stat.py'
Nov 24 09:44:35 compute-1 sudo[206773]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:44:36 compute-1 python3.9[206775]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 09:44:36 compute-1 sudo[206773]: pam_unix(sudo:session): session closed for user root
Nov 24 09:44:36 compute-1 sudo[206851]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uujdytrypcbvrrnaxmtubgfbfshidgtd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977475.6409512-957-626904655095/AnsiballZ_file.py'
Nov 24 09:44:36 compute-1 sudo[206851]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:44:36 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:44:36 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3f8003b50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:44:36 compute-1 python3.9[206853]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 09:44:36 compute-1 sudo[206851]: pam_unix(sudo:session): session closed for user root
Nov 24 09:44:37 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:44:37 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:44:37 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:44:37.091 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:44:37 compute-1 sudo[207003]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ddtewcohwojyklmwhmvhhcvtyddxjgig ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977476.8444576-957-123549762440931/AnsiballZ_stat.py'
Nov 24 09:44:37 compute-1 sudo[207003]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:44:37 compute-1 ceph-mon[80009]: pgmap v521: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Nov 24 09:44:37 compute-1 python3.9[207005]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 09:44:37 compute-1 sudo[207003]: pam_unix(sudo:session): session closed for user root
Nov 24 09:44:37 compute-1 sudo[207081]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fuyvbegqhsqyqetsrvbpxgaeigirjrtm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977476.8444576-957-123549762440931/AnsiballZ_file.py'
Nov 24 09:44:37 compute-1 sudo[207081]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:44:37 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:44:37 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:44:37 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:44:37.653 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:44:37 compute-1 python3.9[207083]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 09:44:37 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:44:37 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3ec0030a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:44:37 compute-1 sudo[207081]: pam_unix(sudo:session): session closed for user root
Nov 24 09:44:37 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:44:37 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3d4003c30 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:44:38 compute-1 sudo[207234]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-opwdgiiiyfeuqchblsynhljxfllnaxwi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977478.1821887-1026-183212609600875/AnsiballZ_file.py'
Nov 24 09:44:38 compute-1 sudo[207234]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:44:38 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:44:38 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe40c00a2b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:44:38 compute-1 python3.9[207236]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:44:38 compute-1 sudo[207234]: pam_unix(sudo:session): session closed for user root
Nov 24 09:44:39 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:44:39 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:44:39 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:44:39 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:44:39.093 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:44:39 compute-1 sudo[207386]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iefvaqzzxwugnborudoibbwhcuuebury ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977478.9434-1050-218601698524891/AnsiballZ_stat.py'
Nov 24 09:44:39 compute-1 sudo[207386]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:44:39 compute-1 ceph-mon[80009]: pgmap v522: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 1023 B/s wr, 3 op/s
Nov 24 09:44:39 compute-1 python3.9[207388]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 09:44:39 compute-1 sudo[207386]: pam_unix(sudo:session): session closed for user root
Nov 24 09:44:39 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:44:39 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:44:39 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:44:39.656 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:44:39 compute-1 sudo[207464]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rnlcvvgketzmfvhnllecbwbcljtxzmds ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977478.9434-1050-218601698524891/AnsiballZ_file.py'
Nov 24 09:44:39 compute-1 sudo[207464]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:44:39 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:44:39 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3f8003b50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:44:39 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:44:39 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3ec0030a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:44:39 compute-1 python3.9[207466]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:44:39 compute-1 sudo[207464]: pam_unix(sudo:session): session closed for user root
Nov 24 09:44:40 compute-1 sudo[207617]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qmjxbianccjmukuiwqfdcaxrtlibuxfx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977480.1999085-1086-240885631230511/AnsiballZ_stat.py'
Nov 24 09:44:40 compute-1 sudo[207617]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:44:40 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:44:40 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3d4003c50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:44:40 compute-1 python3.9[207619]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 09:44:40 compute-1 sudo[207617]: pam_unix(sudo:session): session closed for user root
Nov 24 09:44:40 compute-1 sudo[207695]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yacfyqgbhdcepbrypghpyjiylvseomnt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977480.1999085-1086-240885631230511/AnsiballZ_file.py'
Nov 24 09:44:40 compute-1 sudo[207695]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:44:41 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:44:41 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:44:41 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:44:41.095 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:44:41 compute-1 python3.9[207697]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:44:41 compute-1 sudo[207695]: pam_unix(sudo:session): session closed for user root
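
The preset file installed above pairs with the edpm-container-shutdown.service unit placed at 09:44:39. Its content is not shown in the log, but systemd presets are one directive per line, so 91-edpm-container-shutdown.preset presumably reduces to the single enable line sketched here (inferred from systemd.preset(5) conventions, not read from the file):

    # /etc/systemd/system-preset/91-edpm-container-shutdown.preset (presumed)
    enable edpm-container-shutdown.service
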
Nov 24 09:44:41 compute-1 ceph-mon[80009]: pgmap v523: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 1023 B/s wr, 3 op/s
Nov 24 09:44:41 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-haproxy-nfs-cephfs-compute-1-rsdpvy[85975]: [WARNING] 327/094441 (4) : Server backend/nfs.cephfs.2 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Nov 24 09:44:41 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:44:41 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:44:41 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:44:41.658 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:44:41 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:44:41 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe40c00a2b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:44:41 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:44:41 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3f8003b50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:44:41 compute-1 sudo[207847]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tlmthidaoizzoomkjiuwhftndhlvjqcb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977481.5954208-1122-178806726023392/AnsiballZ_systemd.py'
Nov 24 09:44:41 compute-1 sudo[207847]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:44:42 compute-1 python3.9[207849]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 24 09:44:42 compute-1 systemd[1]: Reloading.
Nov 24 09:44:42 compute-1 systemd-rc-local-generator[207872]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 09:44:42 compute-1 systemd-sysv-generator[207876]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 09:44:42 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:44:42 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3ec0030a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:44:42 compute-1 sudo[207847]: pam_unix(sudo:session): session closed for user root
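
The systemd module call logged at 09:44:42 (pid 207849) carries all of its parameters in the Invoked line; reassembled as a task it would read roughly as below, with only the task name invented. The daemon_reload=True parameter is what triggers the "systemd[1]: Reloading." line and the rc-local/sysv generator chatter that follows it.

    - name: Enable and start edpm-container-shutdown
      ansible.builtin.systemd:
        name: edpm-container-shutdown
        state: started
        enabled: true
        daemon_reload: true
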
Nov 24 09:44:43 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:44:43 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:44:43 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:44:43.097 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:44:43 compute-1 sudo[207911]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 09:44:43 compute-1 sudo[207911]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:44:43 compute-1 sudo[207911]: pam_unix(sudo:session): session closed for user root
Nov 24 09:44:43 compute-1 ceph-mon[80009]: pgmap v524: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 1023 B/s wr, 3 op/s
Nov 24 09:44:43 compute-1 sudo[208061]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qkewihqxlediicrfbdurmzodowzsrnhh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977483.2669077-1146-77466759861831/AnsiballZ_stat.py'
Nov 24 09:44:43 compute-1 sudo[208061]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:44:43 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:44:43 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:44:43 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:44:43.661 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:44:43 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:44:43 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3d4003c70 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:44:43 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:44:43 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe40c00a2b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:44:43 compute-1 python3.9[208063]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 09:44:43 compute-1 sudo[208061]: pam_unix(sudo:session): session closed for user root
Nov 24 09:44:44 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:44:44 compute-1 sudo[208142]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ktfqwcahfmycafshgnyuentgcecxchqn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977483.2669077-1146-77466759861831/AnsiballZ_file.py'
Nov 24 09:44:44 compute-1 sudo[208142]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:44:44 compute-1 python3.9[208144]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:44:44 compute-1 sudo[208142]: pam_unix(sudo:session): session closed for user root
Nov 24 09:44:44 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:44:44 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe40c00a2b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:44:44 compute-1 sudo[208294]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eczilnjvfmhdkqcwtnvtclrmztqykhtk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977484.5597186-1182-22371973878747/AnsiballZ_stat.py'
Nov 24 09:44:44 compute-1 sudo[208294]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:44:45 compute-1 python3.9[208296]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 09:44:45 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:44:45 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:44:45 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:44:45.099 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:44:45 compute-1 sudo[208294]: pam_unix(sudo:session): session closed for user root
Nov 24 09:44:45 compute-1 ceph-mon[80009]: pgmap v525: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 426 B/s wr, 1 op/s
Nov 24 09:44:45 compute-1 sudo[208372]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ohrkvwxdgmymzhrznvwdtzzbtaxashxn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977484.5597186-1182-22371973878747/AnsiballZ_file.py'
Nov 24 09:44:45 compute-1 sudo[208372]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:44:45 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 09:44:45 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:44:45 compute-1 python3.9[208374]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:44:45 compute-1 sudo[208372]: pam_unix(sudo:session): session closed for user root
Nov 24 09:44:45 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:44:45 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:44:45 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:44:45.664 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:44:45 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:44:45 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3ec0030a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:44:45 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:44:45 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3e0001090 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:44:46 compute-1 sudo[208525]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gvozxrrskdnefjvumgwkpepzuvwpeqtw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977485.8560112-1218-30563227485689/AnsiballZ_systemd.py'
Nov 24 09:44:46 compute-1 sudo[208525]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:44:46 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:44:46 compute-1 ceph-mon[80009]: pgmap v526: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 426 B/s wr, 1 op/s
Nov 24 09:44:46 compute-1 python3.9[208527]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 24 09:44:46 compute-1 systemd[1]: Reloading.
Nov 24 09:44:46 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:44:46 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3d4003e30 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:44:46 compute-1 systemd-rc-local-generator[208556]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 09:44:46 compute-1 systemd-sysv-generator[208560]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 09:44:46 compute-1 systemd[1]: Starting Create netns directory...
Nov 24 09:44:46 compute-1 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Nov 24 09:44:46 compute-1 systemd[1]: netns-placeholder.service: Deactivated successfully.
Nov 24 09:44:46 compute-1 systemd[1]: Finished Create netns directory.
Nov 24 09:44:46 compute-1 sudo[208525]: pam_unix(sudo:session): session closed for user root
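
For netns-placeholder.service itself the journal only reveals a description ("Create netns directory"), a oneshot lifecycle, and a transient mount that systemd escapes as run-netns-placeholder.mount, i.e. /run/netns/placeholder. The sketch below is a guess at a unit shape consistent with those lines; the ExecStart commands in particular are assumptions, not contents read from the file.

    [Unit]
    Description=Create netns directory

    [Service]
    Type=oneshot
    # Assumed: adding then removing a namespace forces /run/netns to exist
    # as a mount point, which would explain the short-lived
    # run-netns-placeholder.mount unit seen above.
    ExecStart=/usr/sbin/ip netns add placeholder
    ExecStart=/usr/sbin/ip netns delete placeholder

    [Install]
    WantedBy=multi-user.target
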
Nov 24 09:44:46 compute-1 podman[208564]: 2025-11-24 09:44:46.874778088 +0000 UTC m=+0.092896595 container health_status c4a7fece2a8abbd773f184380ebf0443df5298e1c3e7a580495db5c46749acd2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251118)
Nov 24 09:44:47 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:44:47 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:44:47 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:44:47.101 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:44:47 compute-1 sudo[208744]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sorrniwhjbmeeezvjostwtcavxwoygfo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977487.3359542-1248-218189055309317/AnsiballZ_file.py'
Nov 24 09:44:47 compute-1 sudo[208744]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:44:47 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:44:47 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:44:47 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:44:47.667 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:44:47 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:44:47 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe40c00a2d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:44:47 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:44:47 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe40c00a2d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:44:47 compute-1 python3.9[208746]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 09:44:47 compute-1 sudo[208744]: pam_unix(sudo:session): session closed for user root
Nov 24 09:44:48 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:44:48 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe40c00a2d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:44:48 compute-1 sudo[208897]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-euoiesuuhmjvhxcdnjznqxasxdsdmtoq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977488.2253418-1272-143177971968933/AnsiballZ_stat.py'
Nov 24 09:44:48 compute-1 sudo[208897]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:44:48 compute-1 python3.9[208899]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/multipathd/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 09:44:48 compute-1 ceph-mon[80009]: pgmap v527: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 426 B/s wr, 2 op/s
Nov 24 09:44:48 compute-1 sudo[208897]: pam_unix(sudo:session): session closed for user root
Nov 24 09:44:49 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:44:49 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:44:49 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:44:49 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:44:49.103 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:44:49 compute-1 sudo[209020]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pejatkaynzctbrrifatjdibpunjrtsdu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977488.2253418-1272-143177971968933/AnsiballZ_copy.py'
Nov 24 09:44:49 compute-1 sudo[209020]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:44:49 compute-1 python3.9[209022]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/multipathd/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1763977488.2253418-1272-143177971968933/.source _original_basename=healthcheck follow=False checksum=af9d0c1c8f3cb0e30ce9609be9d5b01924d0d23f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 24 09:44:49 compute-1 sudo[209020]: pam_unix(sudo:session): session closed for user root
Nov 24 09:44:49 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:44:49 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:44:49 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:44:49.672 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:44:49 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:44:49 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3d4003e50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:44:49 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:44:49 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3e0001090 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:44:50 compute-1 sudo[209173]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gdsanhgbtchurrsqvepejlpohnubwpdw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977489.8263288-1323-127935155078416/AnsiballZ_file.py'
Nov 24 09:44:50 compute-1 sudo[209173]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:44:50 compute-1 python3.9[209175]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 24 09:44:50 compute-1 sudo[209173]: pam_unix(sudo:session): session closed for user root
Nov 24 09:44:50 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:44:50 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3ec004590 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:44:50 compute-1 ceph-mon[80009]: pgmap v528: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Nov 24 09:44:50 compute-1 sudo[209325]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rldjwkgytbgmhvayiauqqsmvzihevdmn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977490.678763-1347-272991374662609/AnsiballZ_stat.py'
Nov 24 09:44:50 compute-1 sudo[209325]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:44:51 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:44:51 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:44:51 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:44:51.105 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:44:51 compute-1 python3.9[209327]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/multipathd.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 09:44:51 compute-1 sudo[209325]: pam_unix(sudo:session): session closed for user root
Nov 24 09:44:51 compute-1 sudo[209448]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mjitpijmehfuvzxyakarwsuuzxvgijmr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977490.678763-1347-272991374662609/AnsiballZ_copy.py'
Nov 24 09:44:51 compute-1 sudo[209448]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:44:51 compute-1 python3.9[209450]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/multipathd.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1763977490.678763-1347-272991374662609/.source.json _original_basename=.ydzvx2lm follow=False checksum=3f7959ee8ac9757398adcc451c3b416c957d7c14 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:44:51 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:44:51 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:44:51 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:44:51.674 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:44:51 compute-1 sudo[209448]: pam_unix(sudo:session): session closed for user root
Nov 24 09:44:51 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:44:51 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe40c00a2f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:44:51 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:44:51 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3d4003e70 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:44:52 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:44:52 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3e0001090 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:44:52 compute-1 sudo[209601]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ffrestuzneeapgcsaxbthtnndnouzgxk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977492.179974-1392-179593880396457/AnsiballZ_file.py'
Nov 24 09:44:52 compute-1 sudo[209601]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:44:52 compute-1 ceph-mon[80009]: pgmap v529: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Nov 24 09:44:52 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-haproxy-nfs-cephfs-compute-1-rsdpvy[85975]: [WARNING] 327/094452 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 1ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Nov 24 09:44:52 compute-1 python3.9[209603]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/multipathd state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:44:52 compute-1 sudo[209601]: pam_unix(sudo:session): session closed for user root
Nov 24 09:44:53 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:44:53 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:44:53 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:44:53.109 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:44:53 compute-1 sudo[209753]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ncipwkbwyqjpaukmhtyizwywmbbdmzoy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977493.1166284-1416-210574257312686/AnsiballZ_stat.py'
Nov 24 09:44:53 compute-1 sudo[209753]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:44:53 compute-1 sudo[209753]: pam_unix(sudo:session): session closed for user root
Nov 24 09:44:53 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:44:53 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:44:53 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:44:53.678 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:44:53 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:44:53 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3ec004590 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:44:53 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:44:53 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe40c00a310 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:44:54 compute-1 sudo[209877]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eymrqnkzpmlojrswpspoobcngpneptbs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977493.1166284-1416-210574257312686/AnsiballZ_copy.py'
Nov 24 09:44:54 compute-1 sudo[209877]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:44:54 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:44:54 compute-1 sudo[209877]: pam_unix(sudo:session): session closed for user root
Nov 24 09:44:54 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:44:54 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3d4003e90 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:44:54 compute-1 ceph-mon[80009]: pgmap v530: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:44:55 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:44:55 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:44:55 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:44:55.112 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:44:55 compute-1 sudo[210029]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zmbpmizfahbuzslwhxcihvgppzlxeusm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977494.757649-1467-203401509878095/AnsiballZ_container_config_data.py'
Nov 24 09:44:55 compute-1 sudo[210029]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:44:55 compute-1 python3.9[210031]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/multipathd config_pattern=*.json debug=False
Nov 24 09:44:55 compute-1 sudo[210029]: pam_unix(sudo:session): session closed for user root
Nov 24 09:44:55 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:44:55 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:44:55 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:44:55.682 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:44:55 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:44:55 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3e00029f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:44:55 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:44:55 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3ec004590 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:44:56 compute-1 sudo[210195]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gtxtdboxvwhdvmhxvhyusofouybglarf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977495.8069587-1494-280938264274982/AnsiballZ_container_config_hash.py'
Nov 24 09:44:56 compute-1 sudo[210195]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:44:56 compute-1 podman[210156]: 2025-11-24 09:44:56.249243014 +0000 UTC m=+0.050445127 container health_status 6794d9d2f16f865643977633ea2bfec0506af6c4f15262c278b71a4c0754ea0b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 24 09:44:56 compute-1 python3.9[210202]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Nov 24 09:44:56 compute-1 sudo[210195]: pam_unix(sudo:session): session closed for user root
Nov 24 09:44:56 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:44:56 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe40c00a330 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:44:56 compute-1 ceph-mon[80009]: pgmap v531: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:44:57 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:44:57 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:44:57 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:44:57.114 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:44:57 compute-1 sudo[210300]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 09:44:57 compute-1 sudo[210300]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:44:57 compute-1 sudo[210300]: pam_unix(sudo:session): session closed for user root
Nov 24 09:44:57 compute-1 sudo[210344]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Nov 24 09:44:57 compute-1 sudo[210344]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:44:57 compute-1 sudo[210402]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uvhchvncgseilneivtlvittsskbddecp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977496.8537452-1521-127021675476029/AnsiballZ_podman_container_info.py'
Nov 24 09:44:57 compute-1 sudo[210402]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:44:57 compute-1 python3.9[210404]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Nov 24 09:44:57 compute-1 sudo[210402]: pam_unix(sudo:session): session closed for user root
Nov 24 09:44:57 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:44:57 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:44:57 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:44:57.685 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:44:57 compute-1 sudo[210344]: pam_unix(sudo:session): session closed for user root
Nov 24 09:44:57 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:44:57 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3d4003eb0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:44:57 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:44:57 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3e00029f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:44:57 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 24 09:44:57 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 09:44:57 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Nov 24 09:44:57 compute-1 ceph-mon[80009]: log_channel(audit) log [INF] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 09:44:57 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Nov 24 09:44:57 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Nov 24 09:44:57 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Nov 24 09:44:57 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 09:44:57 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Nov 24 09:44:57 compute-1 ceph-mon[80009]: log_channel(audit) log [INF] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 09:44:57 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 24 09:44:57 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 09:44:58 compute-1 ceph-mon[80009]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #34. Immutable memtables: 0.
Nov 24 09:44:58 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:44:58.401454) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 24 09:44:58 compute-1 ceph-mon[80009]: rocksdb: [db/flush_job.cc:856] [default] [JOB 17] Flushing memtable with next log file: 34
Nov 24 09:44:58 compute-1 ceph-mon[80009]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763977498401517, "job": 17, "event": "flush_started", "num_memtables": 1, "num_entries": 1150, "num_deletes": 255, "total_data_size": 2722643, "memory_usage": 2762160, "flush_reason": "Manual Compaction"}
Nov 24 09:44:58 compute-1 ceph-mon[80009]: rocksdb: [db/flush_job.cc:885] [default] [JOB 17] Level-0 flush table #35: started
Nov 24 09:44:58 compute-1 ceph-mon[80009]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763977498411315, "cf_name": "default", "job": 17, "event": "table_file_creation", "file_number": 35, "file_size": 1799304, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 18681, "largest_seqno": 19826, "table_properties": {"data_size": 1794258, "index_size": 2570, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1413, "raw_key_size": 10170, "raw_average_key_size": 18, "raw_value_size": 1784267, "raw_average_value_size": 3232, "num_data_blocks": 115, "num_entries": 552, "num_filter_entries": 552, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763977399, "oldest_key_time": 1763977399, "file_creation_time": 1763977498, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "299c38d0-06ca-4074-b462-97cee3c14bc3", "db_session_id": "IKBI0BILOO7CZC90TSBP", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Nov 24 09:44:58 compute-1 ceph-mon[80009]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 17] Flush lasted 9969 microseconds, and 4174 cpu microseconds.
Nov 24 09:44:58 compute-1 ceph-mon[80009]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 24 09:44:58 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:44:58.411429) [db/flush_job.cc:967] [default] [JOB 17] Level-0 flush table #35: 1799304 bytes OK
Nov 24 09:44:58 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:44:58.411486) [db/memtable_list.cc:519] [default] Level-0 commit table #35 started
Nov 24 09:44:58 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:44:58.412765) [db/memtable_list.cc:722] [default] Level-0 commit table #35: memtable #1 done
Nov 24 09:44:58 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:44:58.412779) EVENT_LOG_v1 {"time_micros": 1763977498412776, "job": 17, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 24 09:44:58 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:44:58.412794) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 24 09:44:58 compute-1 ceph-mon[80009]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 17] Try to delete WAL files size 2717074, prev total WAL file size 2717074, number of live WAL files 2.
Nov 24 09:44:58 compute-1 ceph-mon[80009]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000031.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 09:44:58 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:44:58.413891) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0030' seq:72057594037927935, type:22 .. '6C6F676D00323532' seq:0, type:0; will stop at (end)
Nov 24 09:44:58 compute-1 ceph-mon[80009]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 18] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 24 09:44:58 compute-1 ceph-mon[80009]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 17 Base level 0, inputs: [35(1757KB)], [33(11MB)]
Nov 24 09:44:58 compute-1 ceph-mon[80009]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763977498413950, "job": 18, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [35], "files_L6": [33], "score": -1, "input_data_size": 14063685, "oldest_snapshot_seqno": -1}
Nov 24 09:44:58 compute-1 ceph-mon[80009]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 18] Generated table #36: 5012 keys, 13582532 bytes, temperature: kUnknown
Nov 24 09:44:58 compute-1 ceph-mon[80009]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763977498483554, "cf_name": "default", "job": 18, "event": "table_file_creation", "file_number": 36, "file_size": 13582532, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13547529, "index_size": 21389, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 12549, "raw_key_size": 127282, "raw_average_key_size": 25, "raw_value_size": 13455256, "raw_average_value_size": 2684, "num_data_blocks": 878, "num_entries": 5012, "num_filter_entries": 5012, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763976422, "oldest_key_time": 0, "file_creation_time": 1763977498, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "299c38d0-06ca-4074-b462-97cee3c14bc3", "db_session_id": "IKBI0BILOO7CZC90TSBP", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Nov 24 09:44:58 compute-1 ceph-mon[80009]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 24 09:44:58 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:44:58.483886) [db/compaction/compaction_job.cc:1663] [default] [JOB 18] Compacted 1@0 + 1@6 files to L6 => 13582532 bytes
Nov 24 09:44:58 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:44:58.485246) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 201.7 rd, 194.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.7, 11.7 +0.0 blob) out(13.0 +0.0 blob), read-write-amplify(15.4) write-amplify(7.5) OK, records in: 5536, records dropped: 524 output_compression: NoCompression
Nov 24 09:44:58 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:44:58.485262) EVENT_LOG_v1 {"time_micros": 1763977498485255, "job": 18, "event": "compaction_finished", "compaction_time_micros": 69741, "compaction_time_cpu_micros": 25581, "output_level": 6, "num_output_files": 1, "total_output_size": 13582532, "num_input_records": 5536, "num_output_records": 5012, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 24 09:44:58 compute-1 ceph-mon[80009]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000035.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 09:44:58 compute-1 ceph-mon[80009]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763977498485801, "job": 18, "event": "table_file_deletion", "file_number": 35}
Nov 24 09:44:58 compute-1 ceph-mon[80009]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000033.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 09:44:58 compute-1 ceph-mon[80009]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763977498488261, "job": 18, "event": "table_file_deletion", "file_number": 33}
Nov 24 09:44:58 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:44:58.413780) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 09:44:58 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:44:58.488485) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 09:44:58 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:44:58.488493) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 09:44:58 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:44:58.488495) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 09:44:58 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:44:58.488497) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 09:44:58 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:44:58.488499) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 09:44:58 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:44:58 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3ec004590 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:44:58 compute-1 systemd[1]: virtnodedevd.service: Deactivated successfully.
Nov 24 09:44:58 compute-1 ceph-mon[80009]: pgmap v532: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Nov 24 09:44:58 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 09:44:58 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 09:44:58 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:44:58 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:44:58 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 09:44:58 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 09:44:58 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 09:44:59 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:44:59 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:44:59 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:44:59 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:44:59.116 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:44:59 compute-1 sudo[210615]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yjkqjafhrskbvftpfxlnorfbwvmfjdfo ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1763977498.7989979-1560-264378887347359/AnsiballZ_edpm_container_manage.py'
Nov 24 09:44:59 compute-1 sudo[210615]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:44:59 compute-1 python3[210617]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/multipathd config_id=multipathd config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Nov 24 09:44:59 compute-1 systemd[1]: virtproxyd.service: Deactivated successfully.
Nov 24 09:44:59 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:44:59 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:44:59 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:44:59.688 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:44:59 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:44:59 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe40c00a350 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:44:59 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:44:59 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3d4003ed0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:45:00 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 09:45:00 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:45:00 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:45:00 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3e00029f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:45:00 compute-1 podman[210630]: 2025-11-24 09:45:00.882741856 +0000 UTC m=+1.244656420 image pull 5a87eb2d1bea5c4c3bce654551fc0b05a96cf5556b36110e17bddeee8189b072 quay.io/podified-antelope-centos9/openstack-multipathd:current-podified
Nov 24 09:45:00 compute-1 ceph-mon[80009]: pgmap v533: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Nov 24 09:45:00 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:45:01 compute-1 podman[210691]: 2025-11-24 09:45:01.029580982 +0000 UTC m=+0.048582321 container create 16798e42088c21440960ac0ba4f86339f9edab5c078277a1dcf4fb01c220329c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118)
Nov 24 09:45:01 compute-1 podman[210691]: 2025-11-24 09:45:01.005323943 +0000 UTC m=+0.024325312 image pull 5a87eb2d1bea5c4c3bce654551fc0b05a96cf5556b36110e17bddeee8189b072 quay.io/podified-antelope-centos9/openstack-multipathd:current-podified
Nov 24 09:45:01 compute-1 python3[210617]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name multipathd --conmon-pidfile /run/multipathd.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --healthcheck-command /openstack/healthcheck --label config_id=multipathd --label container_name=multipathd --label managed_by=edpm_ansible --label config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro --volume /dev:/dev --volume /run/udev:/run/udev --volume /sys:/sys --volume /lib/modules:/lib/modules:ro --volume /etc/iscsi:/etc/iscsi:ro --volume /var/lib/iscsi:/var/lib/iscsi --volume /etc/multipath:/etc/multipath:z --volume /etc/multipath.conf:/etc/multipath.conf:ro --volume /var/lib/openstack/healthchecks/multipathd:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-multipathd:current-podified
Nov 24 09:45:01 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:45:01 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:45:01 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:45:01.119 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:45:01 compute-1 sudo[210615]: pam_unix(sudo:session): session closed for user root
Nov 24 09:45:01 compute-1 sudo[210878]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ymdxbqsjcgghyvuvxtfcdusofsxwhxmw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977501.3239858-1584-195878022782308/AnsiballZ_stat.py'
Nov 24 09:45:01 compute-1 sudo[210878]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:45:01 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:45:01 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:45:01 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:45:01.691 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:45:01 compute-1 python3.9[210880]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 09:45:01 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:45:01 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3ec004590 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:45:01 compute-1 sudo[210878]: pam_unix(sudo:session): session closed for user root
Nov 24 09:45:01 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:45:01 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe40c00a370 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:45:02 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:45:02 : epoch 69242889 : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 09:45:02 compute-1 sudo[211033]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yfivyowvvvvihyvkzuutshuixweqafok ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977502.2088482-1611-51030898573110/AnsiballZ_file.py'
Nov 24 09:45:02 compute-1 sudo[211033]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:45:02 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 24 09:45:02 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 24 09:45:02 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:45:02 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3d4003ef0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:45:02 compute-1 python3.9[211035]: ansible-file Invoked with path=/etc/systemd/system/edpm_multipathd.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:45:02 compute-1 sudo[211036]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 24 09:45:02 compute-1 sudo[211036]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:45:02 compute-1 sudo[211036]: pam_unix(sudo:session): session closed for user root
Nov 24 09:45:02 compute-1 sudo[211033]: pam_unix(sudo:session): session closed for user root
Nov 24 09:45:02 compute-1 sudo[211134]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ovvjmgrmdwweqjxkglnckrvjpibjvtqc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977502.2088482-1611-51030898573110/AnsiballZ_stat.py'
Nov 24 09:45:02 compute-1 sudo[211134]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:45:02 compute-1 ceph-mon[80009]: pgmap v534: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 85 B/s wr, 0 op/s
Nov 24 09:45:02 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:45:02 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:45:03 compute-1 python3.9[211136]: ansible-stat Invoked with path=/etc/systemd/system/edpm_multipathd_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 09:45:03 compute-1 sudo[211134]: pam_unix(sudo:session): session closed for user root
Nov 24 09:45:03 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:45:03 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:45:03 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:45:03.121 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:45:03 compute-1 sudo[211188]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 09:45:03 compute-1 sudo[211188]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:45:03 compute-1 sudo[211188]: pam_unix(sudo:session): session closed for user root
Nov 24 09:45:03 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:45:03 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 24 09:45:03 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:45:03.694 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 24 09:45:03 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:45:03 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3e0003af0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:45:03 compute-1 sudo[211310]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-opfsmkoprmjexfqblaalccrznezknspi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977503.1061673-1611-88957895392269/AnsiballZ_copy.py'
Nov 24 09:45:03 compute-1 sudo[211310]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:45:03 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:45:03 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3ec004590 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:45:04 compute-1 python3.9[211312]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1763977503.1061673-1611-88957895392269/source dest=/etc/systemd/system/edpm_multipathd.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:45:04 compute-1 sudo[211310]: pam_unix(sudo:session): session closed for user root
Nov 24 09:45:04 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:45:04 compute-1 sudo[211387]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gqkvvluhtzxfvkoatdvoqrmptmfwdnew ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977503.1061673-1611-88957895392269/AnsiballZ_systemd.py'
Nov 24 09:45:04 compute-1 sudo[211387]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:45:04 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:45:04 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe40c00a390 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:45:04 compute-1 python3.9[211389]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 24 09:45:04 compute-1 systemd[1]: Reloading.
Nov 24 09:45:04 compute-1 systemd-rc-local-generator[211415]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 09:45:04 compute-1 systemd-sysv-generator[211418]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 09:45:04 compute-1 ceph-mon[80009]: pgmap v535: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Nov 24 09:45:04 compute-1 sudo[211387]: pam_unix(sudo:session): session closed for user root
Nov 24 09:45:05 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:45:05 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:45:05 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:45:05.123 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:45:05 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:45:05 : epoch 69242889 : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 09:45:05 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:45:05 : epoch 69242889 : compute-1 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 09:45:05 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:45:05 : epoch 69242889 : compute-1 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 09:45:05 compute-1 sudo[211497]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-edkyubbnakxhyajojuswbenuzysutpva ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977503.1061673-1611-88957895392269/AnsiballZ_systemd.py'
Nov 24 09:45:05 compute-1 sudo[211497]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:45:05 compute-1 python3.9[211499]: ansible-systemd Invoked with state=restarted name=edpm_multipathd.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 24 09:45:05 compute-1 systemd[1]: Reloading.
Nov 24 09:45:05 compute-1 systemd-rc-local-generator[211529]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 09:45:05 compute-1 systemd-sysv-generator[211533]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 09:45:05 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:45:05 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:45:05 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:45:05.697 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:45:05 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:45:05 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3d4003f10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:45:05 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:45:05 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3e0003af0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:45:05 compute-1 systemd[1]: Starting multipathd container...
Nov 24 09:45:05 compute-1 systemd[1]: Started libcrun container.
Nov 24 09:45:05 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9a72736bce2b1fff6e10002580058dd636e902f41fddbd4e487c2d61fd3699f3/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Nov 24 09:45:05 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9a72736bce2b1fff6e10002580058dd636e902f41fddbd4e487c2d61fd3699f3/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Nov 24 09:45:05 compute-1 systemd[1]: Started /usr/bin/podman healthcheck run 16798e42088c21440960ac0ba4f86339f9edab5c078277a1dcf4fb01c220329c.
Nov 24 09:45:05 compute-1 podman[211539]: 2025-11-24 09:45:05.990875248 +0000 UTC m=+0.112193792 container init 16798e42088c21440960ac0ba4f86339f9edab5c078277a1dcf4fb01c220329c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=multipathd, io.buildah.version=1.41.3)
Nov 24 09:45:06 compute-1 multipathd[211555]: + sudo -E kolla_set_configs
Nov 24 09:45:06 compute-1 podman[211539]: 2025-11-24 09:45:06.019849913 +0000 UTC m=+0.141168427 container start 16798e42088c21440960ac0ba4f86339f9edab5c078277a1dcf4fb01c220329c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 24 09:45:06 compute-1 podman[211539]: multipathd
Nov 24 09:45:06 compute-1 systemd[1]: Started multipathd container.
Nov 24 09:45:06 compute-1 sudo[211561]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Nov 24 09:45:06 compute-1 sudo[211561]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Nov 24 09:45:06 compute-1 sudo[211561]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Nov 24 09:45:06 compute-1 sudo[211497]: pam_unix(sudo:session): session closed for user root
Nov 24 09:45:06 compute-1 multipathd[211555]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Nov 24 09:45:06 compute-1 multipathd[211555]: INFO:__main__:Validating config file
Nov 24 09:45:06 compute-1 multipathd[211555]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Nov 24 09:45:06 compute-1 multipathd[211555]: INFO:__main__:Writing out command to execute
Nov 24 09:45:06 compute-1 sudo[211561]: pam_unix(sudo:session): session closed for user root
Nov 24 09:45:06 compute-1 multipathd[211555]: ++ cat /run_command
Nov 24 09:45:06 compute-1 multipathd[211555]: + CMD='/usr/sbin/multipathd -d'
Nov 24 09:45:06 compute-1 multipathd[211555]: + ARGS=
Nov 24 09:45:06 compute-1 multipathd[211555]: + sudo kolla_copy_cacerts
Nov 24 09:45:06 compute-1 sudo[211586]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_copy_cacerts
Nov 24 09:45:06 compute-1 sudo[211586]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Nov 24 09:45:06 compute-1 sudo[211586]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Nov 24 09:45:06 compute-1 sudo[211586]: pam_unix(sudo:session): session closed for user root
Nov 24 09:45:06 compute-1 multipathd[211555]: Running command: '/usr/sbin/multipathd -d'
Nov 24 09:45:06 compute-1 multipathd[211555]: + [[ ! -n '' ]]
Nov 24 09:45:06 compute-1 multipathd[211555]: + . kolla_extend_start
Nov 24 09:45:06 compute-1 multipathd[211555]: + echo 'Running command: '\''/usr/sbin/multipathd -d'\'''
Nov 24 09:45:06 compute-1 multipathd[211555]: + umask 0022
Nov 24 09:45:06 compute-1 multipathd[211555]: + exec /usr/sbin/multipathd -d
Nov 24 09:45:06 compute-1 podman[211562]: 2025-11-24 09:45:06.11734071 +0000 UTC m=+0.087794288 container health_status 16798e42088c21440960ac0ba4f86339f9edab5c078277a1dcf4fb01c220329c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=starting, health_failing_streak=1, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 24 09:45:06 compute-1 multipathd[211555]: 3449.728612 | --------start up--------
Nov 24 09:45:06 compute-1 multipathd[211555]: 3449.728629 | read /etc/multipath.conf
Nov 24 09:45:06 compute-1 systemd[1]: 16798e42088c21440960ac0ba4f86339f9edab5c078277a1dcf4fb01c220329c-1f84109cc96ee636.service: Main process exited, code=exited, status=1/FAILURE
Nov 24 09:45:06 compute-1 systemd[1]: 16798e42088c21440960ac0ba4f86339f9edab5c078277a1dcf4fb01c220329c-1f84109cc96ee636.service: Failed with result 'exit-code'.
Nov 24 09:45:06 compute-1 multipathd[211555]: 3449.733737 | path checkers start up
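[annotation] The multipathd[211555] lines above are the standard Kolla entrypoint sequence: kolla_set_configs reads /var/lib/kolla/config_files/config.json, applies the COPY_ALWAYS strategy, the wrapper reads the service command from /run_command, and finally execs it. A minimal reconstruction of that wrapper from the `+`-prefixed set -x trace (the traced guard condition is omitted and comments are assumptions, not the actual kolla_start source):

    #!/bin/bash
    # Reconstruction of the Kolla start wrapper traced above.
    sudo -E kolla_set_configs          # copy config per KOLLA_CONFIG_STRATEGY
    CMD=$(cat /run_command)            # here: '/usr/sbin/multipathd -d'
    ARGS=
    sudo kolla_copy_cacerts            # refresh CA trust inside the container
    . kolla_extend_start               # per-image hook, no-op for multipathd
    echo "Running command: '${CMD}'"
    umask 0022
    exec ${CMD} ${ARGS}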
Nov 24 09:45:06 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:45:06 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3ec004590 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:45:06 compute-1 ceph-mon[80009]: pgmap v536: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Nov 24 09:45:07 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:45:07 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:45:07 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:45:07.125 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:45:07 compute-1 python3.9[211744]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath/.multipath_restart_required follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 09:45:07 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:45:07 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:45:07 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:45:07.700 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:45:07 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:45:07 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe40c00a3b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:45:07 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:45:07 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3d4003f30 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:45:08 compute-1 sudo[211897]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ytmifxuorjfmwywwtxpcqtgjcrhkzhoj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977507.7692976-1719-42673384191217/AnsiballZ_command.py'
Nov 24 09:45:08 compute-1 sudo[211897]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:45:08 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:45:08 : epoch 69242889 : compute-1 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Nov 24 09:45:08 compute-1 python3.9[211899]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps --filter volume=/etc/multipath.conf --format {{.Names}} _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 09:45:08 compute-1 sudo[211897]: pam_unix(sudo:session): session closed for user root
Nov 24 09:45:08 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:45:08 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3e0003af0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:45:08 compute-1 sudo[212062]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vqzteurzeuqgyhbsvgbcowxinapsgjpt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977508.65113-1743-277883838397240/AnsiballZ_systemd.py'
Nov 24 09:45:08 compute-1 sudo[212062]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:45:08 compute-1 ceph-mon[80009]: pgmap v537: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 938 B/s wr, 3 op/s
Nov 24 09:45:09 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:45:09 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:45:09 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:45:09 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:45:09.127 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:45:09 compute-1 python3.9[212064]: ansible-ansible.builtin.systemd Invoked with name=edpm_multipathd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 24 09:45:09 compute-1 systemd[1]: Stopping multipathd container...
Nov 24 09:45:09 compute-1 multipathd[211555]: 3452.919644 | exit (signal)
Nov 24 09:45:09 compute-1 multipathd[211555]: 3452.919704 | --------shut down-------
Nov 24 09:45:09 compute-1 systemd[1]: libpod-16798e42088c21440960ac0ba4f86339f9edab5c078277a1dcf4fb01c220329c.scope: Deactivated successfully.
Nov 24 09:45:09 compute-1 podman[212068]: 2025-11-24 09:45:09.342647564 +0000 UTC m=+0.073757713 container died 16798e42088c21440960ac0ba4f86339f9edab5c078277a1dcf4fb01c220329c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=multipathd, tcib_managed=true, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 09:45:09 compute-1 systemd[1]: 16798e42088c21440960ac0ba4f86339f9edab5c078277a1dcf4fb01c220329c-1f84109cc96ee636.timer: Deactivated successfully.
Nov 24 09:45:09 compute-1 systemd[1]: Stopped /usr/bin/podman healthcheck run 16798e42088c21440960ac0ba4f86339f9edab5c078277a1dcf4fb01c220329c.
Nov 24 09:45:09 compute-1 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-16798e42088c21440960ac0ba4f86339f9edab5c078277a1dcf4fb01c220329c-userdata-shm.mount: Deactivated successfully.
Nov 24 09:45:09 compute-1 systemd[1]: var-lib-containers-storage-overlay-9a72736bce2b1fff6e10002580058dd636e902f41fddbd4e487c2d61fd3699f3-merged.mount: Deactivated successfully.
Nov 24 09:45:09 compute-1 podman[212068]: 2025-11-24 09:45:09.520365923 +0000 UTC m=+0.251476072 container cleanup 16798e42088c21440960ac0ba4f86339f9edab5c078277a1dcf4fb01c220329c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=multipathd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible)
Nov 24 09:45:09 compute-1 podman[212068]: multipathd
Nov 24 09:45:09 compute-1 podman[212097]: multipathd
Nov 24 09:45:09 compute-1 systemd[1]: edpm_multipathd.service: Deactivated successfully.
Nov 24 09:45:09 compute-1 systemd[1]: Stopped multipathd container.
Nov 24 09:45:09 compute-1 systemd[1]: Starting multipathd container...
Nov 24 09:45:09 compute-1 systemd[1]: Started libcrun container.
Nov 24 09:45:09 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9a72736bce2b1fff6e10002580058dd636e902f41fddbd4e487c2d61fd3699f3/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Nov 24 09:45:09 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9a72736bce2b1fff6e10002580058dd636e902f41fddbd4e487c2d61fd3699f3/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Nov 24 09:45:09 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:45:09 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:45:09 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:45:09.703 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:45:09 compute-1 systemd[1]: Started /usr/bin/podman healthcheck run 16798e42088c21440960ac0ba4f86339f9edab5c078277a1dcf4fb01c220329c.
Nov 24 09:45:09 compute-1 podman[212110]: 2025-11-24 09:45:09.728514044 +0000 UTC m=+0.101915118 container init 16798e42088c21440960ac0ba4f86339f9edab5c078277a1dcf4fb01c220329c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, container_name=multipathd, io.buildah.version=1.41.3)
Nov 24 09:45:09 compute-1 multipathd[212127]: + sudo -E kolla_set_configs
Nov 24 09:45:09 compute-1 sudo[212133]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Nov 24 09:45:09 compute-1 sudo[212133]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Nov 24 09:45:09 compute-1 sudo[212133]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Nov 24 09:45:09 compute-1 podman[212110]: 2025-11-24 09:45:09.758008262 +0000 UTC m=+0.131409306 container start 16798e42088c21440960ac0ba4f86339f9edab5c078277a1dcf4fb01c220329c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=multipathd)
Nov 24 09:45:09 compute-1 podman[212110]: multipathd
Nov 24 09:45:09 compute-1 systemd[1]: Started multipathd container.
Nov 24 09:45:09 compute-1 multipathd[212127]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Nov 24 09:45:09 compute-1 multipathd[212127]: INFO:__main__:Validating config file
Nov 24 09:45:09 compute-1 multipathd[212127]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Nov 24 09:45:09 compute-1 multipathd[212127]: INFO:__main__:Writing out command to execute
Nov 24 09:45:09 compute-1 sudo[212133]: pam_unix(sudo:session): session closed for user root
Nov 24 09:45:09 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:45:09 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3ec004590 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:45:09 compute-1 sudo[212062]: pam_unix(sudo:session): session closed for user root
Nov 24 09:45:09 compute-1 multipathd[212127]: ++ cat /run_command
Nov 24 09:45:09 compute-1 multipathd[212127]: + CMD='/usr/sbin/multipathd -d'
Nov 24 09:45:09 compute-1 multipathd[212127]: + ARGS=
Nov 24 09:45:09 compute-1 multipathd[212127]: + sudo kolla_copy_cacerts
Nov 24 09:45:09 compute-1 sudo[212155]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_copy_cacerts
Nov 24 09:45:09 compute-1 sudo[212155]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Nov 24 09:45:09 compute-1 sudo[212155]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Nov 24 09:45:09 compute-1 podman[212134]: 2025-11-24 09:45:09.815718887 +0000 UTC m=+0.049172815 container health_status 16798e42088c21440960ac0ba4f86339f9edab5c078277a1dcf4fb01c220329c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=starting, health_failing_streak=1, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Nov 24 09:45:09 compute-1 sudo[212155]: pam_unix(sudo:session): session closed for user root
Nov 24 09:45:09 compute-1 multipathd[212127]: + [[ ! -n '' ]]
Nov 24 09:45:09 compute-1 multipathd[212127]: + . kolla_extend_start
Nov 24 09:45:09 compute-1 multipathd[212127]: Running command: '/usr/sbin/multipathd -d'
Nov 24 09:45:09 compute-1 multipathd[212127]: + echo 'Running command: '\''/usr/sbin/multipathd -d'\'''
Nov 24 09:45:09 compute-1 multipathd[212127]: + umask 0022
Nov 24 09:45:09 compute-1 multipathd[212127]: + exec /usr/sbin/multipathd -d
Nov 24 09:45:09 compute-1 systemd[1]: 16798e42088c21440960ac0ba4f86339f9edab5c078277a1dcf4fb01c220329c-3e19af4066309da.service: Main process exited, code=exited, status=1/FAILURE
Nov 24 09:45:09 compute-1 systemd[1]: 16798e42088c21440960ac0ba4f86339f9edab5c078277a1dcf4fb01c220329c-3e19af4066309da.service: Failed with result 'exit-code'.
Nov 24 09:45:09 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:45:09 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3ec004590 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:45:09 compute-1 multipathd[212127]: 3453.443380 | --------start up--------
Nov 24 09:45:09 compute-1 multipathd[212127]: 3453.443398 | read /etc/multipath.conf
Nov 24 09:45:09 compute-1 multipathd[212127]: 3453.447961 | path checkers start up
Nov 24 09:45:10 compute-1 sudo[212318]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yghgnlofxwkdzsdmmivlnuubevohzmyz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977510.2206774-1767-211162991439339/AnsiballZ_file.py'
Nov 24 09:45:10 compute-1 sudo[212318]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:45:10 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:45:10 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe40c00a3d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:45:10 compute-1 python3.9[212320]: ansible-ansible.builtin.file Invoked with path=/etc/multipath/.multipath_restart_required state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:45:10 compute-1 sudo[212318]: pam_unix(sudo:session): session closed for user root
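[annotation] Taken together, the stat, podman ps, systemd-restart, and file-removal tasks above implement a flag-file pattern: the multipathd container is restarted only because an earlier step marked a restart as required, and the marker is cleared afterwards. A condensed sketch of the same logic (flag path, filter, and unit name from the log; the shell framing is an assumption, the real flow is driven by the edpm_ansible role):

    # Conditional restart keyed on a flag file, as traced above.
    flag=/etc/multipath/.multipath_restart_required
    if [ -e "$flag" ]; then
        podman ps --filter volume=/etc/multipath.conf --format '{{.Names}}'
        systemctl restart edpm_multipathd.service
        rm -f "$flag"
    fi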
Nov 24 09:45:10 compute-1 ceph-mon[80009]: pgmap v538: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Nov 24 09:45:11 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:45:11 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:45:11 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:45:11.129 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:45:11 compute-1 systemd[1]: virtsecretd.service: Deactivated successfully.
Nov 24 09:45:11 compute-1 systemd[1]: virtqemud.service: Deactivated successfully.
Nov 24 09:45:11 compute-1 sudo[212472]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dnuhxhqpnrnoecidpvnkzghrmxpffnlx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977511.4312928-1803-156320869600943/AnsiballZ_file.py'
Nov 24 09:45:11 compute-1 sudo[212472]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:45:11 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:45:11 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:45:11 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:45:11.706 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:45:11 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:45:11 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3e8002690 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:45:11 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:45:11 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3ec004590 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:45:11 compute-1 python3.9[212474]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Nov 24 09:45:11 compute-1 sudo[212472]: pam_unix(sudo:session): session closed for user root
Nov 24 09:45:12 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:45:12 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3d4003f90 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:45:12 compute-1 sudo[212625]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vlrlxclsbhylgbhstwosrbjpuosipjxi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977512.2895358-1827-87631034141558/AnsiballZ_modprobe.py'
Nov 24 09:45:12 compute-1 sudo[212625]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:45:12 compute-1 python3.9[212627]: ansible-community.general.modprobe Invoked with name=nvme-fabrics state=present params= persistent=disabled
Nov 24 09:45:12 compute-1 kernel: Key type psk registered
Nov 24 09:45:12 compute-1 sudo[212625]: pam_unix(sudo:session): session closed for user root
Nov 24 09:45:13 compute-1 ceph-mon[80009]: pgmap v539: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 1023 B/s wr, 3 op/s
Nov 24 09:45:13 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:45:13 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:45:13 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:45:13.130 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:45:13 compute-1 sudo[212788]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uospcwvexdfjvsasekrzqrkprahjrxeu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977513.281919-1851-49564879507983/AnsiballZ_stat.py'
Nov 24 09:45:13 compute-1 sudo[212788]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:45:13 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:45:13 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:45:13 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:45:13.708 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:45:13 compute-1 python3.9[212790]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/nvme-fabrics.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 09:45:13 compute-1 sudo[212788]: pam_unix(sudo:session): session closed for user root
Nov 24 09:45:13 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:45:13 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe40c00a3f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:45:13 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:45:13 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3e8002690 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:45:14 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:45:14 compute-1 sudo[212912]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ndstaekxwtdkhuwpbserkzvctjvlbloj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977513.281919-1851-49564879507983/AnsiballZ_copy.py'
Nov 24 09:45:14 compute-1 sudo[212912]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:45:14 compute-1 python3.9[212914]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/nvme-fabrics.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1763977513.281919-1851-49564879507983/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=783c778f0c68cc414f35486f234cbb1cf3f9bbff backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:45:14 compute-1 sudo[212912]: pam_unix(sudo:session): session closed for user root
Nov 24 09:45:14 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:45:14 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3ec004590 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:45:14 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-haproxy-nfs-cephfs-compute-1-rsdpvy[85975]: [WARNING] 327/094514 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Nov 24 09:45:15 compute-1 ceph-mon[80009]: pgmap v540: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 938 B/s wr, 3 op/s
Nov 24 09:45:15 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:45:15 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:45:15 compute-1 sudo[213064]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eeoruwnoggvqyeugtphjnhscvwxqlaou ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977514.8686628-1899-192896443404508/AnsiballZ_lineinfile.py'
Nov 24 09:45:15 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:45:15.133 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:45:15 compute-1 sudo[213064]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:45:15 compute-1 python3.9[213066]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=nvme-fabrics  mode=0644 state=present path=/etc/modules encoding=utf-8 backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:45:15 compute-1 sudo[213064]: pam_unix(sudo:session): session closed for user root
Nov 24 09:45:15 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 09:45:15 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:45:15 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:45:15 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:45:15 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:45:15.711 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:45:15 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:45:15 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3d4003fb0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:45:15 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:45:15 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3d4003fb0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:45:15 compute-1 sudo[213217]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vrakbwwgnzpfhyfywpmuqjlwzfymmyfb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977515.6356652-1923-184104068839973/AnsiballZ_systemd.py'
Nov 24 09:45:15 compute-1 sudo[213217]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:45:16 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:45:16 compute-1 python3.9[213219]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 24 09:45:16 compute-1 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Nov 24 09:45:16 compute-1 systemd[1]: Stopped Load Kernel Modules.
Nov 24 09:45:16 compute-1 systemd[1]: Stopping Load Kernel Modules...
Nov 24 09:45:16 compute-1 systemd[1]: Starting Load Kernel Modules...
Nov 24 09:45:16 compute-1 systemd[1]: Finished Load Kernel Modules.
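[annotation] The modprobe, modules-load.d, and /etc/modules tasks above are the standard recipe for loading a kernel module immediately and persisting it across boots; restarting systemd-modules-load.service then confirms the new drop-in parses. A manual equivalent (module and file names from the log; the idempotency guard on /etc/modules is assumed):

    # Load nvme-fabrics now and on every boot, as the tasks above do.
    modprobe nvme-fabrics
    install -d -m 0755 /etc/modules-load.d
    echo nvme-fabrics > /etc/modules-load.d/nvme-fabrics.conf
    grep -qx nvme-fabrics /etc/modules || echo nvme-fabrics >> /etc/modules
    systemctl restart systemd-modules-load.service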
Nov 24 09:45:16 compute-1 sudo[213217]: pam_unix(sudo:session): session closed for user root
Nov 24 09:45:16 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:45:16 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3e8002690 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:45:17 compute-1 sudo[213385]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rggxrirglmuigujbhglusrmhwwfracpw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977516.7519543-1947-80562692763426/AnsiballZ_dnf.py'
Nov 24 09:45:17 compute-1 sudo[213385]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:45:17 compute-1 ceph-mon[80009]: pgmap v541: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 938 B/s wr, 3 op/s
Nov 24 09:45:17 compute-1 podman[213347]: 2025-11-24 09:45:17.096322113 +0000 UTC m=+0.098928395 container health_status c4a7fece2a8abbd773f184380ebf0443df5298e1c3e7a580495db5c46749acd2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS)
Nov 24 09:45:17 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:45:17 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:45:17 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:45:17.135 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:45:17 compute-1 python3.9[213391]: ansible-ansible.legacy.dnf Invoked with name=['nvme-cli'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 24 09:45:17 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:45:17 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:45:17 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:45:17.715 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:45:17 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:45:17 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3ec004590 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:45:17 compute-1 kernel: ganesha.nfsd[202979]: segfault at 50 ip 00007fe4b65f232e sp 00007fe46bffe210 error 4 in libntirpc.so.5.8[7fe4b65d7000+2c000] likely on CPU 6 (core 0, socket 6)
Nov 24 09:45:17 compute-1 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Nov 24 09:45:17 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:45:17 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3ec004590 fd 38 proxy ignored for local
Nov 24 09:45:17 compute-1 systemd[1]: Started Process Core Dump (PID 213402/UID 0).
Nov 24 09:45:18 compute-1 systemd-coredump[213403]: Process 188320 (ganesha.nfsd) of user 0 dumped core.
                                                    
                                                    Stack trace of thread 60:
                                                    #0  0x00007fe4b65f232e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)
                                                    ELF object binary architecture: AMD x86-64
Nov 24 09:45:18 compute-1 systemd[1]: systemd-coredump@7-213402-0.service: Deactivated successfully.
Nov 24 09:45:18 compute-1 systemd[1]: systemd-coredump@7-213402-0.service: Consumed 1.034s CPU time.
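[annotation] The journal records only a single unresolved frame inside libntirpc.so.5.8 for the ganesha.nfsd crash above; the full core was captured by systemd-coredump. Standard coredumpctl usage for following up (the availability of matching debuginfo packages for symbol names is an assumption):

    # Inspect the ganesha.nfsd core captured above (PID 188320).
    coredumpctl list ganesha.nfsd    # locate the dump
    coredumpctl info 188320          # metadata plus the recorded backtrace
    coredumpctl debug 188320         # open the core in gdb for a full trace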
Nov 24 09:45:19 compute-1 podman[213412]: 2025-11-24 09:45:19.018699609 +0000 UTC m=+0.024638870 container died 5b49fd5439277bce674ba48f12338b0bfe0b639e80f257cc26609ecccc449494 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 09:45:19 compute-1 systemd[1]: var-lib-containers-storage-overlay-09197f8c744ecf42ac0ecb6298ca36537bb06006ed0f17d3b474315923d9a2aa-merged.mount: Deactivated successfully.
Nov 24 09:45:19 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:45:19 compute-1 podman[213412]: 2025-11-24 09:45:19.056330978 +0000 UTC m=+0.062270219 container remove 5b49fd5439277bce674ba48f12338b0bfe0b639e80f257cc26609ecccc449494 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 09:45:19 compute-1 systemd[1]: ceph-84a084c3-61a7-5de7-8207-1f88efa59a64@nfs.cephfs.0.0.compute-1.vvoanr.service: Main process exited, code=exited, status=139/n/a
Nov 24 09:45:19 compute-1 ceph-mon[80009]: pgmap v542: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 938 B/s wr, 3 op/s
Nov 24 09:45:19 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:45:19 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:45:19 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:45:19.137 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:45:19 compute-1 systemd[1]: ceph-84a084c3-61a7-5de7-8207-1f88efa59a64@nfs.cephfs.0.0.compute-1.vvoanr.service: Failed with result 'exit-code'.
Nov 24 09:45:19 compute-1 systemd[1]: ceph-84a084c3-61a7-5de7-8207-1f88efa59a64@nfs.cephfs.0.0.compute-1.vvoanr.service: Consumed 1.461s CPU time.
Nov 24 09:45:19 compute-1 systemd[1]: Reloading.
Nov 24 09:45:19 compute-1 systemd-rc-local-generator[213483]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 09:45:19 compute-1 systemd-sysv-generator[213486]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 09:45:19 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:45:19 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:45:19 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:45:19.718 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:45:19 compute-1 systemd[1]: Reloading.
Nov 24 09:45:19 compute-1 systemd-rc-local-generator[213519]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 09:45:19 compute-1 systemd-sysv-generator[213522]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 09:45:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:45:20.045 142336 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 09:45:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:45:20.046 142336 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 09:45:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:45:20.046 142336 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 09:45:20 compute-1 systemd-logind[823]: Watching system buttons on /dev/input/event0 (Power Button)
Nov 24 09:45:20 compute-1 systemd-logind[823]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Nov 24 09:45:20 compute-1 lvm[213565]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 24 09:45:20 compute-1 lvm[213565]: VG ceph_vg0 finished
Nov 24 09:45:20 compute-1 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 24 09:45:20 compute-1 systemd[1]: Starting man-db-cache-update.service...
Nov 24 09:45:20 compute-1 systemd[1]: Reloading.
Nov 24 09:45:20 compute-1 systemd-rc-local-generator[213616]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 09:45:20 compute-1 systemd-sysv-generator[213621]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 09:45:20 compute-1 systemd[1]: Queuing reload/restart jobs for marked units…
Nov 24 09:45:21 compute-1 ceph-mon[80009]: pgmap v543: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Nov 24 09:45:21 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:45:21 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:45:21 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:45:21.139 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:45:21 compute-1 sudo[213385]: pam_unix(sudo:session): session closed for user root
Nov 24 09:45:21 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:45:21 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:45:21 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:45:21.721 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:45:21 compute-1 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 24 09:45:21 compute-1 systemd[1]: Finished man-db-cache-update.service.
Nov 24 09:45:21 compute-1 systemd[1]: man-db-cache-update.service: Consumed 1.545s CPU time.
Nov 24 09:45:21 compute-1 systemd[1]: run-r91a6f304bc3f476eb1b8cd9da6dd4fc4.service: Deactivated successfully.
Nov 24 09:45:22 compute-1 sudo[214905]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ezpgstwxtpgwlxtuuunhcogqdxewfvax ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977522.3519857-1971-94873421516926/AnsiballZ_systemd_service.py'
Nov 24 09:45:22 compute-1 sudo[214905]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:45:22 compute-1 python3.9[214907]: ansible-ansible.builtin.systemd_service Invoked with name=iscsid state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 24 09:45:22 compute-1 systemd[1]: Stopping Open-iSCSI...
Nov 24 09:45:22 compute-1 iscsid[202946]: iscsid shutting down.
Nov 24 09:45:22 compute-1 systemd[1]: iscsid.service: Deactivated successfully.
Nov 24 09:45:22 compute-1 systemd[1]: Stopped Open-iSCSI.
Nov 24 09:45:22 compute-1 systemd[1]: One time configuration for iscsi.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/iscsi/initiatorname.iscsi).
Nov 24 09:45:22 compute-1 systemd[1]: Starting Open-iSCSI...
Nov 24 09:45:22 compute-1 systemd[1]: Started Open-iSCSI.
Nov 24 09:45:23 compute-1 sudo[214905]: pam_unix(sudo:session): session closed for user root
Nov 24 09:45:23 compute-1 ceph-mon[80009]: pgmap v544: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 85 B/s wr, 0 op/s
Nov 24 09:45:23 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:45:23 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:45:23 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:45:23.142 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:45:23 compute-1 sudo[214936]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 09:45:23 compute-1 sudo[214936]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:45:23 compute-1 sudo[214936]: pam_unix(sudo:session): session closed for user root
Nov 24 09:45:23 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:45:23 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:45:23 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:45:23.725 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:45:23 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-haproxy-nfs-cephfs-compute-1-rsdpvy[85975]: [WARNING] 327/094523 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Nov 24 09:45:23 compute-1 python3.9[215086]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 24 09:45:24 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:45:24 compute-1 sudo[215241]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-utjnacckivqilwyueindwiqznyyqjrns ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977524.7056792-2023-11994649852492/AnsiballZ_file.py'
Nov 24 09:45:24 compute-1 sudo[215241]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:45:25 compute-1 python3.9[215243]: ansible-ansible.builtin.file Invoked with mode=0644 path=/etc/ssh/ssh_known_hosts state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:45:25 compute-1 ceph-mon[80009]: pgmap v545: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Nov 24 09:45:25 compute-1 sudo[215241]: pam_unix(sudo:session): session closed for user root
Nov 24 09:45:25 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:45:25 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:45:25 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:45:25.144 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:45:25 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:45:25 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:45:25 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:45:25.728 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:45:26 compute-1 sudo[215394]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fosgbqdowjaqqzwgaralgpiuenvckxxo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977525.8574593-2056-249185265755357/AnsiballZ_systemd_service.py'
Nov 24 09:45:26 compute-1 sudo[215394]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:45:26 compute-1 python3.9[215396]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 24 09:45:26 compute-1 systemd[1]: Reloading.
Nov 24 09:45:26 compute-1 podman[215398]: 2025-11-24 09:45:26.562152225 +0000 UTC m=+0.055644914 container health_status 6794d9d2f16f865643977633ea2bfec0506af6c4f15262c278b71a4c0754ea0b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Nov 24 09:45:26 compute-1 systemd-rc-local-generator[215441]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 09:45:26 compute-1 systemd-sysv-generator[215444]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 09:45:26 compute-1 sudo[215394]: pam_unix(sudo:session): session closed for user root
Nov 24 09:45:27 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:45:27 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:45:27 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:45:27.146 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:45:27 compute-1 ceph-mon[80009]: pgmap v546: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Nov 24 09:45:27 compute-1 python3.9[215600]: ansible-ansible.builtin.service_facts Invoked
Nov 24 09:45:27 compute-1 network[215617]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 24 09:45:27 compute-1 network[215618]: 'network-scripts' will be removed from distribution in near future.
Nov 24 09:45:27 compute-1 network[215619]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 24 09:45:27 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:45:27 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:45:27 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:45:27.731 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:45:29 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:45:29 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:45:29 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:45:29 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:45:29.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:45:29 compute-1 ceph-mon[80009]: pgmap v547: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Nov 24 09:45:29 compute-1 systemd[1]: ceph-84a084c3-61a7-5de7-8207-1f88efa59a64@nfs.cephfs.0.0.compute-1.vvoanr.service: Scheduled restart job, restart counter is at 8.
Nov 24 09:45:29 compute-1 systemd[1]: Stopped Ceph nfs.cephfs.0.0.compute-1.vvoanr for 84a084c3-61a7-5de7-8207-1f88efa59a64.
Nov 24 09:45:29 compute-1 systemd[1]: ceph-84a084c3-61a7-5de7-8207-1f88efa59a64@nfs.cephfs.0.0.compute-1.vvoanr.service: Consumed 1.461s CPU time.
Nov 24 09:45:29 compute-1 systemd[1]: Starting Ceph nfs.cephfs.0.0.compute-1.vvoanr for 84a084c3-61a7-5de7-8207-1f88efa59a64...
Nov 24 09:45:29 compute-1 podman[215726]: 2025-11-24 09:45:29.448807975 +0000 UTC m=+0.037989509 container create 7eaa88e9799040652a32b02563e55848c945b8ab0e74738ac38765c3cd6db8d2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 24 09:45:29 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f61d1d105d001d2c150063a1844050105200d289a25968831de4a983fc687f8f/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Nov 24 09:45:29 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f61d1d105d001d2c150063a1844050105200d289a25968831de4a983fc687f8f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 09:45:29 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f61d1d105d001d2c150063a1844050105200d289a25968831de4a983fc687f8f/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 09:45:29 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f61d1d105d001d2c150063a1844050105200d289a25968831de4a983fc687f8f/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.0.0.compute-1.vvoanr-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 09:45:29 compute-1 podman[215726]: 2025-11-24 09:45:29.508011877 +0000 UTC m=+0.097193441 container init 7eaa88e9799040652a32b02563e55848c945b8ab0e74738ac38765c3cd6db8d2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, OSD_FLAVOR=default)
Nov 24 09:45:29 compute-1 podman[215726]: 2025-11-24 09:45:29.514794935 +0000 UTC m=+0.103976479 container start 7eaa88e9799040652a32b02563e55848c945b8ab0e74738ac38765c3cd6db8d2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 09:45:29 compute-1 bash[215726]: 7eaa88e9799040652a32b02563e55848c945b8ab0e74738ac38765c3cd6db8d2
Nov 24 09:45:29 compute-1 podman[215726]: 2025-11-24 09:45:29.43238684 +0000 UTC m=+0.021568384 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 09:45:29 compute-1 systemd[1]: Started Ceph nfs.cephfs.0.0.compute-1.vvoanr for 84a084c3-61a7-5de7-8207-1f88efa59a64.
Nov 24 09:45:29 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[215745]: 24/11/2025 09:45:29 : epoch 69242939 : compute-1 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Nov 24 09:45:29 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[215745]: 24/11/2025 09:45:29 : epoch 69242939 : compute-1 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Nov 24 09:45:29 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[215745]: 24/11/2025 09:45:29 : epoch 69242939 : compute-1 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Nov 24 09:45:29 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[215745]: 24/11/2025 09:45:29 : epoch 69242939 : compute-1 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Nov 24 09:45:29 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[215745]: 24/11/2025 09:45:29 : epoch 69242939 : compute-1 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Nov 24 09:45:29 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[215745]: 24/11/2025 09:45:29 : epoch 69242939 : compute-1 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Nov 24 09:45:29 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[215745]: 24/11/2025 09:45:29 : epoch 69242939 : compute-1 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Nov 24 09:45:29 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[215745]: 24/11/2025 09:45:29 : epoch 69242939 : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 09:45:29 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:45:29 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:45:29 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:45:29.734 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:45:30 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 09:45:30 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:45:31 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:45:31 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:45:31 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:45:31.150 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:45:31 compute-1 ceph-mon[80009]: pgmap v548: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Nov 24 09:45:31 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:45:31 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:45:31 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:45:31 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:45:31.737 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:45:33 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:45:33 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:45:33 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:45:33.153 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:45:33 compute-1 ceph-mon[80009]: pgmap v549: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 85 B/s wr, 0 op/s
Nov 24 09:45:33 compute-1 sudo[215995]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hvwolzrkjoktfaofsxvmjrntfzktymdw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977533.2579067-2113-50728053741630/AnsiballZ_systemd_service.py'
Nov 24 09:45:33 compute-1 sudo[215995]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:45:33 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:45:33 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:45:33 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:45:33.740 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:45:33 compute-1 python3.9[215997]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_compute.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 24 09:45:33 compute-1 sudo[215995]: pam_unix(sudo:session): session closed for user root
Nov 24 09:45:34 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:45:34 compute-1 sudo[216149]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ilyzguetektakwcmqoqvyuengcotcafq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977533.9663963-2113-273523454410818/AnsiballZ_systemd_service.py'
Nov 24 09:45:34 compute-1 sudo[216149]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:45:34 compute-1 python3.9[216151]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_migration_target.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 24 09:45:34 compute-1 sudo[216149]: pam_unix(sudo:session): session closed for user root
Nov 24 09:45:34 compute-1 sudo[216302]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vqrluxonfzdsnoxqblazdowsjrbuyijz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977534.6797516-2113-138141712484935/AnsiballZ_systemd_service.py'
Nov 24 09:45:34 compute-1 sudo[216302]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:45:35 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:45:35 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:45:35 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:45:35.155 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:45:35 compute-1 ceph-mon[80009]: pgmap v550: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Nov 24 09:45:35 compute-1 python3.9[216304]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api_cron.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 24 09:45:35 compute-1 sudo[216302]: pam_unix(sudo:session): session closed for user root
Nov 24 09:45:35 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[215745]: 24/11/2025 09:45:35 : epoch 69242939 : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 09:45:35 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[215745]: 24/11/2025 09:45:35 : epoch 69242939 : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 09:45:35 compute-1 sudo[216455]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-njvgrmopkydvyrhtzzbybjabmdsdrsub ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977535.4578702-2113-143945098493444/AnsiballZ_systemd_service.py'
Nov 24 09:45:35 compute-1 sudo[216455]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:45:35 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:45:35 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:45:35 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:45:35.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:45:36 compute-1 python3.9[216457]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 24 09:45:36 compute-1 sudo[216455]: pam_unix(sudo:session): session closed for user root
Nov 24 09:45:36 compute-1 sudo[216609]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vjuymvylnjxgdldncmlveimysmlkhmjw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977536.1539726-2113-70850569653135/AnsiballZ_systemd_service.py'
Nov 24 09:45:36 compute-1 sudo[216609]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:45:36 compute-1 python3.9[216611]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_conductor.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 24 09:45:36 compute-1 sudo[216609]: pam_unix(sudo:session): session closed for user root
Nov 24 09:45:37 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:45:37 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:45:37 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:45:37.159 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:45:37 compute-1 sudo[216762]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hqticqmdqbtpncvokoljbsywnaunnwzb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977536.921375-2113-171728739324505/AnsiballZ_systemd_service.py'
Nov 24 09:45:37 compute-1 sudo[216762]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:45:37 compute-1 ceph-mon[80009]: pgmap v551: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Nov 24 09:45:37 compute-1 python3.9[216764]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_metadata.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 24 09:45:37 compute-1 sudo[216762]: pam_unix(sudo:session): session closed for user root
Nov 24 09:45:37 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:45:37 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:45:37 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:45:37.747 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:45:37 compute-1 sudo[216916]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pmjtkwhdpooxuhypsqozktlborudatgg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977537.6517286-2113-126282925906645/AnsiballZ_systemd_service.py'
Nov 24 09:45:37 compute-1 sudo[216916]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:45:38 compute-1 python3.9[216918]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_scheduler.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 24 09:45:38 compute-1 sudo[216916]: pam_unix(sudo:session): session closed for user root
Nov 24 09:45:38 compute-1 sudo[217069]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ojrzdscgnrbjumohqmbeazvfhuddsase ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977538.361701-2113-2257858216029/AnsiballZ_systemd_service.py'
Nov 24 09:45:38 compute-1 sudo[217069]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:45:38 compute-1 python3.9[217071]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_vnc_proxy.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 24 09:45:38 compute-1 sudo[217069]: pam_unix(sudo:session): session closed for user root
Nov 24 09:45:39 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:45:39 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:45:39 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:45:39 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:45:39.161 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:45:39 compute-1 ceph-mon[80009]: pgmap v552: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 938 B/s wr, 3 op/s
Nov 24 09:45:39 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:45:39 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:45:39 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:45:39.749 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:45:40 compute-1 podman[217098]: 2025-11-24 09:45:40.318897279 +0000 UTC m=+0.053744419 container health_status 16798e42088c21440960ac0ba4f86339f9edab5c078277a1dcf4fb01c220329c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118)
Nov 24 09:45:41 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:45:41 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:45:41 compute-1 sudo[217244]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lbyukccjuzhinpmlonohqxvhoggdrjgj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977540.8777442-2291-255881826645664/AnsiballZ_file.py'
Nov 24 09:45:41 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:45:41.163 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:45:41 compute-1 sudo[217244]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:45:41 compute-1 ceph-mon[80009]: pgmap v553: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 938 B/s wr, 3 op/s
Nov 24 09:45:41 compute-1 python3.9[217246]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:45:41 compute-1 sudo[217244]: pam_unix(sudo:session): session closed for user root
Nov 24 09:45:41 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[215745]: 24/11/2025 09:45:41 : epoch 69242939 : compute-1 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Nov 24 09:45:41 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[215745]: 24/11/2025 09:45:41 : epoch 69242939 : compute-1 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Nov 24 09:45:41 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[215745]: 24/11/2025 09:45:41 : epoch 69242939 : compute-1 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Nov 24 09:45:41 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[215745]: 24/11/2025 09:45:41 : epoch 69242939 : compute-1 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Nov 24 09:45:41 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[215745]: 24/11/2025 09:45:41 : epoch 69242939 : compute-1 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Nov 24 09:45:41 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[215745]: 24/11/2025 09:45:41 : epoch 69242939 : compute-1 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Nov 24 09:45:41 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[215745]: 24/11/2025 09:45:41 : epoch 69242939 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Nov 24 09:45:41 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[215745]: 24/11/2025 09:45:41 : epoch 69242939 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 24 09:45:41 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[215745]: 24/11/2025 09:45:41 : epoch 69242939 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 24 09:45:41 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[215745]: 24/11/2025 09:45:41 : epoch 69242939 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 24 09:45:41 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[215745]: 24/11/2025 09:45:41 : epoch 69242939 : compute-1 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Nov 24 09:45:41 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[215745]: 24/11/2025 09:45:41 : epoch 69242939 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 24 09:45:41 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[215745]: 24/11/2025 09:45:41 : epoch 69242939 : compute-1 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Nov 24 09:45:41 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[215745]: 24/11/2025 09:45:41 : epoch 69242939 : compute-1 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Nov 24 09:45:41 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[215745]: 24/11/2025 09:45:41 : epoch 69242939 : compute-1 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Nov 24 09:45:41 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[215745]: 24/11/2025 09:45:41 : epoch 69242939 : compute-1 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Nov 24 09:45:41 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[215745]: 24/11/2025 09:45:41 : epoch 69242939 : compute-1 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Nov 24 09:45:41 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[215745]: 24/11/2025 09:45:41 : epoch 69242939 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Nov 24 09:45:41 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[215745]: 24/11/2025 09:45:41 : epoch 69242939 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Nov 24 09:45:41 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[215745]: 24/11/2025 09:45:41 : epoch 69242939 : compute-1 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Nov 24 09:45:41 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[215745]: 24/11/2025 09:45:41 : epoch 69242939 : compute-1 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Nov 24 09:45:41 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[215745]: 24/11/2025 09:45:41 : epoch 69242939 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Nov 24 09:45:41 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[215745]: 24/11/2025 09:45:41 : epoch 69242939 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Nov 24 09:45:41 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[215745]: 24/11/2025 09:45:41 : epoch 69242939 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Nov 24 09:45:41 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[215745]: 24/11/2025 09:45:41 : epoch 69242939 : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Nov 24 09:45:41 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[215745]: 24/11/2025 09:45:41 : epoch 69242939 : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Nov 24 09:45:41 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[215745]: 24/11/2025 09:45:41 : epoch 69242939 : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Nov 24 09:45:41 compute-1 sudo[217408]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sphidtpbmwgznokushjesrctnkvscnym ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977541.4794273-2291-106891229010479/AnsiballZ_file.py'
Nov 24 09:45:41 compute-1 sudo[217408]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:45:41 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:45:41 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:45:41 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:45:41.752 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:45:41 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[215745]: 24/11/2025 09:45:41 : epoch 69242939 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5c74000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:45:41 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[215745]: 24/11/2025 09:45:41 : epoch 69242939 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5c680014d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:45:41 compute-1 python3.9[217410]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:45:41 compute-1 sudo[217408]: pam_unix(sudo:session): session closed for user root
Nov 24 09:45:42 compute-1 sudo[217564]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ewbinbvywptmzhjtzzlcxiouccjxsxfz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977542.0574126-2291-268622892170101/AnsiballZ_file.py'
Nov 24 09:45:42 compute-1 sudo[217564]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:45:42 compute-1 python3.9[217566]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:45:42 compute-1 sudo[217564]: pam_unix(sudo:session): session closed for user root
Nov 24 09:45:42 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[215745]: 24/11/2025 09:45:42 : epoch 69242939 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5c50000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:45:42 compute-1 sudo[217716]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ekqnualfzvipfpmgmpnlmpiviuvtgxnh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977542.6136515-2291-89742349212148/AnsiballZ_file.py'
Nov 24 09:45:42 compute-1 sudo[217716]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:45:43 compute-1 python3.9[217718]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:45:43 compute-1 sudo[217716]: pam_unix(sudo:session): session closed for user root
Nov 24 09:45:43 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:45:43 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:45:43 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:45:43.165 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:45:43 compute-1 sudo[217844]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 09:45:43 compute-1 sudo[217891]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-txjqupdwlvklzntplxprybthyzdjclpa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977543.1648438-2291-140136011015900/AnsiballZ_file.py'
Nov 24 09:45:43 compute-1 sudo[217844]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:45:43 compute-1 sudo[217891]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:45:43 compute-1 sudo[217844]: pam_unix(sudo:session): session closed for user root
Nov 24 09:45:43 compute-1 ceph-mon[80009]: pgmap v554: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 1023 B/s wr, 3 op/s
Nov 24 09:45:43 compute-1 python3.9[217895]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:45:43 compute-1 sudo[217891]: pam_unix(sudo:session): session closed for user root
Nov 24 09:45:43 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:45:43 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:45:43 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:45:43.755 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:45:43 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[215745]: 24/11/2025 09:45:43 : epoch 69242939 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5c48000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:45:43 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-haproxy-nfs-cephfs-compute-1-rsdpvy[85975]: [WARNING] 327/094543 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Nov 24 09:45:43 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[215745]: 24/11/2025 09:45:43 : epoch 69242939 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5c54000fa0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:45:43 compute-1 sudo[218046]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lhocyeljnlfflnxoqjnegtieawuellhj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977543.7409382-2291-252513688180445/AnsiballZ_file.py'
Nov 24 09:45:43 compute-1 sudo[218046]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:45:44 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:45:44 compute-1 python3.9[218048]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:45:44 compute-1 sudo[218046]: pam_unix(sudo:session): session closed for user root
Nov 24 09:45:44 compute-1 ceph-mon[80009]: pgmap v555: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 938 B/s wr, 2 op/s
Nov 24 09:45:44 compute-1 sudo[218198]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kuhvtxfcipiilsdwwtqwiguvouzbkiei ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977544.3000648-2291-251209598417775/AnsiballZ_file.py'
Nov 24 09:45:44 compute-1 sudo[218198]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:45:44 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[215745]: 24/11/2025 09:45:44 : epoch 69242939 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5c680021d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:45:44 compute-1 python3.9[218200]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:45:44 compute-1 sudo[218198]: pam_unix(sudo:session): session closed for user root
Nov 24 09:45:45 compute-1 sudo[218350]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nlkmplsbvshmxqmqqkutkitwgmyvassh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977544.8574302-2291-204141334892996/AnsiballZ_file.py'
Nov 24 09:45:45 compute-1 sudo[218350]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:45:45 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:45:45 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:45:45 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:45:45.167 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:45:45 compute-1 python3.9[218352]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:45:45 compute-1 sudo[218350]: pam_unix(sudo:session): session closed for user root
Nov 24 09:45:45 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 09:45:45 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:45:45 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:45:45 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:45:45 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:45:45 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:45:45.758 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:45:45 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[215745]: 24/11/2025 09:45:45 : epoch 69242939 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5c500016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:45:45 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[215745]: 24/11/2025 09:45:45 : epoch 69242939 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5c480016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:45:46 compute-1 ceph-mon[80009]: pgmap v556: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 938 B/s wr, 2 op/s
Nov 24 09:45:46 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[215745]: 24/11/2025 09:45:46 : epoch 69242939 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5c54001ac0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:45:47 compute-1 sudo[218503]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-loteasnfixovhuocfjbdfecgrxnifobv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977546.8619237-2461-238930303422614/AnsiballZ_file.py'
Nov 24 09:45:47 compute-1 sudo[218503]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:45:47 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:45:47 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:45:47 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:45:47.169 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:45:47 compute-1 podman[218505]: 2025-11-24 09:45:47.245568213 +0000 UTC m=+0.094584957 container health_status c4a7fece2a8abbd773f184380ebf0443df5298e1c3e7a580495db5c46749acd2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_controller, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 09:45:47 compute-1 python3.9[218506]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:45:47 compute-1 sudo[218503]: pam_unix(sudo:session): session closed for user root
Nov 24 09:45:47 compute-1 sudo[218682]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-poazhelqeulehnegkglfjjaxcbhukkca ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977547.5065176-2461-37148727819150/AnsiballZ_file.py'
Nov 24 09:45:47 compute-1 sudo[218682]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:45:47 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:45:47 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:45:47 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:45:47.761 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:45:47 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[215745]: 24/11/2025 09:45:47 : epoch 69242939 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5c680021d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:45:47 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[215745]: 24/11/2025 09:45:47 : epoch 69242939 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5c500016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:45:47 compute-1 python3.9[218684]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:45:47 compute-1 sudo[218682]: pam_unix(sudo:session): session closed for user root
Nov 24 09:45:48 compute-1 sudo[218835]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fcllhysyysyjyaajatfukdfjysuufmhi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977548.0992224-2461-238015155645976/AnsiballZ_file.py'
Nov 24 09:45:48 compute-1 sudo[218835]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:45:48 compute-1 python3.9[218837]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:45:48 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[215745]: 24/11/2025 09:45:48 : epoch 69242939 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5c500016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:45:48 compute-1 sudo[218835]: pam_unix(sudo:session): session closed for user root
Nov 24 09:45:48 compute-1 ceph-mon[80009]: pgmap v557: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 938 B/s wr, 2 op/s
Nov 24 09:45:48 compute-1 sudo[218987]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bmqjmswdywjqcuhvstknkaoxpleavepb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977548.682769-2461-73694293286655/AnsiballZ_file.py'
Nov 24 09:45:48 compute-1 sudo[218987]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:45:49 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:45:49 compute-1 python3.9[218989]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:45:49 compute-1 sudo[218987]: pam_unix(sudo:session): session closed for user root
Nov 24 09:45:49 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:45:49 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:45:49 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:45:49.171 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:45:49 compute-1 sudo[219139]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iwduvngkxjuuqhpctqpbrxukvfsislre ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977549.2688327-2461-241577556961495/AnsiballZ_file.py'
Nov 24 09:45:49 compute-1 sudo[219139]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:45:49 compute-1 python3.9[219141]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:45:49 compute-1 sudo[219139]: pam_unix(sudo:session): session closed for user root
Nov 24 09:45:49 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:45:49 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:45:49 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:45:49.763 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:45:49 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[215745]: 24/11/2025 09:45:49 : epoch 69242939 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5c500016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:45:49 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[215745]: 24/11/2025 09:45:49 : epoch 69242939 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5c54001ac0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:45:50 compute-1 sudo[219292]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xdlwjsceuqygiscypxwzmvrptvsrlagh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977549.830154-2461-185979869194327/AnsiballZ_file.py'
Nov 24 09:45:50 compute-1 sudo[219292]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:45:50 compute-1 python3.9[219294]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:45:50 compute-1 sudo[219292]: pam_unix(sudo:session): session closed for user root
Nov 24 09:45:50 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[215745]: 24/11/2025 09:45:50 : epoch 69242939 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5c68002ee0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:45:50 compute-1 sudo[219444]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ikzhjgmdinifekqtnrmygtqjajasaddn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977550.4612863-2461-110884701588298/AnsiballZ_file.py'
Nov 24 09:45:50 compute-1 sudo[219444]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:45:50 compute-1 ceph-mon[80009]: pgmap v558: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Nov 24 09:45:50 compute-1 python3.9[219446]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:45:50 compute-1 sudo[219444]: pam_unix(sudo:session): session closed for user root
Nov 24 09:45:51 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:45:51 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:45:51 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:45:51.173 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:45:51 compute-1 sudo[219596]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cuxfibxonapnaqroptxylwwuttlaebct ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977551.0540733-2461-191605424037746/AnsiballZ_file.py'
Nov 24 09:45:51 compute-1 sudo[219596]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:45:51 compute-1 python3.9[219598]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:45:51 compute-1 sudo[219596]: pam_unix(sudo:session): session closed for user root
Nov 24 09:45:51 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:45:51 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:45:51 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:45:51.767 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:45:51 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[215745]: 24/11/2025 09:45:51 : epoch 69242939 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5c48001fc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:45:51 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[215745]: 24/11/2025 09:45:51 : epoch 69242939 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5c500016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:45:52 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[215745]: 24/11/2025 09:45:52 : epoch 69242939 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5c54001ac0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:45:52 compute-1 ceph-mon[80009]: pgmap v559: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 85 B/s wr, 0 op/s
Nov 24 09:45:52 compute-1 sudo[219749]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xzhluhomtbbilwefwtnlgszillpymshb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977552.5795257-2635-132110228493097/AnsiballZ_command.py'
Nov 24 09:45:52 compute-1 sudo[219749]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:45:53 compute-1 python3.9[219751]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then
                                               systemctl disable --now certmonger.service
                                               test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service
                                             fi
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 09:45:53 compute-1 sudo[219749]: pam_unix(sudo:session): session closed for user root
Nov 24 09:45:53 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:45:53 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:45:53 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:45:53.175 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:45:53 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:45:53 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:45:53 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:45:53.769 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:45:53 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[215745]: 24/11/2025 09:45:53 : epoch 69242939 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5c68002ee0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:45:53 compute-1 kernel: ganesha.nfsd[217413]: segfault at 50 ip 00007f5d1cff032e sp 00007f5cd17f9210 error 4 in libntirpc.so.5.8[7f5d1cfd5000+2c000] likely on CPU 1 (core 0, socket 1)
Nov 24 09:45:53 compute-1 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Nov 24 09:45:53 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[215745]: 24/11/2025 09:45:53 : epoch 69242939 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5c48001fc0 fd 38 proxy ignored for local
Nov 24 09:45:53 compute-1 python3.9[219903]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Nov 24 09:45:53 compute-1 systemd[1]: Started Process Core Dump (PID 219904/UID 0).
Nov 24 09:45:54 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:45:54 compute-1 sudo[220056]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ybgstimdemaidonqonshinciiwqmckud ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977554.3055484-2689-72181855889883/AnsiballZ_systemd_service.py'
Nov 24 09:45:54 compute-1 sudo[220056]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:45:54 compute-1 ceph-mon[80009]: pgmap v560: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Nov 24 09:45:54 compute-1 python3.9[220058]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 24 09:45:54 compute-1 systemd[1]: Reloading.
Nov 24 09:45:54 compute-1 systemd-rc-local-generator[220087]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 09:45:54 compute-1 systemd-sysv-generator[220091]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 09:45:55 compute-1 systemd-coredump[219906]: Process 215750 (ganesha.nfsd) of user 0 dumped core.
                                                    
                                                    Stack trace of thread 54:
                                                    #0  0x00007f5d1cff032e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)
                                                    ELF object binary architecture: AMD x86-64
Nov 24 09:45:55 compute-1 systemd[1]: systemd-coredump@8-219904-0.service: Deactivated successfully.
Nov 24 09:45:55 compute-1 systemd[1]: systemd-coredump@8-219904-0.service: Consumed 1.192s CPU time.
Nov 24 09:45:55 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:45:55 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:45:55 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:45:55.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:45:55 compute-1 podman[220097]: 2025-11-24 09:45:55.196727137 +0000 UTC m=+0.033202332 container died 7eaa88e9799040652a32b02563e55848c945b8ab0e74738ac38765c3cd6db8d2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325)
Nov 24 09:45:55 compute-1 sudo[220056]: pam_unix(sudo:session): session closed for user root
Nov 24 09:45:55 compute-1 systemd[1]: var-lib-containers-storage-overlay-f61d1d105d001d2c150063a1844050105200d289a25968831de4a983fc687f8f-merged.mount: Deactivated successfully.
Nov 24 09:45:55 compute-1 podman[220097]: 2025-11-24 09:45:55.234678036 +0000 UTC m=+0.071153221 container remove 7eaa88e9799040652a32b02563e55848c945b8ab0e74738ac38765c3cd6db8d2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS)
Nov 24 09:45:55 compute-1 systemd[1]: ceph-84a084c3-61a7-5de7-8207-1f88efa59a64@nfs.cephfs.0.0.compute-1.vvoanr.service: Main process exited, code=exited, status=139/n/a
Nov 24 09:45:55 compute-1 systemd[1]: ceph-84a084c3-61a7-5de7-8207-1f88efa59a64@nfs.cephfs.0.0.compute-1.vvoanr.service: Failed with result 'exit-code'.
Nov 24 09:45:55 compute-1 systemd[1]: ceph-84a084c3-61a7-5de7-8207-1f88efa59a64@nfs.cephfs.0.0.compute-1.vvoanr.service: Consumed 1.432s CPU time.
Nov 24 09:45:55 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:45:55 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:45:55 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:45:55.773 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:45:55 compute-1 sudo[220293]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nfemkslkzyoeztnleuypfpdcvcmlhvxk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977555.5140183-2713-96153266580470/AnsiballZ_command.py'
Nov 24 09:45:55 compute-1 sudo[220293]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:45:55 compute-1 python3.9[220295]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_compute.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 09:45:56 compute-1 sudo[220293]: pam_unix(sudo:session): session closed for user root
Nov 24 09:45:56 compute-1 sudo[220447]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uphnwlneptkyuzcqabkiottajwwriazv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977556.156676-2713-29007238264630/AnsiballZ_command.py'
Nov 24 09:45:56 compute-1 sudo[220447]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:45:56 compute-1 python3.9[220449]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_migration_target.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 09:45:56 compute-1 sudo[220447]: pam_unix(sudo:session): session closed for user root
Nov 24 09:45:56 compute-1 ceph-mon[80009]: pgmap v561: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Nov 24 09:45:57 compute-1 sudo[220613]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jlxfakxnzynshoypcbaixjyxdbkgecjz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977556.7455943-2713-84363055705447/AnsiballZ_command.py'
Nov 24 09:45:57 compute-1 sudo[220613]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:45:57 compute-1 podman[220574]: 2025-11-24 09:45:57.04784005 +0000 UTC m=+0.076110745 container health_status 6794d9d2f16f865643977633ea2bfec0506af6c4f15262c278b71a4c0754ea0b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Nov 24 09:45:57 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:45:57 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:45:57 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:45:57.179 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:45:57 compute-1 python3.9[220621]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api_cron.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 09:45:57 compute-1 sudo[220613]: pam_unix(sudo:session): session closed for user root
Nov 24 09:45:57 compute-1 sudo[220772]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vzjeodwuiykssbchzzdptbxvxnrbjsbw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977557.335435-2713-45846153088883/AnsiballZ_command.py'
Nov 24 09:45:57 compute-1 sudo[220772]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:45:57 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:45:57 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:45:57 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:45:57.776 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:45:57 compute-1 python3.9[220774]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 09:45:57 compute-1 sudo[220772]: pam_unix(sudo:session): session closed for user root
Nov 24 09:45:58 compute-1 sudo[220926]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nyhzybnqoajfkkvmfxcdxsabrbscskft ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977557.938415-2713-18713614525862/AnsiballZ_command.py'
Nov 24 09:45:58 compute-1 sudo[220926]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:45:58 compute-1 python3.9[220928]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_conductor.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 09:45:58 compute-1 sudo[220926]: pam_unix(sudo:session): session closed for user root
Nov 24 09:45:58 compute-1 ceph-mon[80009]: pgmap v562: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Nov 24 09:45:59 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:45:59 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:45:59 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 24 09:45:59 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:45:59.181 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 24 09:45:59 compute-1 sudo[221079]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zcybytwbeopurtmhllyscprycacdzjvk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977558.4947822-2713-144062055578370/AnsiballZ_command.py'
Nov 24 09:45:59 compute-1 sudo[221079]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:45:59 compute-1 python3.9[221081]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_metadata.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 09:45:59 compute-1 sudo[221079]: pam_unix(sudo:session): session closed for user root
Nov 24 09:45:59 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:45:59 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:45:59 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:45:59.780 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:45:59 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-haproxy-nfs-cephfs-compute-1-rsdpvy[85975]: [WARNING] 327/094559 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Nov 24 09:45:59 compute-1 sudo[221233]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mqamxoviqciwjiqvqrpfzgjdjjvcapnf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977559.6576362-2713-156099557772828/AnsiballZ_command.py'
Nov 24 09:45:59 compute-1 sudo[221233]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:46:00 compute-1 python3.9[221235]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_scheduler.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 09:46:00 compute-1 sudo[221233]: pam_unix(sudo:session): session closed for user root
Nov 24 09:46:00 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 09:46:00 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:46:00 compute-1 sudo[221386]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kdmlinwxwapzvzhbdwfdeofiuzweyfpq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977560.2538545-2713-72140881970565/AnsiballZ_command.py'
Nov 24 09:46:00 compute-1 sudo[221386]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:46:00 compute-1 python3.9[221388]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_vnc_proxy.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 09:46:00 compute-1 sudo[221386]: pam_unix(sudo:session): session closed for user root
Nov 24 09:46:00 compute-1 ceph-mon[80009]: pgmap v563: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:46:00 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:46:01 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:46:01 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:46:01 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:46:01.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:46:01 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-haproxy-nfs-cephfs-compute-1-rsdpvy[85975]: [WARNING] 327/094601 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Nov 24 09:46:01 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:46:01 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:46:01 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:46:01.782 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:46:02 compute-1 sudo[221540]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kdburqpsgpjshpwrhkxirgnddevjmmim ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977562.2917986-2920-254619929781755/AnsiballZ_file.py'
Nov 24 09:46:02 compute-1 sudo[221540]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:46:02 compute-1 python3.9[221542]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 09:46:02 compute-1 sudo[221540]: pam_unix(sudo:session): session closed for user root
Nov 24 09:46:02 compute-1 sudo[221567]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 09:46:02 compute-1 sudo[221567]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:46:02 compute-1 sudo[221567]: pam_unix(sudo:session): session closed for user root
Nov 24 09:46:02 compute-1 sudo[221615]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Nov 24 09:46:02 compute-1 sudo[221615]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:46:02 compute-1 ceph-mon[80009]: pgmap v564: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Nov 24 09:46:03 compute-1 sudo[221744]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dgvjvayesnnfordspxpvgadnpgwqease ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977562.85542-2920-98994155711973/AnsiballZ_file.py'
Nov 24 09:46:03 compute-1 sudo[221744]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:46:03 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:46:03 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:46:03 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:46:03.185 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:46:03 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 24 09:46:03 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 24 09:46:03 compute-1 python3.9[221753]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/containers setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 09:46:03 compute-1 sudo[221744]: pam_unix(sudo:session): session closed for user root
Nov 24 09:46:03 compute-1 sudo[221615]: pam_unix(sudo:session): session closed for user root
Nov 24 09:46:03 compute-1 sudo[221852]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 09:46:03 compute-1 sudo[221852]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:46:03 compute-1 sudo[221852]: pam_unix(sudo:session): session closed for user root
Nov 24 09:46:03 compute-1 sudo[221950]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ebkidbylpwqwjweutfmszrqikavalpfd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977563.4466336-2920-184842789597777/AnsiballZ_file.py'
Nov 24 09:46:03 compute-1 sudo[221950]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:46:03 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:46:03 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 24 09:46:03 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:46:03.785 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 24 09:46:03 compute-1 python3.9[221952]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova_nvme_cleaner setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 09:46:03 compute-1 sudo[221950]: pam_unix(sudo:session): session closed for user root
Nov 24 09:46:03 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 24 09:46:03 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 09:46:03 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Nov 24 09:46:03 compute-1 ceph-mon[80009]: log_channel(audit) log [INF] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 09:46:03 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Nov 24 09:46:04 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Nov 24 09:46:04 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Nov 24 09:46:04 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 09:46:04 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Nov 24 09:46:04 compute-1 ceph-mon[80009]: log_channel(audit) log [INF] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 09:46:04 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 24 09:46:04 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 09:46:04 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:46:04 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:46:04 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:46:04 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 09:46:04 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 09:46:04 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:46:04 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:46:04 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 09:46:04 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 09:46:04 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 09:46:04 compute-1 sudo[222103]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zgpjqihukgggjlsuokoqgghpfujauikt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977564.5428004-2986-246301190968091/AnsiballZ_file.py'
Nov 24 09:46:04 compute-1 sudo[222103]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:46:04 compute-1 python3.9[222105]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 09:46:04 compute-1 sudo[222103]: pam_unix(sudo:session): session closed for user root
Nov 24 09:46:05 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:46:05 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:46:05 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:46:05.187 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:46:05 compute-1 sudo[222255]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-strigibjsvgndsuavqpjopespjowmawx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977565.1013572-2986-209898131311987/AnsiballZ_file.py'
Nov 24 09:46:05 compute-1 sudo[222255]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:46:05 compute-1 ceph-mon[80009]: pgmap v565: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Nov 24 09:46:05 compute-1 systemd[1]: ceph-84a084c3-61a7-5de7-8207-1f88efa59a64@nfs.cephfs.0.0.compute-1.vvoanr.service: Scheduled restart job, restart counter is at 9.
Nov 24 09:46:05 compute-1 systemd[1]: Stopped Ceph nfs.cephfs.0.0.compute-1.vvoanr for 84a084c3-61a7-5de7-8207-1f88efa59a64.
Nov 24 09:46:05 compute-1 systemd[1]: ceph-84a084c3-61a7-5de7-8207-1f88efa59a64@nfs.cephfs.0.0.compute-1.vvoanr.service: Consumed 1.432s CPU time.
Nov 24 09:46:05 compute-1 systemd[1]: Starting Ceph nfs.cephfs.0.0.compute-1.vvoanr for 84a084c3-61a7-5de7-8207-1f88efa59a64...
Nov 24 09:46:05 compute-1 python3.9[222257]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/_nova_secontext setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 09:46:05 compute-1 sudo[222255]: pam_unix(sudo:session): session closed for user root
Nov 24 09:46:05 compute-1 podman[222329]: 2025-11-24 09:46:05.706458109 +0000 UTC m=+0.039882659 container create 8bf9bf8a7bbfa60aa69b766603edc552d95a53eeb4b60bc9ac8acd9912952fb5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, ceph=True, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 24 09:46:05 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6b2de1769c175f32e837a1c6e5836b2db53c7621b3262e7d6f7d0028b6faa381/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Nov 24 09:46:05 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6b2de1769c175f32e837a1c6e5836b2db53c7621b3262e7d6f7d0028b6faa381/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 09:46:05 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6b2de1769c175f32e837a1c6e5836b2db53c7621b3262e7d6f7d0028b6faa381/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 09:46:05 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6b2de1769c175f32e837a1c6e5836b2db53c7621b3262e7d6f7d0028b6faa381/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.0.0.compute-1.vvoanr-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 09:46:05 compute-1 podman[222329]: 2025-11-24 09:46:05.758395537 +0000 UTC m=+0.091820107 container init 8bf9bf8a7bbfa60aa69b766603edc552d95a53eeb4b60bc9ac8acd9912952fb5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 09:46:05 compute-1 podman[222329]: 2025-11-24 09:46:05.764624673 +0000 UTC m=+0.098049223 container start 8bf9bf8a7bbfa60aa69b766603edc552d95a53eeb4b60bc9ac8acd9912952fb5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, ceph=True, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 24 09:46:05 compute-1 bash[222329]: 8bf9bf8a7bbfa60aa69b766603edc552d95a53eeb4b60bc9ac8acd9912952fb5
Nov 24 09:46:05 compute-1 podman[222329]: 2025-11-24 09:46:05.688764866 +0000 UTC m=+0.022189436 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 09:46:05 compute-1 systemd[1]: Started Ceph nfs.cephfs.0.0.compute-1.vvoanr for 84a084c3-61a7-5de7-8207-1f88efa59a64.
Nov 24 09:46:05 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[222390]: 24/11/2025 09:46:05 : epoch 6924295d : compute-1 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Nov 24 09:46:05 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[222390]: 24/11/2025 09:46:05 : epoch 6924295d : compute-1 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Nov 24 09:46:05 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:46:05 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:46:05 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:46:05.788 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:46:05 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[222390]: 24/11/2025 09:46:05 : epoch 6924295d : compute-1 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Nov 24 09:46:05 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[222390]: 24/11/2025 09:46:05 : epoch 6924295d : compute-1 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Nov 24 09:46:05 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[222390]: 24/11/2025 09:46:05 : epoch 6924295d : compute-1 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Nov 24 09:46:05 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[222390]: 24/11/2025 09:46:05 : epoch 6924295d : compute-1 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Nov 24 09:46:05 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[222390]: 24/11/2025 09:46:05 : epoch 6924295d : compute-1 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Nov 24 09:46:05 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[222390]: 24/11/2025 09:46:05 : epoch 6924295d : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 09:46:05 compute-1 sudo[222512]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vfpdtvfohbelfdeslocdpnnxtnykwbah ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977565.7041512-2986-134248194750580/AnsiballZ_file.py'
Nov 24 09:46:05 compute-1 sudo[222512]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:46:06 compute-1 python3.9[222514]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova/instances setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 09:46:06 compute-1 sudo[222512]: pam_unix(sudo:session): session closed for user root
Nov 24 09:46:06 compute-1 ceph-mon[80009]: pgmap v566: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Nov 24 09:46:06 compute-1 sudo[222664]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wwogntwjcmvlpshggkjtevatbschmuwu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977566.2721334-2986-154764307799469/AnsiballZ_file.py'
Nov 24 09:46:06 compute-1 sudo[222664]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:46:06 compute-1 python3.9[222666]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/etc/ceph setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 09:46:06 compute-1 sudo[222664]: pam_unix(sudo:session): session closed for user root
Nov 24 09:46:07 compute-1 sudo[222816]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-htdpkriktecujxppupcvlgnpzkxgtiil ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977566.8379643-2986-170862031878789/AnsiballZ_file.py'
Nov 24 09:46:07 compute-1 sudo[222816]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:46:07 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:46:07 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:46:07 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:46:07.189 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:46:07 compute-1 python3.9[222818]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 24 09:46:07 compute-1 sudo[222816]: pam_unix(sudo:session): session closed for user root
Nov 24 09:46:07 compute-1 sudo[222968]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mctktucnpmjotoyewajtsyzlimitynqj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977567.4160943-2986-53712452582506/AnsiballZ_file.py'
Nov 24 09:46:07 compute-1 sudo[222968]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:46:07 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:46:07 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:46:07 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:46:07.791 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:46:07 compute-1 python3.9[222970]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/nvme setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 24 09:46:07 compute-1 sudo[222968]: pam_unix(sudo:session): session closed for user root
Nov 24 09:46:08 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 24 09:46:08 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 24 09:46:08 compute-1 sudo[223121]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vhradjfmdlkdbmroswyxpcjfkmezrmkk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977568.0984228-2986-124887594006824/AnsiballZ_file.py'
Nov 24 09:46:08 compute-1 sudo[223121]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:46:08 compute-1 sudo[223124]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 24 09:46:08 compute-1 sudo[223124]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:46:08 compute-1 sudo[223124]: pam_unix(sudo:session): session closed for user root
Nov 24 09:46:08 compute-1 python3.9[223123]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/run/openvswitch setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 24 09:46:08 compute-1 sudo[223121]: pam_unix(sudo:session): session closed for user root
Nov 24 09:46:08 compute-1 ceph-mon[80009]: pgmap v567: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Nov 24 09:46:08 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:46:08 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:46:09 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:46:09 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:46:09 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:46:09 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:46:09.191 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:46:09 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:46:09 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:46:09 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:46:09.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:46:10 compute-1 ceph-mon[80009]: pgmap v568: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 85 B/s wr, 0 op/s
Nov 24 09:46:11 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:46:11 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:46:11 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:46:11.193 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:46:11 compute-1 podman[223174]: 2025-11-24 09:46:11.27143396 +0000 UTC m=+0.051208981 container health_status 16798e42088c21440960ac0ba4f86339f9edab5c078277a1dcf4fb01c220329c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0)
Nov 24 09:46:11 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:46:11 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:46:11 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:46:11.799 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:46:11 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[222390]: 24/11/2025 09:46:11 : epoch 6924295d : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 09:46:11 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[222390]: 24/11/2025 09:46:11 : epoch 6924295d : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 09:46:12 compute-1 ceph-mon[80009]: pgmap v569: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 597 B/s wr, 2 op/s
Nov 24 09:46:13 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:46:13 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:46:13 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:46:13.196 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:46:13 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:46:13 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:46:13 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:46:13.802 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:46:14 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:46:14 compute-1 sudo[223321]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vbiccxakwhhkbgqyoowcdkgiiykfibkq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977574.1354973-3311-280253631703175/AnsiballZ_getent.py'
Nov 24 09:46:14 compute-1 sudo[223321]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:46:14 compute-1 python3.9[223323]: ansible-ansible.builtin.getent Invoked with database=passwd key=nova fail_key=True service=None split=None
Nov 24 09:46:14 compute-1 sudo[223321]: pam_unix(sudo:session): session closed for user root
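
The getent invocation above probes the passwd database for a nova account ahead of the group and user creation that follows; a sketch of a task matching the logged parameters (task name assumed):

    - name: Look up nova in passwd            # task name is an assumption
      ansible.builtin.getent:
        database: passwd
        key: nova
        fail_key: true
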
Nov 24 09:46:14 compute-1 ceph-mon[80009]: pgmap v570: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Nov 24 09:46:15 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[222390]: 24/11/2025 09:46:15 : epoch 6924295d : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 24 09:46:15 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[222390]: 24/11/2025 09:46:15 : epoch 6924295d : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 09:46:15 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[222390]: 24/11/2025 09:46:15 : epoch 6924295d : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 09:46:15 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[222390]: 24/11/2025 09:46:15 : epoch 6924295d : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 09:46:15 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:46:15 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:46:15 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:46:15.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:46:15 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 09:46:15 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:46:15 compute-1 sudo[223474]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-osbguaulbxzfdgsbwbilapgokyfsbndz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977575.0410185-3335-62853127197307/AnsiballZ_group.py'
Nov 24 09:46:15 compute-1 sudo[223474]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:46:15 compute-1 python3.9[223476]: ansible-ansible.builtin.group Invoked with gid=42436 name=nova state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Nov 24 09:46:15 compute-1 groupadd[223477]: group added to /etc/group: name=nova, GID=42436
Nov 24 09:46:15 compute-1 groupadd[223477]: group added to /etc/gshadow: name=nova
Nov 24 09:46:15 compute-1 groupadd[223477]: new group: name=nova, GID=42436
Nov 24 09:46:15 compute-1 sudo[223474]: pam_unix(sudo:session): session closed for user root
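
The ansible.builtin.group invocation and the three groupadd audit records above (group nova, GID 42436, written to /etc/group and /etc/gshadow) correspond to a task of this shape (task name assumed; name, gid, and state taken from the log):

    - name: Create nova group                 # task name is an assumption
      ansible.builtin.group:
        name: nova
        gid: 42436
        state: present
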
Nov 24 09:46:15 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:46:15 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:46:15 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:46:15.805 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:46:15 compute-1 ceph-mon[80009]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #37. Immutable memtables: 0.
Nov 24 09:46:15 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:46:15.988710) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 24 09:46:15 compute-1 ceph-mon[80009]: rocksdb: [db/flush_job.cc:856] [default] [JOB 19] Flushing memtable with next log file: 37
Nov 24 09:46:15 compute-1 ceph-mon[80009]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763977575988762, "job": 19, "event": "flush_started", "num_memtables": 1, "num_entries": 1004, "num_deletes": 251, "total_data_size": 2407403, "memory_usage": 2444480, "flush_reason": "Manual Compaction"}
Nov 24 09:46:15 compute-1 ceph-mon[80009]: rocksdb: [db/flush_job.cc:885] [default] [JOB 19] Level-0 flush table #38: started
Nov 24 09:46:15 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:46:16 compute-1 ceph-mon[80009]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763977576001443, "cf_name": "default", "job": 19, "event": "table_file_creation", "file_number": 38, "file_size": 1553648, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 19831, "largest_seqno": 20830, "table_properties": {"data_size": 1549113, "index_size": 2187, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1285, "raw_key_size": 10009, "raw_average_key_size": 19, "raw_value_size": 1540015, "raw_average_value_size": 3013, "num_data_blocks": 98, "num_entries": 511, "num_filter_entries": 511, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763977498, "oldest_key_time": 1763977498, "file_creation_time": 1763977575, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "299c38d0-06ca-4074-b462-97cee3c14bc3", "db_session_id": "IKBI0BILOO7CZC90TSBP", "orig_file_number": 38, "seqno_to_time_mapping": "N/A"}}
Nov 24 09:46:16 compute-1 ceph-mon[80009]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 19] Flush lasted 12778 microseconds, and 4650 cpu microseconds.
Nov 24 09:46:16 compute-1 ceph-mon[80009]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 24 09:46:16 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:46:16.001495) [db/flush_job.cc:967] [default] [JOB 19] Level-0 flush table #38: 1553648 bytes OK
Nov 24 09:46:16 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:46:16.001516) [db/memtable_list.cc:519] [default] Level-0 commit table #38 started
Nov 24 09:46:16 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:46:16.003032) [db/memtable_list.cc:722] [default] Level-0 commit table #38: memtable #1 done
Nov 24 09:46:16 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:46:16.003052) EVENT_LOG_v1 {"time_micros": 1763977576003048, "job": 19, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 24 09:46:16 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:46:16.003068) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 24 09:46:16 compute-1 ceph-mon[80009]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 19] Try to delete WAL files size 2402380, prev total WAL file size 2402380, number of live WAL files 2.
Nov 24 09:46:16 compute-1 ceph-mon[80009]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000034.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 09:46:16 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:46:16.003837) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031323535' seq:72057594037927935, type:22 .. '7061786F730031353037' seq:0, type:0; will stop at (end)
Nov 24 09:46:16 compute-1 ceph-mon[80009]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 20] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 24 09:46:16 compute-1 ceph-mon[80009]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 19 Base level 0, inputs: [38(1517KB)], [36(12MB)]
Nov 24 09:46:16 compute-1 ceph-mon[80009]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763977576003861, "job": 20, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [38], "files_L6": [36], "score": -1, "input_data_size": 15136180, "oldest_snapshot_seqno": -1}
Nov 24 09:46:16 compute-1 ceph-mon[80009]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 20] Generated table #39: 5007 keys, 12955061 bytes, temperature: kUnknown
Nov 24 09:46:16 compute-1 ceph-mon[80009]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763977576077870, "cf_name": "default", "job": 20, "event": "table_file_creation", "file_number": 39, "file_size": 12955061, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12920672, "index_size": 20775, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 12549, "raw_key_size": 127779, "raw_average_key_size": 25, "raw_value_size": 12828912, "raw_average_value_size": 2562, "num_data_blocks": 851, "num_entries": 5007, "num_filter_entries": 5007, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763976422, "oldest_key_time": 0, "file_creation_time": 1763977576, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "299c38d0-06ca-4074-b462-97cee3c14bc3", "db_session_id": "IKBI0BILOO7CZC90TSBP", "orig_file_number": 39, "seqno_to_time_mapping": "N/A"}}
Nov 24 09:46:16 compute-1 ceph-mon[80009]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 24 09:46:16 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:46:16.078075) [db/compaction/compaction_job.cc:1663] [default] [JOB 20] Compacted 1@0 + 1@6 files to L6 => 12955061 bytes
Nov 24 09:46:16 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:46:16.079227) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 204.3 rd, 174.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.5, 13.0 +0.0 blob) out(12.4 +0.0 blob), read-write-amplify(18.1) write-amplify(8.3) OK, records in: 5523, records dropped: 516 output_compression: NoCompression
Nov 24 09:46:16 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:46:16.079242) EVENT_LOG_v1 {"time_micros": 1763977576079235, "job": 20, "event": "compaction_finished", "compaction_time_micros": 74073, "compaction_time_cpu_micros": 24410, "output_level": 6, "num_output_files": 1, "total_output_size": 12955061, "num_input_records": 5523, "num_output_records": 5007, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 24 09:46:16 compute-1 ceph-mon[80009]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000038.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 09:46:16 compute-1 ceph-mon[80009]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763977576079672, "job": 20, "event": "table_file_deletion", "file_number": 38}
Nov 24 09:46:16 compute-1 ceph-mon[80009]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000036.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 09:46:16 compute-1 ceph-mon[80009]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763977576082256, "job": 20, "event": "table_file_deletion", "file_number": 36}
Nov 24 09:46:16 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:46:16.003752) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 09:46:16 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:46:16.082371) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 09:46:16 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:46:16.082377) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 09:46:16 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:46:16.082378) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 09:46:16 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:46:16.082379) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 09:46:16 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:46:16.082381) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 09:46:16 compute-1 sudo[223633]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hrfoawuwoubbsspkxuzlmtfuzghucljk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977576.1245985-3359-48867852019271/AnsiballZ_user.py'
Nov 24 09:46:16 compute-1 sudo[223633]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:46:16 compute-1 python3.9[223635]: ansible-ansible.builtin.user Invoked with comment=nova user group=nova groups=['libvirt'] name=nova shell=/bin/sh state=present uid=42436 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-1 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Nov 24 09:46:16 compute-1 useradd[223637]: new user: name=nova, UID=42436, GID=42436, home=/home/nova, shell=/bin/sh, from=/dev/pts/0
Nov 24 09:46:16 compute-1 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 24 09:46:16 compute-1 useradd[223637]: add 'nova' to group 'libvirt'
Nov 24 09:46:16 compute-1 useradd[223637]: add 'nova' to shadow group 'libvirt'
Nov 24 09:46:16 compute-1 sudo[223633]: pam_unix(sudo:session): session closed for user root
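
Likewise, the ansible.builtin.user invocation and the useradd records above (new user nova, UID 42436, home /home/nova, shell /bin/sh, added to the libvirt group) match a task like the following sketch (task name assumed; parameters copied from the logged invocation):

    - name: Create nova user                  # task name is an assumption
      ansible.builtin.user:
        name: nova
        comment: nova user
        uid: 42436
        group: nova
        groups:
          - libvirt
        shell: /bin/sh
        state: present
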
Nov 24 09:46:17 compute-1 ceph-mon[80009]: pgmap v571: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Nov 24 09:46:17 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:46:17 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:46:17 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:46:17.200 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:46:17 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:46:17 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:46:17 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:46:17.809 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:46:18 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[222390]: 24/11/2025 09:46:18 : epoch 6924295d : compute-1 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Nov 24 09:46:18 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[222390]: 24/11/2025 09:46:18 : epoch 6924295d : compute-1 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Nov 24 09:46:18 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[222390]: 24/11/2025 09:46:18 : epoch 6924295d : compute-1 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Nov 24 09:46:18 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[222390]: 24/11/2025 09:46:18 : epoch 6924295d : compute-1 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Nov 24 09:46:18 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[222390]: 24/11/2025 09:46:18 : epoch 6924295d : compute-1 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Nov 24 09:46:18 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[222390]: 24/11/2025 09:46:18 : epoch 6924295d : compute-1 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Nov 24 09:46:18 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[222390]: 24/11/2025 09:46:18 : epoch 6924295d : compute-1 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Nov 24 09:46:18 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[222390]: 24/11/2025 09:46:18 : epoch 6924295d : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 24 09:46:18 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[222390]: 24/11/2025 09:46:18 : epoch 6924295d : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 24 09:46:18 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[222390]: 24/11/2025 09:46:18 : epoch 6924295d : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 24 09:46:18 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[222390]: 24/11/2025 09:46:18 : epoch 6924295d : compute-1 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Nov 24 09:46:18 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[222390]: 24/11/2025 09:46:18 : epoch 6924295d : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 24 09:46:18 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[222390]: 24/11/2025 09:46:18 : epoch 6924295d : compute-1 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Nov 24 09:46:18 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[222390]: 24/11/2025 09:46:18 : epoch 6924295d : compute-1 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Nov 24 09:46:18 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[222390]: 24/11/2025 09:46:18 : epoch 6924295d : compute-1 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Nov 24 09:46:18 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[222390]: 24/11/2025 09:46:18 : epoch 6924295d : compute-1 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Nov 24 09:46:18 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[222390]: 24/11/2025 09:46:18 : epoch 6924295d : compute-1 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Nov 24 09:46:18 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[222390]: 24/11/2025 09:46:18 : epoch 6924295d : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Nov 24 09:46:18 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[222390]: 24/11/2025 09:46:18 : epoch 6924295d : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Nov 24 09:46:18 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[222390]: 24/11/2025 09:46:18 : epoch 6924295d : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Nov 24 09:46:18 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[222390]: 24/11/2025 09:46:18 : epoch 6924295d : compute-1 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Nov 24 09:46:18 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[222390]: 24/11/2025 09:46:18 : epoch 6924295d : compute-1 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Nov 24 09:46:18 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[222390]: 24/11/2025 09:46:18 : epoch 6924295d : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Nov 24 09:46:18 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[222390]: 24/11/2025 09:46:18 : epoch 6924295d : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Nov 24 09:46:18 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[222390]: 24/11/2025 09:46:18 : epoch 6924295d : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Nov 24 09:46:18 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[222390]: 24/11/2025 09:46:18 : epoch 6924295d : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Nov 24 09:46:18 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[222390]: 24/11/2025 09:46:18 : epoch 6924295d : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Nov 24 09:46:18 compute-1 sshd-session[223682]: Accepted publickey for zuul from 192.168.122.30 port 56432 ssh2: ECDSA SHA256:MeSde0OmmlmFVnLWx/OKNxgeUUFhxUB3MA0eUyH5QEE
Nov 24 09:46:18 compute-1 systemd-logind[823]: New session 54 of user zuul.
Nov 24 09:46:18 compute-1 systemd[1]: Started Session 54 of User zuul.
Nov 24 09:46:18 compute-1 sshd-session[223682]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 24 09:46:18 compute-1 podman[223684]: 2025-11-24 09:46:18.350848038 +0000 UTC m=+0.084976936 container health_status c4a7fece2a8abbd773f184380ebf0443df5298e1c3e7a580495db5c46749acd2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS)
Nov 24 09:46:18 compute-1 sshd-session[223702]: Received disconnect from 192.168.122.30 port 56432:11: disconnected by user
Nov 24 09:46:18 compute-1 sshd-session[223702]: Disconnected from user zuul 192.168.122.30 port 56432
Nov 24 09:46:18 compute-1 sshd-session[223682]: pam_unix(sshd:session): session closed for user zuul
Nov 24 09:46:18 compute-1 systemd[1]: session-54.scope: Deactivated successfully.
Nov 24 09:46:18 compute-1 systemd-logind[823]: Session 54 logged out. Waiting for processes to exit.
Nov 24 09:46:18 compute-1 systemd-logind[823]: Removed session 54.
Nov 24 09:46:18 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[222390]: 24/11/2025 09:46:18 : epoch 6924295d : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c94000df0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:46:19 compute-1 python3.9[223866]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/config.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 09:46:19 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:46:19 compute-1 ceph-mon[80009]: pgmap v572: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 3.0 KiB/s rd, 1.1 KiB/s wr, 4 op/s
Nov 24 09:46:19 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:46:19 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:46:19 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:46:19.203 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:46:19 compute-1 python3.9[223987]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/config.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1763977578.633077-3434-241421181591975/.source.json follow=False _original_basename=config.json.j2 checksum=b51012bfb0ca26296dcf3793a2f284446fb1395e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
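
An ansible.legacy.copy of a rendered ".source.json" with _original_basename=config.json.j2, as logged above, is how an ansible.builtin.template task typically surfaces on the managed host (the template is rendered on the controller and then copied). A plausible originating task, with the name and controller-side template path assumed:

    - name: Render nova config.json           # task name is an assumption
      ansible.builtin.template:
        src: config.json.j2                   # controller-side path; assumed
        dest: /var/lib/openstack/config/nova/config.json
        mode: "0644"
        setype: container_file_t
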
Nov 24 09:46:19 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:46:19 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:46:19 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:46:19.812 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:46:19 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[222390]: 24/11/2025 09:46:19 : epoch 6924295d : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c840016e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:46:19 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[222390]: 24/11/2025 09:46:19 : epoch 6924295d : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c70000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:46:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:46:20.047 142336 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 09:46:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:46:20.048 142336 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 09:46:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:46:20.048 142336 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 09:46:20 compute-1 python3.9[224138]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova-blank.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 09:46:20 compute-1 python3.9[224214]: ansible-ansible.legacy.file Invoked with mode=0644 setype=container_file_t dest=/var/lib/openstack/config/nova/nova-blank.conf _original_basename=nova-blank.conf recurse=False state=file path=/var/lib/openstack/config/nova/nova-blank.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 24 09:46:20 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[222390]: 24/11/2025 09:46:20 : epoch 6924295d : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c94000df0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:46:20 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-haproxy-nfs-cephfs-compute-1-rsdpvy[85975]: [WARNING] 327/094620 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 0 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Nov 24 09:46:20 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-haproxy-nfs-cephfs-compute-1-rsdpvy[85975]: [ALERT] 327/094620 (4) : backend 'backend' has no server available!
Nov 24 09:46:21 compute-1 python3.9[224364]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/ssh-config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 09:46:21 compute-1 ceph-mon[80009]: pgmap v573: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s rd, 1023 B/s wr, 3 op/s
Nov 24 09:46:21 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:46:21 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:46:21 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:46:21.205 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:46:21 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-haproxy-nfs-cephfs-compute-1-rsdpvy[85975]: [WARNING] 327/094621 (4) : Server backend/nfs.cephfs.2 is UP, reason: Layer4 check passed, check duration: 0ms. 1 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Nov 24 09:46:21 compute-1 python3.9[224485]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/ssh-config mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1763977580.6456141-3434-197341046069838/.source follow=False _original_basename=ssh-config checksum=4297f735c41bdc1ff52d72e6f623a02242f37958 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 24 09:46:21 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:46:21 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:46:21 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:46:21.815 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:46:21 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[222390]: 24/11/2025 09:46:21 : epoch 6924295d : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c78000fa0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:46:21 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-haproxy-nfs-cephfs-compute-1-rsdpvy[85975]: [WARNING] 327/094621 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 2 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Nov 24 09:46:21 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[222390]: 24/11/2025 09:46:21 : epoch 6924295d : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c840016e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:46:22 compute-1 python3.9[224636]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/02-nova-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 09:46:22 compute-1 python3.9[224757]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/02-nova-host-specific.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1763977581.7029486-3434-197795321611245/.source.conf follow=False _original_basename=02-nova-host-specific.conf.j2 checksum=bc7f3bb7d4094c596a18178a888511b54e157ba4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 24 09:46:22 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[222390]: 24/11/2025 09:46:22 : epoch 6924295d : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c840016e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:46:23 compute-1 python3.9[224907]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova_statedir_ownership.py follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 09:46:23 compute-1 ceph-mon[80009]: pgmap v574: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.8 KiB/s rd, 1023 B/s wr, 4 op/s
Nov 24 09:46:23 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:46:23 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:46:23 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:46:23.207 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:46:23 compute-1 python3.9[225028]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/nova_statedir_ownership.py mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1763977582.7327955-3434-97429587186870/.source.py follow=False _original_basename=nova_statedir_ownership.py checksum=c6c8a3cfefa5efd60ceb1408c4e977becedb71e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
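
Unlike the config.json entry earlier, this legacy.copy carries a non-.j2 original basename (nova_statedir_ownership.py), which suggests a plain controller-to-host copy rather than a template render; a sketch of such a task (name and controller-side src path assumed):

    - name: Install nova_statedir_ownership.py   # task name is an assumption
      ansible.builtin.copy:
        src: nova_statedir_ownership.py          # controller-side path; assumed
        dest: /var/lib/openstack/config/nova/nova_statedir_ownership.py
        mode: "0644"
        setype: container_file_t
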
Nov 24 09:46:23 compute-1 sudo[225029]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 09:46:23 compute-1 sudo[225029]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:46:23 compute-1 sudo[225029]: pam_unix(sudo:session): session closed for user root
Nov 24 09:46:23 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:46:23 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 24 09:46:23 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:46:23.817 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 24 09:46:23 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[222390]: 24/11/2025 09:46:23 : epoch 6924295d : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c94000df0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:46:23 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[222390]: 24/11/2025 09:46:23 : epoch 6924295d : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c94000df0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:46:24 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:46:24 compute-1 python3.9[225204]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/run-on-host follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 09:46:24 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[222390]: 24/11/2025 09:46:24 : epoch 6924295d : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c700016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:46:24 compute-1 python3.9[225325]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/run-on-host mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1763977583.7848728-3434-17790481739851/.source follow=False _original_basename=run-on-host checksum=93aba8edc83d5878604a66d37fea2f12b60bdea2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 24 09:46:25 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:46:25 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:46:25 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:46:25.209 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:46:25 compute-1 ceph-mon[80009]: pgmap v575: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 511 B/s wr, 2 op/s
Nov 24 09:46:25 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:46:25 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:46:25 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:46:25.820 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:46:25 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[222390]: 24/11/2025 09:46:25 : epoch 6924295d : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c840016e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:46:25 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[222390]: 24/11/2025 09:46:25 : epoch 6924295d : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c94000df0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:46:26 compute-1 sudo[225476]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mmbypxqbwyssesfwsgxamikulpsazaky ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977586.1456857-3683-129590770182123/AnsiballZ_file.py'
Nov 24 09:46:26 compute-1 sudo[225476]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:46:26 compute-1 python3.9[225478]: ansible-ansible.builtin.file Invoked with group=nova mode=0700 owner=nova path=/home/nova/.ssh state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:46:26 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[222390]: 24/11/2025 09:46:26 : epoch 6924295d : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c78001ac0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:46:26 compute-1 sudo[225476]: pam_unix(sudo:session): session closed for user root
Nov 24 09:46:27 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:46:27 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:46:27 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:46:27.211 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:46:27 compute-1 ceph-mon[80009]: pgmap v576: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 511 B/s wr, 2 op/s
Nov 24 09:46:27 compute-1 podman[225602]: 2025-11-24 09:46:27.260388423 +0000 UTC m=+0.057466068 container health_status 6794d9d2f16f865643977633ea2bfec0506af6c4f15262c278b71a4c0754ea0b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent)
Nov 24 09:46:27 compute-1 sudo[225645]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dwlbeuqoqzjggfwyaeluqjrliubxwxdy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977586.9683104-3708-117366373462528/AnsiballZ_copy.py'
Nov 24 09:46:27 compute-1 sudo[225645]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:46:27 compute-1 python3.9[225649]: ansible-ansible.legacy.copy Invoked with dest=/home/nova/.ssh/authorized_keys group=nova mode=0600 owner=nova remote_src=True src=/var/lib/openstack/config/nova/ssh-publickey backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:46:27 compute-1 sudo[225645]: pam_unix(sudo:session): session closed for user root
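
The copy invocation above installs the deployment public key with remote_src=True, meaning the source file already resides on compute-1 rather than on the controller; an equivalent task (task name assumed; paths, ownership, and mode taken from the log):

    - name: Install nova authorized_keys      # task name is an assumption
      ansible.builtin.copy:
        src: /var/lib/openstack/config/nova/ssh-publickey
        dest: /home/nova/.ssh/authorized_keys
        remote_src: true
        owner: nova
        group: nova
        mode: "0600"
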
Nov 24 09:46:27 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:46:27 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:46:27 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:46:27.823 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:46:27 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[222390]: 24/11/2025 09:46:27 : epoch 6924295d : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c70001fc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:46:27 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[222390]: 24/11/2025 09:46:27 : epoch 6924295d : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c840016e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:46:27 compute-1 sudo[225800]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zngfrrcebqilufowmlbsduhchxjsqdpn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977587.74149-3731-210807458044219/AnsiballZ_stat.py'
Nov 24 09:46:27 compute-1 sudo[225800]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:46:28 compute-1 python3.9[225802]: ansible-ansible.builtin.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 09:46:28 compute-1 sudo[225800]: pam_unix(sudo:session): session closed for user root
Nov 24 09:46:28 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[222390]: 24/11/2025 09:46:28 : epoch 6924295d : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c940095a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:46:28 compute-1 sudo[225952]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sfzvdryxtqtjvnnfblkqqzcomubzsrwd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977588.4558976-3755-33318972044668/AnsiballZ_stat.py'
Nov 24 09:46:28 compute-1 sudo[225952]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:46:28 compute-1 python3.9[225954]: ansible-ansible.legacy.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 09:46:28 compute-1 sudo[225952]: pam_unix(sudo:session): session closed for user root
Nov 24 09:46:29 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:46:29 compute-1 sudo[226075]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uxlyhmwudtkvkatmxvjajzjzsfplrckz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977588.4558976-3755-33318972044668/AnsiballZ_copy.py'
Nov 24 09:46:29 compute-1 sudo[226075]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:46:29 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:46:29 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:46:29 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:46:29.213 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:46:29 compute-1 ceph-mon[80009]: pgmap v577: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 511 B/s wr, 2 op/s
Nov 24 09:46:29 compute-1 python3.9[226077]: ansible-ansible.legacy.copy Invoked with attributes=+i dest=/var/lib/nova/compute_id group=nova mode=0400 owner=nova src=/home/zuul/.ansible/tmp/ansible-tmp-1763977588.4558976-3755-33318972044668/.source _original_basename=.z_tnyq5w follow=False checksum=f245e5d71a28845d8f9ab8777612e6084ea6ae5a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None
Nov 24 09:46:29 compute-1 sudo[226075]: pam_unix(sudo:session): session closed for user root
Nov 24 09:46:29 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:46:29 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:46:29 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:46:29.826 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:46:29 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[222390]: 24/11/2025 09:46:29 : epoch 6924295d : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c78002470 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:46:29 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[222390]: 24/11/2025 09:46:29 : epoch 6924295d : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c70001fc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:46:30 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[222390]: 24/11/2025 09:46:30 : epoch 6924295d : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 09:46:30 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 09:46:30 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:46:30 compute-1 python3.9[226230]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 09:46:30 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[222390]: 24/11/2025 09:46:30 : epoch 6924295d : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c840016e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:46:31 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:46:31 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:46:31 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:46:31.215 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:46:31 compute-1 ceph-mon[80009]: pgmap v578: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 B/s wr, 0 op/s
Nov 24 09:46:31 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:46:31 compute-1 python3.9[226382]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 09:46:31 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-haproxy-nfs-cephfs-compute-1-rsdpvy[85975]: [WARNING] 327/094631 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Nov 24 09:46:31 compute-1 python3.9[226503]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1763977590.9191883-3833-175940409423554/.source.json follow=False _original_basename=nova_compute.json.j2 checksum=211ffd0bca4b407eb4de45a749ef70116a7806fd backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 24 09:46:31 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:46:31 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:46:31 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:46:31.829 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:46:31 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[222390]: 24/11/2025 09:46:31 : epoch 6924295d : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c940095a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:46:31 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[222390]: 24/11/2025 09:46:31 : epoch 6924295d : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c78002470 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:46:32 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[222390]: 24/11/2025 09:46:32 : epoch 6924295d : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c70001fc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:46:32 compute-1 python3.9[226654]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute_init.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 09:46:33 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[222390]: 24/11/2025 09:46:33 : epoch 6924295d : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 09:46:33 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[222390]: 24/11/2025 09:46:33 : epoch 6924295d : compute-1 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 09:46:33 compute-1 python3.9[226775]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute_init.json mode=0700 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1763977592.212074-3878-70969304307777/.source.json follow=False _original_basename=nova_compute_init.json.j2 checksum=60b024e6db49dc6e700fc0d50263944d98d4c034 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 24 09:46:33 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:46:33 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:46:33 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:46:33.217 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:46:33 compute-1 ceph-mon[80009]: pgmap v579: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 426 B/s wr, 1 op/s
Nov 24 09:46:33 compute-1 ceph-osd[77497]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 24 09:46:33 compute-1 ceph-osd[77497]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Cumulative writes: 6909 writes, 27K keys, 6909 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
                                           Cumulative WAL: 6909 writes, 1355 syncs, 5.10 writes per sync, written: 0.02 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 485 writes, 766 keys, 485 commit groups, 1.0 writes per commit group, ingest: 0.25 MB, 0.00 MB/s
                                           Interval WAL: 485 writes, 231 syncs, 2.10 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5634bb9db350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 2.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5634bb9db350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 2.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5634bb9db350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 2.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5634bb9db350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 2.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.6      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.6      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.6      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5634bb9db350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 2.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5634bb9db350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 2.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5634bb9db350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 2.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5634bb9da9b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5634bb9da9b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.6      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.6      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.6      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5634bb9da9b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5634bb9db350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 2.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5634bb9db350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 2.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
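[annotation] The "portion" percentages in the block cache entry stats above are consistent with size/capacity from the BinnedLRUCache line: 1.42 KB of data blocks against the 1.12 GB capacity works out to about 0.00012%, matching the logged 0.000120534% up to the rounding of the displayed sizes. Note also that the logged occupancy 18446744073709551615 is exactly 2**64 - 1, i.e. what a -1 looks like in an unsigned 64-bit counter. A minimal arithmetic sketch, assuming the KB/GB units are binary (which is how these numbers reproduce):

    # Sketch: reproduce RocksDB's "portion" percentage from the cache dump above.
    # Assumes the displayed KB/GB are binary units (KiB/GiB).
    capacity    = 1.12 * 1024**3        # "capacity: 1.12 GB"
    data_blocks = 1.42 * 1024           # "DataBlock(3,1.42 KB,...)"
    print(f"{data_blocks / capacity * 100:.9f}%")   # ~0.0001209%, vs logged 0.000120534%
    assert 18446744073709551615 == 2**64 - 1        # the "occupancy" value is unsigned -1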
Nov 24 09:46:33 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:46:33 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:46:33 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:46:33.832 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
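[annotation] The radosgw "beast" lines that recur through this log follow a fixed access-log layout: request pointer, peer address, user, bracketed timestamp, quoted request line, status, byte count, then a latency field. A parser sketch based on the layout visible in the line above; the field names are my own labels, not radosgw's:

    import re

    # Sketch: extract the useful fields from a radosgw "beast" access-log line.
    BEAST = re.compile(
        r'beast: \S+: (?P<peer>\S+) - (?P<user>\S+) '
        r'\[(?P<ts>[^\]]+)\] "(?P<request>[^"]+)" '
        r'(?P<status>\d+) (?P<bytes>\d+) .* latency=(?P<latency>[\d.]+)s'
    )

    def parse_beast(line: str):
        m = BEAST.search(line)
        return m.groupdict() if m else None

Run against the line above, this yields peer=192.168.122.102, user=anonymous, status=200, bytes=0, latency=0.001000024.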
Nov 24 09:46:33 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[222390]: 24/11/2025 09:46:33 : epoch 6924295d : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c840016e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:46:33 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[222390]: 24/11/2025 09:46:33 : epoch 6924295d : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c940095a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:46:34 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:46:34 compute-1 sudo[226926]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mpkrdmppcgoipbicqvxfmzwaudyqvljn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977593.9279647-3929-138442230842259/AnsiballZ_container_config_data.py'
Nov 24 09:46:34 compute-1 sudo[226926]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:46:34 compute-1 python3.9[226928]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute_init.json debug=False
Nov 24 09:46:34 compute-1 sudo[226926]: pam_unix(sudo:session): session closed for user root
Nov 24 09:46:34 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[222390]: 24/11/2025 09:46:34 : epoch 6924295d : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c78002470 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:46:35 compute-1 sudo[227078]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hhrrxijwnozuzntcgzdnpkfozwokssdl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977594.794151-3956-255035995524598/AnsiballZ_container_config_hash.py'
Nov 24 09:46:35 compute-1 sudo[227078]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:46:35 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:46:35 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:46:35 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:46:35.218 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:46:35 compute-1 python3.9[227080]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Nov 24 09:46:35 compute-1 sudo[227078]: pam_unix(sudo:session): session closed for user root
Nov 24 09:46:35 compute-1 ceph-mon[80009]: pgmap v580: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 426 B/s wr, 1 op/s
Nov 24 09:46:35 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:46:35 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:46:35 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:46:35.835 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:46:35 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[222390]: 24/11/2025 09:46:35 : epoch 6924295d : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c700032f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:46:35 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[222390]: 24/11/2025 09:46:35 : epoch 6924295d : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c840037a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:46:36 compute-1 sudo[227231]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jrrdortegburgviuxshvwigkmwmgznfu ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1763977595.7797067-3986-192811178518700/AnsiballZ_edpm_container_manage.py'
Nov 24 09:46:36 compute-1 sudo[227231]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:46:36 compute-1 python3[227233]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute_init.json log_base_path=/var/log/containers/stdouts debug=False
Nov 24 09:46:36 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[222390]: 24/11/2025 09:46:36 : epoch 6924295d : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c9400a2b0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:46:37 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:46:37 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:46:37 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:46:37.221 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:46:37 compute-1 ceph-mon[80009]: pgmap v581: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 426 B/s wr, 1 op/s
Nov 24 09:46:37 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[222390]: 24/11/2025 09:46:37 : epoch 6924295d : compute-1 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 09:46:37 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:46:37 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:46:37 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:46:37.837 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:46:37 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[222390]: 24/11/2025 09:46:37 : epoch 6924295d : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c78003730 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:46:37 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[222390]: 24/11/2025 09:46:37 : epoch 6924295d : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c78003730 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:46:38 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[222390]: 24/11/2025 09:46:38 : epoch 6924295d : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c78003730 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:46:39 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:46:39 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:46:39 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:46:39 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:46:39.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:46:39 compute-1 ceph-mon[80009]: pgmap v582: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Nov 24 09:46:39 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:46:39 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:46:39 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:46:39.840 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:46:39 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[222390]: 24/11/2025 09:46:39 : epoch 6924295d : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c6c000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:46:39 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[222390]: 24/11/2025 09:46:39 : epoch 6924295d : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c6c000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:46:40 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[222390]: 24/11/2025 09:46:40 : epoch 6924295d : compute-1 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Nov 24 09:46:40 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[222390]: 24/11/2025 09:46:40 : epoch 6924295d : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c700032f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:46:41 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:46:41 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:46:41 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:46:41.224 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:46:41 compute-1 ceph-mon[80009]: pgmap v583: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Nov 24 09:46:41 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[222390]: 24/11/2025 09:46:41 : epoch 6924295d : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 09:46:41 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:46:41 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:46:41 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:46:41.843 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:46:41 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[222390]: 24/11/2025 09:46:41 : epoch 6924295d : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c840037a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:46:41 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[222390]: 24/11/2025 09:46:41 : epoch 6924295d : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c9400a2b0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:46:42 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[222390]: 24/11/2025 09:46:42 : epoch 6924295d : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c6c001b40 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:46:43 compute-1 podman[227289]: 2025-11-24 09:46:43.120060452 +0000 UTC m=+0.852095031 container health_status 16798e42088c21440960ac0ba4f86339f9edab5c078277a1dcf4fb01c220329c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 09:46:43 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:46:43 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:46:43 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:46:43.226 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:46:43 compute-1 sudo[227326]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 09:46:43 compute-1 sudo[227326]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:46:43 compute-1 sudo[227326]: pam_unix(sudo:session): session closed for user root
Nov 24 09:46:43 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:46:43 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:46:43 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:46:43.846 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:46:43 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[222390]: 24/11/2025 09:46:43 : epoch 6924295d : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c70004000 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:46:43 compute-1 kernel: ganesha.nfsd[223676]: segfault at 50 ip 00007f1d42f0e32e sp 00007f1d07ffe210 error 4 in libntirpc.so.5.8[7f1d42ef3000+2c000] likely on CPU 7 (core 0, socket 7)
Nov 24 09:46:43 compute-1 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Nov 24 09:46:43 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[222390]: 24/11/2025 09:46:43 : epoch 6924295d : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c840037a0 fd 39 proxy ignored for local
Nov 24 09:46:43 compute-1 systemd[1]: Started Process Core Dump (PID 227352/UID 0).
Nov 24 09:46:44 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:46:45 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:46:45 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:46:45 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:46:45.228 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:46:45 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 09:46:45 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:46:45 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:46:45 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:46:45 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:46:45.849 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:46:46 compute-1 ceph-mon[80009]: pgmap v584: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.6 KiB/s rd, 1.2 KiB/s wr, 3 op/s
Nov 24 09:46:46 compute-1 systemd-coredump[227353]: Process 222400 (ganesha.nfsd) of user 0 dumped core.
                                                    
                                                    Stack trace of thread 46:
                                                    #0  0x00007f1d42f0e32e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)
                                                    ELF object binary architecture: AMD x86-64
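[annotation] The kernel segfault line and the coredump frame above are two views of the same fault address. The kernel reports ip 0x7f1d42f0e32e inside the mapping libntirpc.so.5.8[7f1d42ef3000+2c000], while systemd-coredump reports the same address as libntirpc.so.5.8 + 0x2232e; the offsets differ because the kernel's base is the start of the faulting (executable) VMA, whereas the coredump offset is relative to the ELF load base. Both reports are mutually consistent, as the arithmetic below shows; on a systemd host the dump itself can be examined afterwards with coredumpctl (e.g. coredumpctl info 222400, the PID named above).

    # Sketch: reconcile the kernel segfault line with the coredump frame above.
    ip        = 0x7f1d42f0e32e   # faulting instruction pointer (both reports)
    vma_start = 0x7f1d42ef3000   # kernel: libntirpc.so.5.8[7f1d42ef3000+2c000]
    frame_off = 0x2232e          # coredump: libntirpc.so.5.8 + 0x2232e

    print(hex(ip - vma_start))   # 0x1b32e: offset into the executable segment,
                                 # inside the 0x2c000-byte mapping, as expected
    print(hex(ip - frame_off))   # 0x7f1d42eec000: the ELF load base
    # The two bases differ by 0x7000, i.e. the text segment starts 0x7000
    # past the load base -- consistent, not contradictory, reports.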
Nov 24 09:46:46 compute-1 systemd[1]: systemd-coredump@9-227352-0.service: Deactivated successfully.
Nov 24 09:46:46 compute-1 systemd[1]: systemd-coredump@9-227352-0.service: Consumed 1.134s CPU time.
Nov 24 09:46:47 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:46:47 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:46:47 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:46:47.230 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:46:47 compute-1 ceph-mon[80009]: pgmap v585: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 767 B/s wr, 2 op/s
Nov 24 09:46:47 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:46:47 compute-1 ceph-mon[80009]: pgmap v586: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 767 B/s wr, 2 op/s
Nov 24 09:46:47 compute-1 podman[227363]: 2025-11-24 09:46:47.261389991 +0000 UTC m=+0.574293543 container died 8bf9bf8a7bbfa60aa69b766603edc552d95a53eeb4b60bc9ac8acd9912952fb5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 24 09:46:47 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:46:47 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:46:47 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:46:47.852 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:46:48 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-haproxy-nfs-cephfs-compute-1-rsdpvy[85975]: [WARNING] 327/094648 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 0ms. 2 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Nov 24 09:46:49 compute-1 ceph-mon[80009]: pgmap v587: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 1.1 KiB/s wr, 3 op/s
Nov 24 09:46:49 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:46:49 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:46:49 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:46:49.232 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:46:49 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:46:49 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:46:49 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:46:49 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:46:49.856 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:46:50 compute-1 systemd[1]: var-lib-containers-storage-overlay-6b2de1769c175f32e837a1c6e5836b2db53c7621b3262e7d6f7d0028b6faa381-merged.mount: Deactivated successfully.
Nov 24 09:46:50 compute-1 podman[227363]: 2025-11-24 09:46:50.561183085 +0000 UTC m=+3.874086617 container remove 8bf9bf8a7bbfa60aa69b766603edc552d95a53eeb4b60bc9ac8acd9912952fb5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 09:46:50 compute-1 systemd[1]: ceph-84a084c3-61a7-5de7-8207-1f88efa59a64@nfs.cephfs.0.0.compute-1.vvoanr.service: Main process exited, code=exited, status=139/n/a
Nov 24 09:46:50 compute-1 podman[227245]: 2025-11-24 09:46:50.598740303 +0000 UTC m=+14.157810848 image pull 8e31b7b83c8d26bacd9598fdae1b287d27f8fa7d1d3cf4270dd8e435ff2f6a66 quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Nov 24 09:46:50 compute-1 podman[227383]: 2025-11-24 09:46:50.636248872 +0000 UTC m=+1.368622089 container health_status c4a7fece2a8abbd773f184380ebf0443df5298e1c3e7a580495db5c46749acd2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 24 09:46:50 compute-1 systemd[1]: ceph-84a084c3-61a7-5de7-8207-1f88efa59a64@nfs.cephfs.0.0.compute-1.vvoanr.service: Failed with result 'exit-code'.
Nov 24 09:46:50 compute-1 systemd[1]: ceph-84a084c3-61a7-5de7-8207-1f88efa59a64@nfs.cephfs.0.0.compute-1.vvoanr.service: Consumed 1.335s CPU time.
Nov 24 09:46:50 compute-1 podman[227460]: 2025-11-24 09:46:50.759033603 +0000 UTC m=+0.045890599 container create fe9899ea690749a9a0b3b5d5bfc012192a73e99876f03d91fe6d6c78aff266e9 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=nova_compute_init, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, config_id=edpm, org.label-schema.license=GPLv2, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team)
Nov 24 09:46:50 compute-1 podman[227460]: 2025-11-24 09:46:50.736418307 +0000 UTC m=+0.023275323 image pull 8e31b7b83c8d26bacd9598fdae1b287d27f8fa7d1d3cf4270dd8e435ff2f6a66 quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Nov 24 09:46:50 compute-1 python3[227233]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute_init --conmon-pidfile /run/nova_compute_init.pid --env NOVA_STATEDIR_OWNERSHIP_SKIP=/var/lib/nova/compute_id --env __OS_DEBUG=False --label config_id=edpm --label container_name=nova_compute_init --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']} --log-driver journald --log-level info --network none --privileged=False --security-opt label=disable --user root --volume /dev/log:/dev/log --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z --volume /var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init
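[annotation] The PODMAN-CONTAINER-DEBUG line above shows the translation ansible-edpm_container_manage performs: each key of config_data becomes the matching podman create flag (environment entries become --env, volumes become --volume, net becomes --network, security_opt becomes --security-opt), and the whole dict is also attached verbatim as a config_data label. A sketch of that mapping reconstructed from the logged argv only; it covers just the keys this log exercises and is not the module's actual code:

    # Sketch: rebuild the "podman create" argv shown in the debug line above
    # from its config_data dict. Handles only the keys visible in this log.
    def podman_create_args(name: str, config_id: str, cfg: dict) -> list[str]:
        args = ["podman", "create", "--name", name,
                "--conmon-pidfile", f"/run/{name}.pid"]
        for k, v in cfg.get("environment", {}).items():
            args += ["--env", f"{k}={v}"]
        for key, val in (("config_id", config_id), ("container_name", name),
                         ("managed_by", "edpm_ansible"), ("config_data", str(cfg))):
            args += ["--label", f"{key}={val}"]
        args += ["--log-driver", "journald", "--log-level", "info",
                 "--network", cfg["net"], f"--privileged={cfg['privileged']}"]
        for opt in cfg.get("security_opt", []):
            args += ["--security-opt", opt]
        args += ["--user", cfg["user"]]
        for vol in cfg.get("volumes", []):
            args += ["--volume", vol]
        # The command is appended after the image, as in the logged argv;
        # naive whitespace splitting happens to match the tokens shown above.
        return args + [cfg["image"]] + cfg.get("command", "").split()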
Nov 24 09:46:50 compute-1 sudo[227231]: pam_unix(sudo:session): session closed for user root
Nov 24 09:46:51 compute-1 ceph-mon[80009]: pgmap v588: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 1.1 KiB/s wr, 3 op/s
Nov 24 09:46:51 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:46:51 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:46:51 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:46:51.233 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:46:51 compute-1 sudo[227650]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-melxiiwzudbjcivygufqqjawkpvljtil ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977611.0998325-4010-267488844744574/AnsiballZ_stat.py'
Nov 24 09:46:51 compute-1 sudo[227650]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:46:51 compute-1 python3.9[227652]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 09:46:51 compute-1 sudo[227650]: pam_unix(sudo:session): session closed for user root
Nov 24 09:46:51 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:46:51 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:46:51 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:46:51.859 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:46:51 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-haproxy-nfs-cephfs-compute-1-rsdpvy[85975]: [WARNING] 327/094651 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Nov 24 09:46:52 compute-1 sudo[227805]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tdfqvpftftyrytmxwuurysrqyyrjlmhe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977612.238903-4046-134826746513576/AnsiballZ_container_config_data.py'
Nov 24 09:46:52 compute-1 sudo[227805]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:46:52 compute-1 python3.9[227807]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute.json debug=False
Nov 24 09:46:52 compute-1 sudo[227805]: pam_unix(sudo:session): session closed for user root
Nov 24 09:46:53 compute-1 ceph-mon[80009]: pgmap v589: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 1.2 KiB/s wr, 3 op/s
Nov 24 09:46:53 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:46:53 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 24 09:46:53 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:46:53.234 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 24 09:46:53 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:46:53 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:46:53 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:46:53.863 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:46:54 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:46:54 compute-1 sudo[227958]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mdbktrwaqdbgioehitvlzkwjdsowcrqj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977614.40627-4073-37840924413622/AnsiballZ_container_config_hash.py'
Nov 24 09:46:54 compute-1 sudo[227958]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:46:54 compute-1 python3.9[227960]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Nov 24 09:46:54 compute-1 sudo[227958]: pam_unix(sudo:session): session closed for user root
Nov 24 09:46:55 compute-1 ceph-mon[80009]: pgmap v590: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 494 B/s wr, 1 op/s
Nov 24 09:46:55 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:46:55 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:46:55 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:46:55.236 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:46:55 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-haproxy-nfs-cephfs-compute-1-rsdpvy[85975]: [WARNING] 327/094655 (4) : Server backend/nfs.cephfs.2 is UP, reason: Layer4 check passed, check duration: 0ms. 2 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Nov 24 09:46:55 compute-1 sudo[228110]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xpbpkcwpjxlzwwvdcpmidbtrrnnpqgmq ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1763977615.35751-4103-132865509045433/AnsiballZ_edpm_container_manage.py'
Nov 24 09:46:55 compute-1 sudo[228110]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:46:55 compute-1 python3[228112]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute.json log_base_path=/var/log/containers/stdouts debug=False
Nov 24 09:46:55 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:46:55 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:46:55 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:46:55.865 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:46:56 compute-1 podman[228149]: 2025-11-24 09:46:56.06272671 +0000 UTC m=+0.059989211 container create 4f12c09c2b5a5f528cb4999d6b76c38bdb5027d103adcf8bdc114c4275996463 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, config_id=edpm, managed_by=edpm_ansible, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, maintainer=OpenStack Kubernetes Operator team, container_name=nova_compute, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 24 09:46:56 compute-1 podman[228149]: 2025-11-24 09:46:56.034078773 +0000 UTC m=+0.031341284 image pull 8e31b7b83c8d26bacd9598fdae1b287d27f8fa7d1d3cf4270dd8e435ff2f6a66 quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Nov 24 09:46:56 compute-1 python3[228112]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute --conmon-pidfile /run/nova_compute.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --label config_id=edpm --label container_name=nova_compute --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']} --log-driver journald --log-level info --network host --pid host --privileged=True --user nova --volume /var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro --volume /var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /etc/localtime:/etc/localtime:ro --volume /lib/modules:/lib/modules:ro --volume /dev:/dev --volume /var/lib/libvirt:/var/lib/libvirt --volume /run/libvirt:/run/libvirt:shared --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/iscsi:/var/lib/iscsi --volume /etc/multipath:/etc/multipath:z --volume /etc/multipath.conf:/etc/multipath.conf:ro --volume /etc/iscsi:/etc/iscsi:ro --volume /etc/nvme:/etc/nvme --volume /var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro --volume /etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified kolla_start
Nov 24 09:46:56 compute-1 sudo[228110]: pam_unix(sudo:session): session closed for user root
Nov 24 09:46:56 compute-1 sudo[228339]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fxiwirysmgovaocfwmwcqsiqifvsnjrx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977616.629871-4127-35281168379058/AnsiballZ_stat.py'
Nov 24 09:46:56 compute-1 sudo[228339]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:46:57 compute-1 python3.9[228341]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 09:46:57 compute-1 sudo[228339]: pam_unix(sudo:session): session closed for user root
Nov 24 09:46:57 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:46:57 compute-1 ceph-mon[80009]: pgmap v591: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 494 B/s wr, 1 op/s
Nov 24 09:46:57 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:46:57 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:46:57.238 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:46:57 compute-1 sudo[228506]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-isjeahdyivspfyienranskzudwewfjmn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977617.4196482-4154-27980695575713/AnsiballZ_file.py'
Nov 24 09:46:57 compute-1 sudo[228506]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:46:57 compute-1 podman[228467]: 2025-11-24 09:46:57.717835912 +0000 UTC m=+0.060574676 container health_status 6794d9d2f16f865643977633ea2bfec0506af6c4f15262c278b71a4c0754ea0b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251118, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 24 09:46:57 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:46:57 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:46:57 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:46:57.868 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:46:57 compute-1 python3.9[228514]: ansible-file Invoked with path=/etc/systemd/system/edpm_nova_compute.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:46:57 compute-1 sudo[228506]: pam_unix(sudo:session): session closed for user root
Nov 24 09:46:58 compute-1 sudo[228664]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sqzxhbpwkezytlgajupvlyhfilipnudr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977617.9770048-4154-184015674217826/AnsiballZ_copy.py'
Nov 24 09:46:58 compute-1 sudo[228664]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:46:58 compute-1 python3.9[228666]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1763977617.9770048-4154-184015674217826/source dest=/etc/systemd/system/edpm_nova_compute.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:46:58 compute-1 sudo[228664]: pam_unix(sudo:session): session closed for user root
Nov 24 09:46:58 compute-1 sudo[228740]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wdatajjleshxhwbbyrygqxdxuayxhgfz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977617.9770048-4154-184015674217826/AnsiballZ_systemd.py'
Nov 24 09:46:58 compute-1 sudo[228740]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:46:58 compute-1 python3.9[228742]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 24 09:46:58 compute-1 systemd[1]: Reloading.
Nov 24 09:46:59 compute-1 systemd-rc-local-generator[228768]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 09:46:59 compute-1 systemd-sysv-generator[228771]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 09:46:59 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:46:59 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:46:59 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:46:59.241 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:46:59 compute-1 ceph-mon[80009]: pgmap v592: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 494 B/s wr, 1 op/s
Nov 24 09:46:59 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:46:59 compute-1 sudo[228740]: pam_unix(sudo:session): session closed for user root
Nov 24 09:46:59 compute-1 sudo[228851]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hhzlvcmdlpagunabsrgacmyzaobwzcsq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977617.9770048-4154-184015674217826/AnsiballZ_systemd.py'
Nov 24 09:46:59 compute-1 sudo[228851]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:46:59 compute-1 python3.9[228853]: ansible-systemd Invoked with state=restarted name=edpm_nova_compute.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 24 09:46:59 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:46:59 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:46:59 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:46:59.872 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:46:59 compute-1 systemd[1]: Reloading.
Nov 24 09:47:00 compute-1 systemd-sysv-generator[228890]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 09:47:00 compute-1 systemd-rc-local-generator[228886]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 09:47:00 compute-1 systemd[1]: Starting nova_compute container...
Nov 24 09:47:00 compute-1 systemd[1]: Started libcrun container.
Nov 24 09:47:00 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/998428069ec542116c2095d13a6eac80a571eded75fdf90504711c791be6bc21/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Nov 24 09:47:00 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/998428069ec542116c2095d13a6eac80a571eded75fdf90504711c791be6bc21/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Nov 24 09:47:00 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/998428069ec542116c2095d13a6eac80a571eded75fdf90504711c791be6bc21/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Nov 24 09:47:00 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/998428069ec542116c2095d13a6eac80a571eded75fdf90504711c791be6bc21/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Nov 24 09:47:00 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/998428069ec542116c2095d13a6eac80a571eded75fdf90504711c791be6bc21/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Nov 24 09:47:00 compute-1 podman[228896]: 2025-11-24 09:47:00.378565023 +0000 UTC m=+0.111622092 container init 4f12c09c2b5a5f528cb4999d6b76c38bdb5027d103adcf8bdc114c4275996463 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, config_id=edpm, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=nova_compute, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, tcib_managed=true)
Nov 24 09:47:00 compute-1 podman[228896]: 2025-11-24 09:47:00.384583574 +0000 UTC m=+0.117640613 container start 4f12c09c2b5a5f528cb4999d6b76c38bdb5027d103adcf8bdc114c4275996463 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, container_name=nova_compute, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, config_id=edpm, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Nov 24 09:47:00 compute-1 podman[228896]: nova_compute
Nov 24 09:47:00 compute-1 nova_compute[228912]: + sudo -E kolla_set_configs
Nov 24 09:47:00 compute-1 systemd[1]: Started nova_compute container.
Nov 24 09:47:00 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 09:47:00 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:47:00 compute-1 sudo[228851]: pam_unix(sudo:session): session closed for user root
Nov 24 09:47:00 compute-1 nova_compute[228912]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Nov 24 09:47:00 compute-1 nova_compute[228912]: INFO:__main__:Validating config file
Nov 24 09:47:00 compute-1 nova_compute[228912]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Nov 24 09:47:00 compute-1 nova_compute[228912]: INFO:__main__:Copying service configuration files
Nov 24 09:47:00 compute-1 nova_compute[228912]: INFO:__main__:Deleting /etc/nova/nova.conf
Nov 24 09:47:00 compute-1 nova_compute[228912]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Nov 24 09:47:00 compute-1 nova_compute[228912]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Nov 24 09:47:00 compute-1 nova_compute[228912]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Nov 24 09:47:00 compute-1 nova_compute[228912]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Nov 24 09:47:00 compute-1 nova_compute[228912]: INFO:__main__:Copying /var/lib/kolla/config_files/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf
Nov 24 09:47:00 compute-1 nova_compute[228912]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf
Nov 24 09:47:00 compute-1 nova_compute[228912]: INFO:__main__:Copying /var/lib/kolla/config_files/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Nov 24 09:47:00 compute-1 nova_compute[228912]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Nov 24 09:47:00 compute-1 nova_compute[228912]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Nov 24 09:47:00 compute-1 nova_compute[228912]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Nov 24 09:47:00 compute-1 nova_compute[228912]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Nov 24 09:47:00 compute-1 nova_compute[228912]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Nov 24 09:47:00 compute-1 nova_compute[228912]: INFO:__main__:Deleting /etc/ceph
Nov 24 09:47:00 compute-1 nova_compute[228912]: INFO:__main__:Creating directory /etc/ceph
Nov 24 09:47:00 compute-1 nova_compute[228912]: INFO:__main__:Setting permission for /etc/ceph
Nov 24 09:47:00 compute-1 nova_compute[228912]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.conf to /etc/ceph/ceph.conf
Nov 24 09:47:00 compute-1 nova_compute[228912]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Nov 24 09:47:00 compute-1 nova_compute[228912]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring
Nov 24 09:47:00 compute-1 nova_compute[228912]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Nov 24 09:47:00 compute-1 nova_compute[228912]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Nov 24 09:47:00 compute-1 nova_compute[228912]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Nov 24 09:47:00 compute-1 nova_compute[228912]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Nov 24 09:47:00 compute-1 nova_compute[228912]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Nov 24 09:47:00 compute-1 nova_compute[228912]: INFO:__main__:Deleting /usr/sbin/iscsiadm
Nov 24 09:47:00 compute-1 nova_compute[228912]: INFO:__main__:Copying /var/lib/kolla/config_files/run-on-host to /usr/sbin/iscsiadm
Nov 24 09:47:00 compute-1 nova_compute[228912]: INFO:__main__:Setting permission for /usr/sbin/iscsiadm
Nov 24 09:47:00 compute-1 nova_compute[228912]: INFO:__main__:Writing out command to execute
Nov 24 09:47:00 compute-1 nova_compute[228912]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Nov 24 09:47:00 compute-1 nova_compute[228912]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Nov 24 09:47:00 compute-1 nova_compute[228912]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Nov 24 09:47:00 compute-1 nova_compute[228912]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Nov 24 09:47:00 compute-1 nova_compute[228912]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
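The copy sequence above is driven by the config.json that kolla_set_configs loaded and validated at 09:47:00. A trimmed sketch of its likely shape, using source/dest paths from the log (the owner/perm values are illustrative assumptions, not read from this host); note that the files placed under /etc/nova/nova.conf.d are consumed in lexical order, so 25-nova-extra.conf overrides 01-nova.conf wherever keys collide:

    {
      "command": "nova-compute",
      "config_files": [
        {"source": "/var/lib/kolla/config_files/01-nova.conf",
         "dest": "/etc/nova/nova.conf.d/01-nova.conf",
         "owner": "nova", "perm": "0600"},
        {"source": "/var/lib/kolla/config_files/ceph/ceph.conf",
         "dest": "/etc/ceph/ceph.conf",
         "owner": "nova", "perm": "0600"},
        {"source": "/var/lib/kolla/config_files/ssh-privatekey",
         "dest": "/var/lib/nova/.ssh/ssh-privatekey",
         "owner": "nova", "perm": "0600"}
      ]
    }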
Nov 24 09:47:00 compute-1 nova_compute[228912]: ++ cat /run_command
Nov 24 09:47:00 compute-1 nova_compute[228912]: + CMD=nova-compute
Nov 24 09:47:00 compute-1 nova_compute[228912]: + ARGS=
Nov 24 09:47:00 compute-1 nova_compute[228912]: + sudo kolla_copy_cacerts
Nov 24 09:47:00 compute-1 nova_compute[228912]: + [[ ! -n '' ]]
Nov 24 09:47:00 compute-1 nova_compute[228912]: + . kolla_extend_start
Nov 24 09:47:00 compute-1 nova_compute[228912]: Running command: 'nova-compute'
Nov 24 09:47:00 compute-1 nova_compute[228912]: + echo 'Running command: '\''nova-compute'\'''
Nov 24 09:47:00 compute-1 nova_compute[228912]: + umask 0022
Nov 24 09:47:00 compute-1 nova_compute[228912]: + exec nova-compute
Nov 24 09:47:00 compute-1 systemd[1]: ceph-84a084c3-61a7-5de7-8207-1f88efa59a64@nfs.cephfs.0.0.compute-1.vvoanr.service: Scheduled restart job, restart counter is at 10.
Nov 24 09:47:00 compute-1 systemd[1]: Stopped Ceph nfs.cephfs.0.0.compute-1.vvoanr for 84a084c3-61a7-5de7-8207-1f88efa59a64.
Nov 24 09:47:00 compute-1 systemd[1]: ceph-84a084c3-61a7-5de7-8207-1f88efa59a64@nfs.cephfs.0.0.compute-1.vvoanr.service: Consumed 1.335s CPU time.
Nov 24 09:47:00 compute-1 systemd[1]: Starting Ceph nfs.cephfs.0.0.compute-1.vvoanr for 84a084c3-61a7-5de7-8207-1f88efa59a64...
Nov 24 09:47:00 compute-1 podman[228997]: 2025-11-24 09:47:00.96905602 +0000 UTC m=+0.038703259 container create 43fd0b496718b6eeeae7d88ea5f91542ae91f5585f1476ce3ea629e7cd469e22 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 24 09:47:01 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/81a4b1d9e246d85aca9deb7a685b356722e725e07097b58faac36c6d269f9e1d/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Nov 24 09:47:01 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/81a4b1d9e246d85aca9deb7a685b356722e725e07097b58faac36c6d269f9e1d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 09:47:01 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/81a4b1d9e246d85aca9deb7a685b356722e725e07097b58faac36c6d269f9e1d/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 09:47:01 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/81a4b1d9e246d85aca9deb7a685b356722e725e07097b58faac36c6d269f9e1d/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.0.0.compute-1.vvoanr-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 09:47:01 compute-1 podman[228997]: 2025-11-24 09:47:01.020986839 +0000 UTC m=+0.090634128 container init 43fd0b496718b6eeeae7d88ea5f91542ae91f5585f1476ce3ea629e7cd469e22 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.build-date=20250325)
Nov 24 09:47:01 compute-1 podman[228997]: 2025-11-24 09:47:01.029927392 +0000 UTC m=+0.099574641 container start 43fd0b496718b6eeeae7d88ea5f91542ae91f5585f1476ce3ea629e7cd469e22 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Nov 24 09:47:01 compute-1 bash[228997]: 43fd0b496718b6eeeae7d88ea5f91542ae91f5585f1476ce3ea629e7cd469e22
Nov 24 09:47:01 compute-1 podman[228997]: 2025-11-24 09:47:00.950870196 +0000 UTC m=+0.020517465 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 09:47:01 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[229012]: 24/11/2025 09:47:01 : epoch 69242995 : compute-1 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Nov 24 09:47:01 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[229012]: 24/11/2025 09:47:01 : epoch 69242995 : compute-1 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Nov 24 09:47:01 compute-1 systemd[1]: Started Ceph nfs.cephfs.0.0.compute-1.vvoanr for 84a084c3-61a7-5de7-8207-1f88efa59a64.
Nov 24 09:47:01 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[229012]: 24/11/2025 09:47:01 : epoch 69242995 : compute-1 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Nov 24 09:47:01 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[229012]: 24/11/2025 09:47:01 : epoch 69242995 : compute-1 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Nov 24 09:47:01 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[229012]: 24/11/2025 09:47:01 : epoch 69242995 : compute-1 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Nov 24 09:47:01 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[229012]: 24/11/2025 09:47:01 : epoch 69242995 : compute-1 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Nov 24 09:47:01 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[229012]: 24/11/2025 09:47:01 : epoch 69242995 : compute-1 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Nov 24 09:47:01 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:47:01 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:47:01 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:47:01.243 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
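The anonymous "HEAD / HTTP/1.0" 200 entries here (and repeated below) are load-balancer-style liveness probes against radosgw's beast frontend from 192.168.122.100/.102. A sketch of an equivalent probe; the port is a placeholder assumption, since the log does not show where beast listens:

    import http.client

    # 8080 stands in for the rgw beast frontend port on this node.
    conn = http.client.HTTPConnection('192.168.122.101', 8080, timeout=5)
    conn.request('HEAD', '/')
    print(conn.getresponse().status)  # 200 means the frontend is answering
    conn.close()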
Nov 24 09:47:01 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[229012]: 24/11/2025 09:47:01 : epoch 69242995 : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 09:47:01 compute-1 ceph-mon[80009]: pgmap v593: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 576 B/s rd, 164 B/s wr, 0 op/s
Nov 24 09:47:01 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:47:01 compute-1 python3.9[229180]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner_healthcheck.service follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 09:47:01 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:47:01 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:47:01 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:47:01.875 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:47:02 compute-1 python3.9[229331]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
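These ansible-ansible.builtin.stat invocations are the edpm_ansible deploy probing whether the nvme-cleaner systemd units and their healthchecks already exist before (re)writing them. Reconstructed as a task with the module arguments exactly as logged (the task name and register variable are invented for illustration):

    # Sketch of the probe above; only the stat arguments come from the log.
    - name: Check for edpm_nova_nvme_cleaner unit
      ansible.builtin.stat:
        path: /etc/systemd/system/edpm_nova_nvme_cleaner.service
        follow: false
        get_checksum: true
        get_mime: true
        get_attributes: true
        checksum_algorithm: sha1
      register: nvme_cleaner_unit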
Nov 24 09:47:02 compute-1 ceph-mon[80009]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 24 09:47:02 compute-1 ceph-mon[80009]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Cumulative writes: 3841 writes, 21K keys, 3841 commit groups, 1.0 writes per commit group, ingest: 0.05 GB, 0.05 MB/s
                                           Cumulative WAL: 3841 writes, 3841 syncs, 1.00 writes per sync, written: 0.05 GB, 0.05 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1374 writes, 6698 keys, 1374 commit groups, 1.0 writes per commit group, ingest: 16.11 MB, 0.03 MB/s
                                           Interval WAL: 1374 writes, 1374 syncs, 1.00 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0    132.9      0.25              0.07        10    0.025       0      0       0.0       0.0
                                             L6      1/0   12.35 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   3.5    140.2    118.9      0.96              0.26         9    0.106     44K   4819       0.0       0.0
                                            Sum      1/0   12.35 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   4.5    111.5    121.7      1.21              0.33        19    0.063     44K   4819       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   5.4    120.5    120.5      0.53              0.16         8    0.066     22K   2563       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   0.0    140.2    118.9      0.96              0.26         9    0.106     44K   4819       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0    133.8      0.24              0.07         9    0.027       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.9      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.032, interval 0.011
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.14 GB write, 0.12 MB/s write, 0.13 GB read, 0.11 MB/s read, 1.2 seconds
                                           Interval compaction: 0.06 GB write, 0.11 MB/s write, 0.06 GB read, 0.11 MB/s read, 0.5 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a5fe7f5350#2 capacity: 304.00 MB usage: 8.50 MB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 0 last_secs: 0.000101 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(480,8.12 MB,2.6726%) FilterBlock(19,131.05 KB,0.0420972%) IndexBlock(19,252.30 KB,0.0810473%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Nov 24 09:47:02 compute-1 nova_compute[228912]: 2025-11-24 09:47:02.914 228916 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Nov 24 09:47:02 compute-1 nova_compute[228912]: 2025-11-24 09:47:02.915 228916 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Nov 24 09:47:02 compute-1 nova_compute[228912]: 2025-11-24 09:47:02.915 228916 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Nov 24 09:47:02 compute-1 nova_compute[228912]: 2025-11-24 09:47:02.915 228916 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs
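os_vif discovers these VIF plugins through setuptools entry points. A minimal sketch of the same discovery using stevedore (treating the 'os_vif' entry-point namespace as an assumption of this sketch, though it matches the plugin names logged above):

    from stevedore import extension

    # Enumerate installed VIF plugins without instantiating them.
    mgr = extension.ExtensionManager(namespace='os_vif', invoke_on_load=False)
    for ext in mgr:
        print(ext.name, ext.plugin)  # e.g. linux_bridge, noop, ovs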
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.065 228916 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.082 228916 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 1 in 0.016s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.082 228916 DEBUG oslo_concurrency.processutils [-] 'grep -F node.session.scan /sbin/iscsiadm' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473
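The grep above is a capability probe, not a failure: /usr/sbin/iscsiadm was replaced with the run-on-host wrapper at 09:47:00, and the string node.session.scan is grepped out of it to decide whether manual iSCSI scanning is supported, so exit code 1 simply means the feature string is absent. The pattern, sketched with oslo.concurrency (same command as logged):

    from oslo_concurrency import processutils

    # check_exit_code=[0, 1] makes "no match" a valid answer instead of an error.
    out, _err = processutils.execute(
        'grep', '-F', 'node.session.scan', '/sbin/iscsiadm',
        check_exit_code=[0, 1])
    supports_manual_scan = bool(out)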
Nov 24 09:47:03 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:47:03 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:47:03 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:47:03.245 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:47:03 compute-1 ceph-mon[80009]: pgmap v594: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 988 B/s rd, 247 B/s wr, 1 op/s
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.552 228916 INFO nova.virt.driver [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'
Nov 24 09:47:03 compute-1 python3.9[229485]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service.requires follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.728 228916 INFO nova.compute.provider_config [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.760 228916 DEBUG oslo_concurrency.lockutils [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.760 228916 DEBUG oslo_concurrency.lockutils [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.760 228916 DEBUG oslo_concurrency.lockutils [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
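Everything from "Full set of CONF" onward is oslo.config's standard startup dump: because debug = True, the service logs every registered option, one line per option, with secret values such as transport_url masked as ****. A minimal reproduction of the mechanism (the options registered here are a tiny illustrative subset, not nova's actual set):

    import logging
    from oslo_config import cfg

    CONF = cfg.ConfigOpts()
    CONF.register_opts([
        cfg.BoolOpt('debug', default=False),
        cfg.IntOpt('report_interval', default=10),
        cfg.StrOpt('transport_url', secret=True),  # secret=True -> logged as ****
    ])
    CONF(args=[])  # a real service also reads /etc/nova/nova.conf and nova.conf.d

    logging.basicConfig(level=logging.DEBUG)
    CONF.log_opt_values(logging.getLogger(__name__), logging.DEBUG)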
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.761 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.761 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.761 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.761 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.762 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.762 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.762 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.762 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.762 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.762 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.763 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.763 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.763 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.763 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.763 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.763 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.763 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.764 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.764 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.764 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] console_host                   = compute-1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.764 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.764 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.764 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.765 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.765 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.765 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.765 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.765 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.766 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.766 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.766 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.766 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.766 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.766 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.767 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.767 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.767 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.767 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.767 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] host                           = compute-1.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.767 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.767 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.768 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.768 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.768 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.768 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.768 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.768 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.768 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.769 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.769 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.769 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.769 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.769 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.769 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.770 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.770 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.770 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.770 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.770 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.770 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.771 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.771 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.771 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.771 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.771 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.771 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.771 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.772 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.772 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.772 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.772 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.772 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.772 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.772 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.773 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.773 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.773 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.773 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.773 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.773 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.774 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] my_block_storage_ip            = 192.168.122.101 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.774 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] my_ip                          = 192.168.122.101 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.774 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.774 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.774 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.774 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.775 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.775 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.775 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.775 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.775 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.775 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.775 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.776 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.776 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.776 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.776 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.776 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.776 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.777 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.777 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.777 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.777 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.777 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.777 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.778 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.778 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.778 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.778 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.778 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.778 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.779 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.779 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.779 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.779 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.779 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.779 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.780 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.780 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.780 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.780 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.780 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.780 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.781 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.781 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.781 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.781 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.781 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.781 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.781 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.782 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.782 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.782 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.782 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.782 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.783 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.783 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.783 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.783 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.783 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.783 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.784 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.784 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.784 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.784 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.784 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.785 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.785 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.785 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.785 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.785 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.785 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.785 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.786 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.786 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.786 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.786 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.786 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.786 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.787 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.787 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.787 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.787 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.787 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.787 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.787 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.788 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.788 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.788 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.788 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.788 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.788 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.788 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.789 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.789 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.789 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.789 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.789 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.789 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.790 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.790 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.790 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.790 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.790 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.790 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.791 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.791 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.791 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.791 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.791 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.791 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.791 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.791 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.792 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.792 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.792 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.792 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.792 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.792 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.793 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.793 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.793 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.793 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.793 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.793 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.794 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.794 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.794 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.794 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.794 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.794 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.795 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.795 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.795 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.795 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.795 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.795 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.795 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.796 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.796 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.796 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.796 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.796 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] cinder.os_region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.796 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.796 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.797 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.797 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.797 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.797 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.797 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.798 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.798 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.798 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.798 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.798 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.798 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.799 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.799 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.799 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.799 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.799 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.799 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.800 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.800 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.800 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.800 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.800 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.800 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.801 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.801 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.801 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.801 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.801 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.801 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.802 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.802 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.802 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.802 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.802 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.802 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.802 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.803 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.803 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.803 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.803 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.803 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.803 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.804 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.804 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.804 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.804 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.804 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.804 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.805 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.805 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.805 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.805 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.805 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.806 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.806 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.806 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.806 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.806 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.807 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.807 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.807 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.807 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.807 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.807 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.808 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.808 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.808 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.808 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.808 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.808 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.809 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.809 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.809 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.809 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.809 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.809 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.809 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.810 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.810 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.810 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.810 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.810 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.810 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.811 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.811 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.811 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.811 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.811 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.811 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.811 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.812 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.812 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.812 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.812 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.812 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.812 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.812 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.812 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.813 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.813 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.813 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.813 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.813 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.813 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.814 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.814 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.814 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.814 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.814 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.814 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.814 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.815 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.815 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.815 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.815 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.815 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.815 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.815 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.816 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.816 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.816 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.816 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.816 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.816 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.817 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.817 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.817 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.817 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.817 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.817 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.818 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.818 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.818 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.818 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.819 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.819 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.819 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.819 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.819 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.819 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.820 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.820 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.820 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.820 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.820 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.820 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.821 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.821 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.821 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.821 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.821 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.822 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.822 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.822 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.822 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.822 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.823 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.823 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.823 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.823 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.823 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.823 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.824 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.824 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.824 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.824 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.824 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.825 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.825 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.825 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] barbican.barbican_region_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.825 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.825 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.826 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.826 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.826 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.826 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.827 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.827 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.827 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.827 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.827 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.828 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.828 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.828 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.828 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.828 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.829 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.829 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.829 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.829 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.829 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.830 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.830 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.830 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.830 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.830 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.831 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.831 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.831 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.831 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.831 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.832 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.832 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.832 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.832 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.832 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.833 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.833 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.833 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.833 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.833 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.834 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.834 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.834 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.834 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.834 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.835 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.835 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.835 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.835 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.835 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.835 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.835 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.836 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.836 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.836 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.836 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.836 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] libvirt.cpu_mode               = host-model log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.836 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.837 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] libvirt.cpu_models             = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.837 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.837 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.837 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.837 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.837 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.837 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.838 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.838 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.838 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.838 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.839 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.839 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.839 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.839 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] libvirt.images_rbd_ceph_conf   = /etc/ceph/ceph.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.839 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.840 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.840 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] libvirt.images_rbd_glance_store_name = default_backend log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.840 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] libvirt.images_rbd_pool        = vms log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 sudo[229510]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.840 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] libvirt.images_type            = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.840 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.841 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.841 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.841 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.841 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.841 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.842 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.842 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.842 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.842 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.842 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.843 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.843 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.843 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.843 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.843 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.843 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.844 228916 WARNING oslo_config.cfg [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Nov 24 09:47:03 compute-1 nova_compute[228912]: live_migration_uri is deprecated for removal in favor of two other options that
Nov 24 09:47:03 compute-1 nova_compute[228912]: allow to change live migration scheme and target URI: ``live_migration_scheme``
Nov 24 09:47:03 compute-1 nova_compute[228912]: and ``live_migration_inbound_addr`` respectively.
Nov 24 09:47:03 compute-1 nova_compute[228912]: ).  Its value may be silently ignored in the future.
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.844 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] libvirt.live_migration_uri     = qemu+tls://%s/system log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.844 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] libvirt.live_migration_with_native_tls = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.844 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.844 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.845 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 sudo[229510]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.845 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.845 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.845 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.845 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.845 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.846 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.846 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.846 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.846 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.846 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.847 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.847 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.847 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.847 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] libvirt.rbd_secret_uuid        = 84a084c3-61a7-5de7-8207-1f88efa59a64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.847 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] libvirt.rbd_user               = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.847 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.848 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 sudo[229510]: pam_unix(sudo:session): session closed for user root
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.848 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.848 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.848 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.848 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.849 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.849 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.849 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.849 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.849 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.850 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.850 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.850 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.850 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.850 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.850 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.851 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.851 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.851 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.851 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.851 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.851 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.851 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.852 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.852 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.852 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.852 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.852 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.852 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.853 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.853 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.853 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.853 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.853 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.853 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.854 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.854 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.854 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.854 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.854 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.854 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.854 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.855 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.855 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.855 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.855 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.855 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.855 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.855 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.856 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.856 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.856 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.856 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.856 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.856 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.856 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.857 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.857 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.857 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.857 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.857 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.857 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.858 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.858 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.858 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.858 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.858 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.858 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.858 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.859 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] placement.auth_url             = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.859 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.859 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.859 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.859 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.859 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.860 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.860 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.860 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.860 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.860 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.860 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.860 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.861 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.861 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.861 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.861 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.861 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.861 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.861 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.862 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.862 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.862 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.862 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.862 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.862 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.862 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.862 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.863 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.863 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.863 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.863 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.863 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.863 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.864 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.864 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.864 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.864 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.864 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.864 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.864 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.865 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.865 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.865 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.865 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.865 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.865 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.865 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.866 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.866 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.866 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.866 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.866 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.866 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.867 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.867 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.867 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.867 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.867 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.867 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.868 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.868 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.868 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.868 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.868 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.868 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.868 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.869 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.869 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.869 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.869 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.869 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.869 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.869 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.870 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.870 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.870 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.870 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.870 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.870 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.870 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.871 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.871 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.871 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.871 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.871 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.871 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.872 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.872 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.872 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.872 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.872 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.872 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.872 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.873 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.873 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.873 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.873 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.873 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.873 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.873 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.874 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.874 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.874 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.874 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.874 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.874 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.875 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.875 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.875 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.875 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.875 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.875 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.876 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.876 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.876 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.876 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.876 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.876 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.876 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.877 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.877 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.877 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.877 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.877 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.877 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.877 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.877 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.878 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.878 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.878 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.878 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.878 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.878 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.878 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:47:03 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:47:03 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:47:03.877 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.879 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.879 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.879 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.879 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.879 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.879 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.879 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.880 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.880 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.880 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.880 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.880 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.880 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.880 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.881 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.881 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.881 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.881 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.881 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.881 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.881 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.881 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.882 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.882 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] vnc.novncproxy_base_url        = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.882 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.882 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.882 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.882 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] vnc.server_proxyclient_address = 192.168.122.101 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.883 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.883 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.883 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.883 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.883 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.883 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.884 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.884 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.884 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.884 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.884 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.884 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.885 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.885 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.885 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.885 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.885 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.886 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.886 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.886 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.886 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.886 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.886 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.886 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.886 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.887 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.887 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.887 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.887 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.887 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.887 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.888 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.888 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.888 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.888 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.888 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.888 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.889 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.889 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.889 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.889 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.889 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.889 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.890 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.890 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.890 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.890 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.890 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.890 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.890 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.891 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.891 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.891 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.891 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.891 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.892 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.892 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.892 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.892 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.892 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.892 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.893 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.893 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.893 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.893 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.893 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.893 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.894 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.894 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.894 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.894 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.894 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.894 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.895 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.895 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.895 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.895 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.895 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.896 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.896 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.896 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.896 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.896 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.896 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.896 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.897 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.897 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.897 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.897 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.897 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.897 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] oslo_limit.auth_url            = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.897 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.898 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.898 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.898 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.898 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.898 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.898 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.898 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.899 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.899 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.899 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.899 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.899 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.899 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.899 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.900 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.900 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.900 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.900 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.900 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.900 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.900 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.901 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.901 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.901 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.901 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.901 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.901 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.901 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.902 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.902 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.902 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.902 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.902 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.902 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.902 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.903 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.903 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.903 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.903 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.903 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.903 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.903 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.904 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.904 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.904 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.904 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.904 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.904 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.904 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.905 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.905 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.905 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.905 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.905 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.905 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.906 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.906 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.906 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.906 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.906 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.907 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.907 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.907 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.907 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.907 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.907 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.908 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.908 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.908 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.908 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.908 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.908 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.908 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.909 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.909 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.909 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.909 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.909 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.909 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.910 228916 INFO nova.service [-] Starting compute node (version 27.5.2-0.20250829104910.6f8decf.el9)
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.929 228916 DEBUG nova.virt.libvirt.host [None req-3ac55ac5-b4ff-454c-9333-47dde30e48fd - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.930 228916 DEBUG nova.virt.libvirt.host [None req-3ac55ac5-b4ff-454c-9333-47dde30e48fd - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.930 228916 DEBUG nova.virt.libvirt.host [None req-3ac55ac5-b4ff-454c-9333-47dde30e48fd - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.930 228916 DEBUG nova.virt.libvirt.host [None req-3ac55ac5-b4ff-454c-9333-47dde30e48fd - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503
Nov 24 09:47:03 compute-1 systemd[1]: Starting libvirt QEMU daemon...
Nov 24 09:47:03 compute-1 systemd[1]: Started libvirt QEMU daemon.
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.990 228916 DEBUG nova.virt.libvirt.host [None req-3ac55ac5-b4ff-454c-9333-47dde30e48fd - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7f694a0d07c0> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.992 228916 DEBUG nova.virt.libvirt.host [None req-3ac55ac5-b4ff-454c-9333-47dde30e48fd - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7f694a0d07c0> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530
Nov 24 09:47:03 compute-1 nova_compute[228912]: 2025-11-24 09:47:03.993 228916 INFO nova.virt.libvirt.driver [None req-3ac55ac5-b4ff-454c-9333-47dde30e48fd - - - - - -] Connection event '1' reason 'None'
Nov 24 09:47:04 compute-1 nova_compute[228912]: 2025-11-24 09:47:04.010 228916 WARNING nova.virt.libvirt.driver [None req-3ac55ac5-b4ff-454c-9333-47dde30e48fd - - - - - -] Cannot update service status on host "compute-1.ctlplane.example.com" since it is not registered.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-1.ctlplane.example.com could not be found.
Nov 24 09:47:04 compute-1 nova_compute[228912]: 2025-11-24 09:47:04.010 228916 DEBUG nova.virt.libvirt.volume.mount [None req-3ac55ac5-b4ff-454c-9333-47dde30e48fd - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130
Nov 24 09:47:04 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:47:04 compute-1 sudo[229713]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jcggcqbgzmkskzkqisecengzcoysevsv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977623.9555368-4334-100894834680004/AnsiballZ_podman_container.py'
Nov 24 09:47:04 compute-1 sudo[229713]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:47:04 compute-1 python3.9[229715]: ansible-containers.podman.podman_container Invoked with name=nova_nvme_cleaner state=absent executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Nov 24 09:47:04 compute-1 sudo[229713]: pam_unix(sudo:session): session closed for user root
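The containers.podman.podman_container task logged above, with state=absent and force_delete=True, amounts to a forced removal of the nova_nvme_cleaner container. A rough Python equivalent, assuming the podman binary is on PATH (an illustration of the effect, not the module's implementation):

    import subprocess

    # Force-remove the container; --ignore keeps the command from failing
    # when no container with this name exists.
    subprocess.run(
        ["podman", "rm", "--force", "--ignore", "nova_nvme_cleaner"],
        check=True)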
Nov 24 09:47:04 compute-1 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 24 09:47:04 compute-1 nova_compute[228912]: 2025-11-24 09:47:04.799 228916 INFO nova.virt.libvirt.host [None req-3ac55ac5-b4ff-454c-9333-47dde30e48fd - - - - - -] Libvirt host capabilities <capabilities>
Nov 24 09:47:04 compute-1 nova_compute[228912]: 
Nov 24 09:47:04 compute-1 nova_compute[228912]:   <host>
Nov 24 09:47:04 compute-1 nova_compute[228912]:     <uuid>719139db-46ba-4050-a77b-5fa732a73807</uuid>
Nov 24 09:47:04 compute-1 nova_compute[228912]:     <cpu>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <arch>x86_64</arch>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model>EPYC-Rome-v4</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <vendor>AMD</vendor>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <microcode version='16777317'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <signature family='23' model='49' stepping='0'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <topology sockets='8' dies='1' clusters='1' cores='1' threads='1'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <maxphysaddr mode='emulate' bits='40'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <feature name='x2apic'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <feature name='tsc-deadline'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <feature name='osxsave'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <feature name='hypervisor'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <feature name='tsc_adjust'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <feature name='spec-ctrl'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <feature name='stibp'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <feature name='arch-capabilities'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <feature name='ssbd'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <feature name='cmp_legacy'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <feature name='topoext'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <feature name='virt-ssbd'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <feature name='lbrv'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <feature name='tsc-scale'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <feature name='vmcb-clean'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <feature name='pause-filter'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <feature name='pfthreshold'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <feature name='svme-addr-chk'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <feature name='rdctl-no'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <feature name='skip-l1dfl-vmentry'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <feature name='mds-no'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <feature name='pschange-mc-no'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <pages unit='KiB' size='4'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <pages unit='KiB' size='2048'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <pages unit='KiB' size='1048576'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:     </cpu>
Nov 24 09:47:04 compute-1 nova_compute[228912]:     <power_management>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <suspend_mem/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:     </power_management>
Nov 24 09:47:04 compute-1 nova_compute[228912]:     <iommu support='no'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:     <migration_features>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <live/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <uri_transports>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <uri_transport>tcp</uri_transport>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <uri_transport>rdma</uri_transport>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </uri_transports>
Nov 24 09:47:04 compute-1 nova_compute[228912]:     </migration_features>
Nov 24 09:47:04 compute-1 nova_compute[228912]:     <topology>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <cells num='1'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <cell id='0'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:           <memory unit='KiB'>7864320</memory>
Nov 24 09:47:04 compute-1 nova_compute[228912]:           <pages unit='KiB' size='4'>1966080</pages>
Nov 24 09:47:04 compute-1 nova_compute[228912]:           <pages unit='KiB' size='2048'>0</pages>
Nov 24 09:47:04 compute-1 nova_compute[228912]:           <pages unit='KiB' size='1048576'>0</pages>
Nov 24 09:47:04 compute-1 nova_compute[228912]:           <distances>
Nov 24 09:47:04 compute-1 nova_compute[228912]:             <sibling id='0' value='10'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:           </distances>
Nov 24 09:47:04 compute-1 nova_compute[228912]:           <cpus num='8'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:             <cpu id='0' socket_id='0' die_id='0' cluster_id='65535' core_id='0' siblings='0'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:             <cpu id='1' socket_id='1' die_id='1' cluster_id='65535' core_id='0' siblings='1'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:             <cpu id='2' socket_id='2' die_id='2' cluster_id='65535' core_id='0' siblings='2'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:             <cpu id='3' socket_id='3' die_id='3' cluster_id='65535' core_id='0' siblings='3'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:             <cpu id='4' socket_id='4' die_id='4' cluster_id='65535' core_id='0' siblings='4'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:             <cpu id='5' socket_id='5' die_id='5' cluster_id='65535' core_id='0' siblings='5'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:             <cpu id='6' socket_id='6' die_id='6' cluster_id='65535' core_id='0' siblings='6'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:             <cpu id='7' socket_id='7' die_id='7' cluster_id='65535' core_id='0' siblings='7'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:           </cpus>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         </cell>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </cells>
Nov 24 09:47:04 compute-1 nova_compute[228912]:     </topology>
Nov 24 09:47:04 compute-1 nova_compute[228912]:     <cache>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <bank id='0' level='2' type='both' size='512' unit='KiB' cpus='0'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <bank id='1' level='2' type='both' size='512' unit='KiB' cpus='1'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <bank id='2' level='2' type='both' size='512' unit='KiB' cpus='2'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <bank id='3' level='2' type='both' size='512' unit='KiB' cpus='3'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <bank id='4' level='2' type='both' size='512' unit='KiB' cpus='4'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <bank id='5' level='2' type='both' size='512' unit='KiB' cpus='5'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <bank id='6' level='2' type='both' size='512' unit='KiB' cpus='6'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <bank id='7' level='2' type='both' size='512' unit='KiB' cpus='7'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <bank id='0' level='3' type='both' size='16' unit='MiB' cpus='0'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <bank id='1' level='3' type='both' size='16' unit='MiB' cpus='1'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <bank id='2' level='3' type='both' size='16' unit='MiB' cpus='2'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <bank id='3' level='3' type='both' size='16' unit='MiB' cpus='3'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <bank id='4' level='3' type='both' size='16' unit='MiB' cpus='4'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <bank id='5' level='3' type='both' size='16' unit='MiB' cpus='5'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <bank id='6' level='3' type='both' size='16' unit='MiB' cpus='6'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <bank id='7' level='3' type='both' size='16' unit='MiB' cpus='7'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:     </cache>
Nov 24 09:47:04 compute-1 nova_compute[228912]:     <secmodel>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model>selinux</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <doi>0</doi>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <baselabel type='kvm'>system_u:system_r:svirt_t:s0</baselabel>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <baselabel type='qemu'>system_u:system_r:svirt_tcg_t:s0</baselabel>
Nov 24 09:47:04 compute-1 nova_compute[228912]:     </secmodel>
Nov 24 09:47:04 compute-1 nova_compute[228912]:     <secmodel>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model>dac</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <doi>0</doi>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <baselabel type='kvm'>+107:+107</baselabel>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <baselabel type='qemu'>+107:+107</baselabel>
Nov 24 09:47:04 compute-1 nova_compute[228912]:     </secmodel>
Nov 24 09:47:04 compute-1 nova_compute[228912]:   </host>
Nov 24 09:47:04 compute-1 nova_compute[228912]: 
Nov 24 09:47:04 compute-1 nova_compute[228912]:   <guest>
Nov 24 09:47:04 compute-1 nova_compute[228912]:     <os_type>hvm</os_type>
Nov 24 09:47:04 compute-1 nova_compute[228912]:     <arch name='i686'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <wordsize>32</wordsize>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <domain type='qemu'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <domain type='kvm'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:     </arch>
Nov 24 09:47:04 compute-1 nova_compute[228912]:     <features>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <pae/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <nonpae/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <acpi default='on' toggle='yes'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <apic default='on' toggle='no'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <cpuselection/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <deviceboot/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <disksnapshot default='on' toggle='no'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <externalSnapshot/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:     </features>
Nov 24 09:47:04 compute-1 nova_compute[228912]:   </guest>
Nov 24 09:47:04 compute-1 nova_compute[228912]: 
Nov 24 09:47:04 compute-1 nova_compute[228912]:   <guest>
Nov 24 09:47:04 compute-1 nova_compute[228912]:     <os_type>hvm</os_type>
Nov 24 09:47:04 compute-1 nova_compute[228912]:     <arch name='x86_64'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <wordsize>64</wordsize>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <domain type='qemu'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <domain type='kvm'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:     </arch>
Nov 24 09:47:04 compute-1 nova_compute[228912]:     <features>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <acpi default='on' toggle='yes'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <apic default='on' toggle='no'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <cpuselection/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <deviceboot/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <disksnapshot default='on' toggle='no'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <externalSnapshot/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:     </features>
Nov 24 09:47:04 compute-1 nova_compute[228912]:   </guest>
Nov 24 09:47:04 compute-1 nova_compute[228912]: 
Nov 24 09:47:04 compute-1 nova_compute[228912]: </capabilities>
Nov 24 09:47:04 compute-1 nova_compute[228912]: 
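The <capabilities> document logged above is the XML that libvirt returns from virConnect.getCapabilities(); nova-compute parses it to learn the host CPU and the guest types the hypervisor supports. A minimal, illustrative sketch of extracting a few of the fields shown, using only the Python standard library (capabilities_xml is assumed to hold the XML text from the log; this is not Nova's actual parsing code):

    import xml.etree.ElementTree as ET

    root = ET.fromstring(capabilities_xml)

    # Host CPU: arch=x86_64, model=EPYC-Rome-v4, vendor=AMD in the dump above.
    cpu = root.find("./host/cpu")
    print(cpu.findtext("arch"), cpu.findtext("model"), cpu.findtext("vendor"))

    # Guest architectures and their non-deprecated machine types.
    for guest in root.findall("guest"):
        arch = guest.find("arch")
        machines = [m.text for m in arch.findall("machine")
                    if m.get("deprecated") != "yes"]
        print(arch.get("name"), machines)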
Nov 24 09:47:04 compute-1 nova_compute[228912]: 2025-11-24 09:47:04.804 228916 DEBUG nova.virt.libvirt.host [None req-3ac55ac5-b4ff-454c-9333-47dde30e48fd - - - - - -] Getting domain capabilities for i686 via machine types: {'pc', 'q35'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
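The hypervisor capabilities dumped next come from libvirt's getDomainCapabilities() call, which Nova issues once per (arch, machine type) pair; the debug line above shows it resolving the machine types {'pc', 'q35'} for i686. A minimal sketch of the same query via the libvirt-python binding (the connection URI and the absence of error handling are assumptions of the sketch):

    import libvirt

    conn = libvirt.open("qemu:///system")
    for machine in ("pc", "q35"):
        # Returns a <domainCapabilities> XML document like the one logged below.
        caps = conn.getDomainCapabilities(
            "/usr/libexec/qemu-kvm", "i686", machine, "kvm")
        print(caps.splitlines()[0])
    conn.close()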
Nov 24 09:47:04 compute-1 nova_compute[228912]: 2025-11-24 09:47:04.827 228916 DEBUG nova.virt.libvirt.host [None req-3ac55ac5-b4ff-454c-9333-47dde30e48fd - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc:
Nov 24 09:47:04 compute-1 nova_compute[228912]: <domainCapabilities>
Nov 24 09:47:04 compute-1 nova_compute[228912]:   <path>/usr/libexec/qemu-kvm</path>
Nov 24 09:47:04 compute-1 nova_compute[228912]:   <domain>kvm</domain>
Nov 24 09:47:04 compute-1 nova_compute[228912]:   <machine>pc-i440fx-rhel7.6.0</machine>
Nov 24 09:47:04 compute-1 nova_compute[228912]:   <arch>i686</arch>
Nov 24 09:47:04 compute-1 nova_compute[228912]:   <vcpu max='240'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:   <iothreads supported='yes'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:   <os supported='yes'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:     <enum name='firmware'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:     <loader supported='yes'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <enum name='type'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <value>rom</value>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <value>pflash</value>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </enum>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <enum name='readonly'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <value>yes</value>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <value>no</value>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </enum>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <enum name='secure'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <value>no</value>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </enum>
Nov 24 09:47:04 compute-1 nova_compute[228912]:     </loader>
Nov 24 09:47:04 compute-1 nova_compute[228912]:   </os>
Nov 24 09:47:04 compute-1 nova_compute[228912]:   <cpu>
Nov 24 09:47:04 compute-1 nova_compute[228912]:     <mode name='host-passthrough' supported='yes'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <enum name='hostPassthroughMigratable'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <value>on</value>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <value>off</value>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </enum>
Nov 24 09:47:04 compute-1 nova_compute[228912]:     </mode>
Nov 24 09:47:04 compute-1 nova_compute[228912]:     <mode name='maximum' supported='yes'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <enum name='maximumMigratable'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <value>on</value>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <value>off</value>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </enum>
Nov 24 09:47:04 compute-1 nova_compute[228912]:     </mode>
Nov 24 09:47:04 compute-1 nova_compute[228912]:     <mode name='host-model' supported='yes'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model fallback='forbid'>EPYC-Rome</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <vendor>AMD</vendor>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <maxphysaddr mode='passthrough' limit='40'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <feature policy='require' name='x2apic'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <feature policy='require' name='tsc-deadline'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <feature policy='require' name='hypervisor'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <feature policy='require' name='tsc_adjust'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <feature policy='require' name='spec-ctrl'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <feature policy='require' name='stibp'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <feature policy='require' name='ssbd'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <feature policy='require' name='cmp_legacy'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <feature policy='require' name='overflow-recov'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <feature policy='require' name='succor'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <feature policy='require' name='ibrs'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <feature policy='require' name='amd-ssbd'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <feature policy='require' name='virt-ssbd'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <feature policy='require' name='lbrv'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <feature policy='require' name='tsc-scale'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <feature policy='require' name='vmcb-clean'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <feature policy='require' name='flushbyasid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <feature policy='require' name='pause-filter'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <feature policy='require' name='pfthreshold'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <feature policy='require' name='svme-addr-chk'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <feature policy='require' name='lfence-always-serializing'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <feature policy='disable' name='xsaves'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:     </mode>
Nov 24 09:47:04 compute-1 nova_compute[228912]:     <mode name='custom' supported='yes'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <blockers model='Broadwell'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='erms'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='hle'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='rtm'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <blockers model='Broadwell-IBRS'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='erms'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='hle'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='rtm'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <blockers model='Broadwell-noTSX'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='erms'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <blockers model='Broadwell-noTSX-IBRS'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='erms'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <blockers model='Broadwell-v1'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='erms'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='hle'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='rtm'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <blockers model='Broadwell-v2'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='erms'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <blockers model='Broadwell-v3'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='erms'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='hle'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='rtm'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <blockers model='Broadwell-v4'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='erms'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <blockers model='Cascadelake-Server'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512bw'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512cd'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512dq'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512f'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512vl'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512vnni'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='erms'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='hle'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='pku'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='rtm'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <blockers model='Cascadelake-Server-noTSX'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512bw'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512cd'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512dq'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512f'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512vl'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512vnni'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='erms'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='ibrs-all'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='pku'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <blockers model='Cascadelake-Server-v1'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512bw'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512cd'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512dq'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512f'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512vl'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512vnni'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='erms'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='hle'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='pku'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='rtm'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <blockers model='Cascadelake-Server-v2'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512bw'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512cd'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512dq'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512f'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512vl'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512vnni'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='erms'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='hle'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='ibrs-all'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='pku'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='rtm'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <blockers model='Cascadelake-Server-v3'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512bw'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512cd'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512dq'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512f'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512vl'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512vnni'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='erms'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='ibrs-all'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='pku'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <blockers model='Cascadelake-Server-v4'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512bw'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512cd'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512dq'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512f'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512vl'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512vnni'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='erms'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='ibrs-all'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='pku'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <blockers model='Cascadelake-Server-v5'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512bw'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512cd'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512dq'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512f'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512vl'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512vnni'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='erms'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='ibrs-all'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='pku'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='xsaves'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <blockers model='Cooperlake'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512-bf16'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512bw'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512cd'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512dq'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512f'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512vl'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512vnni'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='erms'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='hle'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='ibrs-all'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='pku'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='rtm'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='taa-no'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <blockers model='Cooperlake-v1'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512-bf16'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512bw'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512cd'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512dq'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512f'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512vl'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512vnni'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='erms'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='hle'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='ibrs-all'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='pku'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='rtm'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='taa-no'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <blockers model='Cooperlake-v2'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512-bf16'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512bw'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512cd'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512dq'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512f'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512vl'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512vnni'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='erms'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='hle'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='ibrs-all'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='pku'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='rtm'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='taa-no'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='xsaves'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <blockers model='Denverton'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='erms'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='mpx'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <blockers model='Denverton-v1'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='erms'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='mpx'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <blockers model='Denverton-v2'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='erms'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <blockers model='Denverton-v3'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='erms'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='xsaves'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <blockers model='Dhyana-v2'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='xsaves'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <blockers model='EPYC-Genoa'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='amd-psfd'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='auto-ibrs'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512-bf16'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512-vpopcntdq'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512bitalg'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512bw'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512cd'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512dq'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512f'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512ifma'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512vbmi'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512vbmi2'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512vl'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512vnni'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='erms'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='fsrm'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='gfni'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='la57'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='no-nested-data-bp'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='null-sel-clr-base'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='pku'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='stibp-always-on'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='vaes'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='vpclmulqdq'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='xsaves'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <blockers model='EPYC-Genoa-v1'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='amd-psfd'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='auto-ibrs'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512-bf16'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512-vpopcntdq'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512bitalg'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512bw'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512cd'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512dq'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512f'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512ifma'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512vbmi'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512vbmi2'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512vl'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512vnni'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='erms'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='fsrm'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='gfni'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='la57'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='no-nested-data-bp'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='null-sel-clr-base'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='pku'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='stibp-always-on'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='vaes'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='vpclmulqdq'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='xsaves'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <blockers model='EPYC-Milan'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='erms'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='fsrm'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='pku'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='xsaves'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <blockers model='EPYC-Milan-v1'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='erms'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='fsrm'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='pku'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='xsaves'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <blockers model='EPYC-Milan-v2'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='amd-psfd'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='erms'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='fsrm'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='no-nested-data-bp'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='null-sel-clr-base'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='pku'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='stibp-always-on'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='vaes'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='vpclmulqdq'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='xsaves'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <blockers model='EPYC-Rome'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='xsaves'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <blockers model='EPYC-Rome-v1'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='xsaves'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <blockers model='EPYC-Rome-v2'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='xsaves'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <blockers model='EPYC-Rome-v3'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='xsaves'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <blockers model='EPYC-v3'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='xsaves'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <blockers model='EPYC-v4'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='xsaves'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <blockers model='GraniteRapids'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='amx-bf16'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='amx-fp16'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='amx-int8'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='amx-tile'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx-vnni'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512-bf16'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512-fp16'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512-vpopcntdq'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512bitalg'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512bw'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512cd'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512dq'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512f'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512ifma'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512vbmi'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512vbmi2'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512vl'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512vnni'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='bus-lock-detect'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='erms'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='fbsdp-no'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='fsrc'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='fsrm'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='fsrs'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='fzrm'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='gfni'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='hle'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='ibrs-all'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='la57'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='mcdt-no'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='pbrsb-no'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='pku'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='prefetchiti'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='psdp-no'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='rtm'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='sbdr-ssdp-no'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='serialize'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='taa-no'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='tsx-ldtrk'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='vaes'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='vpclmulqdq'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='xfd'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='xsaves'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <blockers model='GraniteRapids-v1'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='amx-bf16'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='amx-fp16'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='amx-int8'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='amx-tile'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx-vnni'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512-bf16'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512-fp16'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512-vpopcntdq'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512bitalg'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512bw'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512cd'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512dq'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512f'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512ifma'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512vbmi'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512vbmi2'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512vl'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512vnni'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='bus-lock-detect'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='erms'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='fbsdp-no'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='fsrc'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='fsrm'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='fsrs'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='fzrm'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='gfni'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='hle'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='ibrs-all'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='la57'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='mcdt-no'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='pbrsb-no'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='pku'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='prefetchiti'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='psdp-no'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='rtm'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='sbdr-ssdp-no'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='serialize'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='taa-no'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='tsx-ldtrk'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='vaes'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='vpclmulqdq'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='xfd'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='xsaves'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <blockers model='GraniteRapids-v2'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='amx-bf16'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='amx-fp16'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='amx-int8'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='amx-tile'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx-vnni'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx10'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx10-128'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx10-256'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx10-512'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512-bf16'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512-fp16'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512-vpopcntdq'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512bitalg'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512bw'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512cd'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512dq'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512f'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512ifma'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512vbmi'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512vbmi2'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512vl'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512vnni'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='bus-lock-detect'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='cldemote'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='erms'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='fbsdp-no'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='fsrc'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='fsrm'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='fsrs'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='fzrm'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='gfni'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='hle'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='ibrs-all'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='la57'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='mcdt-no'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='movdir64b'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='movdiri'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='pbrsb-no'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='pku'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='prefetchiti'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='psdp-no'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='rtm'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='sbdr-ssdp-no'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='serialize'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='ss'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='taa-no'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='tsx-ldtrk'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='vaes'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='vpclmulqdq'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='xfd'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='xsaves'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <blockers model='Haswell'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='erms'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='hle'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='rtm'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <blockers model='Haswell-IBRS'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='erms'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='hle'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='rtm'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <blockers model='Haswell-noTSX'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='erms'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <blockers model='Haswell-noTSX-IBRS'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='erms'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <blockers model='Haswell-v1'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='erms'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='hle'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='rtm'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <blockers model='Haswell-v2'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='erms'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <blockers model='Haswell-v3'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='erms'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='hle'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='rtm'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <blockers model='Haswell-v4'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='erms'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <blockers model='Icelake-Server'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512-vpopcntdq'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512bitalg'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512bw'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512cd'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512dq'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512f'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512vbmi'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512vbmi2'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512vl'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512vnni'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='erms'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='gfni'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='hle'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='la57'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='pku'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='rtm'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='vaes'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='vpclmulqdq'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <blockers model='Icelake-Server-noTSX'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512-vpopcntdq'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512bitalg'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512bw'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512cd'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512dq'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512f'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512vbmi'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512vbmi2'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512vl'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512vnni'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='erms'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='gfni'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='la57'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='pku'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='vaes'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='vpclmulqdq'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <blockers model='Icelake-Server-v1'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512-vpopcntdq'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512bitalg'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512bw'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512cd'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512dq'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512f'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512vbmi'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512vbmi2'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512vl'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512vnni'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='erms'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='gfni'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='hle'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='la57'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='pku'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='rtm'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='vaes'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='vpclmulqdq'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <blockers model='Icelake-Server-v2'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512-vpopcntdq'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512bitalg'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512bw'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512cd'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512dq'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512f'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512vbmi'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512vbmi2'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512vl'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512vnni'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='erms'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='gfni'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='la57'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='pku'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='vaes'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='vpclmulqdq'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <blockers model='Icelake-Server-v3'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512-vpopcntdq'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512bitalg'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512bw'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512cd'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512dq'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512f'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512vbmi'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512vbmi2'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512vl'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512vnni'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='erms'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='gfni'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='ibrs-all'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='la57'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='pku'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='taa-no'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='vaes'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='vpclmulqdq'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <blockers model='Icelake-Server-v4'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512-vpopcntdq'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512bitalg'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512bw'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512cd'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512dq'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512f'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512ifma'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512vbmi'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512vbmi2'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512vl'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512vnni'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='erms'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='fsrm'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='gfni'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='ibrs-all'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='la57'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='pku'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='taa-no'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='vaes'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='vpclmulqdq'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <blockers model='Icelake-Server-v5'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512-vpopcntdq'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512bitalg'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512bw'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512cd'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512dq'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512f'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512ifma'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512vbmi'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512vbmi2'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512vl'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512vnni'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='erms'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='fsrm'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='gfni'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='ibrs-all'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='la57'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='pku'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='taa-no'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='vaes'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='vpclmulqdq'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='xsaves'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <blockers model='Icelake-Server-v6'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512-vpopcntdq'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512bitalg'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512bw'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512cd'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512dq'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512f'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512ifma'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512vbmi'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512vbmi2'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512vl'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512vnni'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='erms'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='fsrm'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='gfni'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='ibrs-all'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='la57'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='pku'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='taa-no'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='vaes'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='vpclmulqdq'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='xsaves'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <blockers model='Icelake-Server-v7'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512-vpopcntdq'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512bitalg'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512bw'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512cd'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512dq'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512f'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512ifma'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512vbmi'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512vbmi2'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512vl'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512vnni'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='erms'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='fsrm'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='gfni'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='hle'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='ibrs-all'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='la57'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='pku'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='rtm'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='taa-no'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='vaes'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='vpclmulqdq'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='xsaves'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <blockers model='IvyBridge'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='erms'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <blockers model='IvyBridge-IBRS'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='erms'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <blockers model='IvyBridge-v1'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='erms'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <blockers model='IvyBridge-v2'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='erms'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <blockers model='KnightsMill'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512-4fmaps'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512-4vnniw'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512-vpopcntdq'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512cd'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512er'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512f'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512pf'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='erms'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='ss'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <blockers model='KnightsMill-v1'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512-4fmaps'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512-4vnniw'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512-vpopcntdq'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512cd'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512er'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512f'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512pf'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='erms'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='ss'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <blockers model='Opteron_G4'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='fma4'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='xop'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <blockers model='Opteron_G4-v1'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='fma4'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='xop'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <blockers model='Opteron_G5'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='fma4'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='tbm'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='xop'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <blockers model='Opteron_G5-v1'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='fma4'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='tbm'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='xop'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <blockers model='SapphireRapids'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='amx-bf16'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='amx-int8'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='amx-tile'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx-vnni'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512-bf16'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512-fp16'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512-vpopcntdq'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512bitalg'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512bw'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512cd'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512dq'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512f'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512ifma'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512vbmi'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512vbmi2'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512vl'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512vnni'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='bus-lock-detect'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='erms'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='fsrc'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='fsrm'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='fsrs'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='fzrm'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='gfni'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='hle'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='ibrs-all'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='la57'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='pku'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='rtm'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='serialize'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='taa-no'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='tsx-ldtrk'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='vaes'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='vpclmulqdq'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='xfd'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='xsaves'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <blockers model='SapphireRapids-v1'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='amx-bf16'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='amx-int8'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='amx-tile'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx-vnni'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512-bf16'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512-fp16'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512-vpopcntdq'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512bitalg'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512bw'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512cd'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512dq'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512f'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512ifma'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512vbmi'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512vbmi2'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512vl'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512vnni'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='bus-lock-detect'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='erms'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='fsrc'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='fsrm'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='fsrs'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='fzrm'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='gfni'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='hle'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='ibrs-all'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='la57'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='pku'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='rtm'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='serialize'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='taa-no'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='tsx-ldtrk'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='vaes'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='vpclmulqdq'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='xfd'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='xsaves'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <blockers model='SapphireRapids-v2'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='amx-bf16'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='amx-int8'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='amx-tile'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx-vnni'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512-bf16'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512-fp16'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512-vpopcntdq'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512bitalg'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512bw'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512cd'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512dq'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512f'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512ifma'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512vbmi'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512vbmi2'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512vl'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512vnni'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='bus-lock-detect'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='erms'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='fbsdp-no'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='fsrc'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='fsrm'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='fsrs'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='fzrm'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='gfni'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='hle'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='ibrs-all'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='la57'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='pku'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='psdp-no'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='rtm'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='sbdr-ssdp-no'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='serialize'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='taa-no'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='tsx-ldtrk'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='vaes'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='vpclmulqdq'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='xfd'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='xsaves'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <blockers model='SapphireRapids-v3'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='amx-bf16'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='amx-int8'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='amx-tile'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx-vnni'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512-bf16'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512-fp16'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512-vpopcntdq'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512bitalg'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512bw'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512cd'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512dq'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512f'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512ifma'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512vbmi'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512vbmi2'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512vl'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512vnni'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='bus-lock-detect'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='cldemote'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='erms'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='fbsdp-no'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='fsrc'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='fsrm'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='fsrs'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='fzrm'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='gfni'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='hle'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='ibrs-all'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='la57'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='movdir64b'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='movdiri'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='pku'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='psdp-no'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='rtm'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='sbdr-ssdp-no'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='serialize'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='ss'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='taa-no'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='tsx-ldtrk'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='vaes'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='vpclmulqdq'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='xfd'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='xsaves'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <blockers model='SierraForest'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx-ifma'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx-ne-convert'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx-vnni'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx-vnni-int8'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='bus-lock-detect'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='cmpccxadd'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='erms'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='fbsdp-no'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='fsrm'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='fsrs'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='gfni'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='ibrs-all'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='mcdt-no'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='pbrsb-no'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='pku'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='psdp-no'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='sbdr-ssdp-no'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='serialize'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='vaes'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='vpclmulqdq'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='xsaves'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <blockers model='SierraForest-v1'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx-ifma'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx-ne-convert'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx-vnni'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx-vnni-int8'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='bus-lock-detect'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='cmpccxadd'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='erms'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='fbsdp-no'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='fsrm'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='fsrs'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='gfni'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='ibrs-all'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='mcdt-no'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='pbrsb-no'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='pku'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='psdp-no'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='sbdr-ssdp-no'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='serialize'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='vaes'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='vpclmulqdq'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='xsaves'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <blockers model='Skylake-Client'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='erms'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='hle'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='rtm'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <blockers model='Skylake-Client-IBRS'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='erms'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='hle'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='rtm'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='erms'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <blockers model='Skylake-Client-v1'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='erms'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='hle'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='rtm'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <blockers model='Skylake-Client-v2'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='erms'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='hle'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='rtm'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <blockers model='Skylake-Client-v3'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='erms'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <blockers model='Skylake-Client-v4'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='erms'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='xsaves'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <blockers model='Skylake-Server'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512bw'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512cd'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512dq'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512f'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512vl'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='erms'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='hle'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='pku'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='rtm'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <blockers model='Skylake-Server-IBRS'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512bw'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512cd'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512dq'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512f'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512vl'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='erms'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='hle'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='pku'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='rtm'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512bw'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512cd'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512dq'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512f'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512vl'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='erms'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='pku'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <blockers model='Skylake-Server-v1'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512bw'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512cd'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512dq'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512f'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512vl'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='erms'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='hle'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='pku'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='rtm'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <blockers model='Skylake-Server-v2'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512bw'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512cd'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512dq'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512f'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512vl'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='erms'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='hle'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='pku'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='rtm'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <blockers model='Skylake-Server-v3'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512bw'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512cd'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512dq'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512f'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512vl'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='erms'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='pku'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <blockers model='Skylake-Server-v4'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512bw'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512cd'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512dq'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512f'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512vl'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='erms'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='pku'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <blockers model='Skylake-Server-v5'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512bw'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512cd'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512dq'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512f'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512vl'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='erms'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='pku'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='xsaves'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <blockers model='Snowridge'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='cldemote'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='core-capability'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='erms'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='gfni'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='movdir64b'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='movdiri'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='mpx'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='split-lock-detect'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <blockers model='Snowridge-v1'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='cldemote'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='core-capability'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='erms'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='gfni'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='movdir64b'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='movdiri'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='mpx'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='split-lock-detect'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <blockers model='Snowridge-v2'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='cldemote'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='core-capability'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='erms'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='gfni'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='movdir64b'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='movdiri'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='split-lock-detect'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <blockers model='Snowridge-v3'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='cldemote'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='core-capability'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='erms'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='gfni'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='movdir64b'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='movdiri'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='split-lock-detect'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='xsaves'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <blockers model='Snowridge-v4'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='cldemote'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='erms'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='gfni'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='movdir64b'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='movdiri'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='xsaves'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <blockers model='athlon'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='3dnow'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='3dnowext'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <blockers model='athlon-v1'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='3dnow'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='3dnowext'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <blockers model='core2duo'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='ss'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <blockers model='core2duo-v1'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='ss'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <blockers model='coreduo'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='ss'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <blockers model='coreduo-v1'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='ss'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <blockers model='n270'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='ss'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <blockers model='n270-v1'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='ss'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <blockers model='phenom'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='3dnow'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='3dnowext'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <blockers model='phenom-v1'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='3dnow'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='3dnowext'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:     </mode>
Nov 24 09:47:04 compute-1 nova_compute[228912]:   </cpu>
Nov 24 09:47:04 compute-1 nova_compute[228912]:   <memoryBacking supported='yes'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:     <enum name='sourceType'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <value>file</value>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <value>anonymous</value>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <value>memfd</value>
Nov 24 09:47:04 compute-1 nova_compute[228912]:     </enum>
Nov 24 09:47:04 compute-1 nova_compute[228912]:   </memoryBacking>
Nov 24 09:47:04 compute-1 nova_compute[228912]:   <devices>
Nov 24 09:47:04 compute-1 nova_compute[228912]:     <disk supported='yes'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <enum name='diskDevice'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <value>disk</value>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <value>cdrom</value>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <value>floppy</value>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <value>lun</value>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </enum>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <enum name='bus'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <value>ide</value>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <value>fdc</value>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <value>scsi</value>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <value>virtio</value>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <value>usb</value>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <value>sata</value>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </enum>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <enum name='model'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <value>virtio</value>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <value>virtio-transitional</value>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <value>virtio-non-transitional</value>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </enum>
Nov 24 09:47:04 compute-1 nova_compute[228912]:     </disk>
Nov 24 09:47:04 compute-1 nova_compute[228912]:     <graphics supported='yes'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <enum name='type'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <value>vnc</value>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <value>egl-headless</value>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <value>dbus</value>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </enum>
Nov 24 09:47:04 compute-1 nova_compute[228912]:     </graphics>
Nov 24 09:47:04 compute-1 nova_compute[228912]:     <video supported='yes'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <enum name='modelType'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <value>vga</value>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <value>cirrus</value>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <value>virtio</value>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <value>none</value>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <value>bochs</value>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <value>ramfb</value>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </enum>
Nov 24 09:47:04 compute-1 nova_compute[228912]:     </video>
Nov 24 09:47:04 compute-1 nova_compute[228912]:     <hostdev supported='yes'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <enum name='mode'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <value>subsystem</value>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </enum>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <enum name='startupPolicy'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <value>default</value>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <value>mandatory</value>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <value>requisite</value>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <value>optional</value>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </enum>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <enum name='subsysType'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <value>usb</value>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <value>pci</value>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <value>scsi</value>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </enum>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <enum name='capsType'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <enum name='pciBackend'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:     </hostdev>
Nov 24 09:47:04 compute-1 nova_compute[228912]:     <rng supported='yes'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <enum name='model'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <value>virtio</value>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <value>virtio-transitional</value>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <value>virtio-non-transitional</value>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </enum>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <enum name='backendModel'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <value>random</value>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <value>egd</value>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <value>builtin</value>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </enum>
Nov 24 09:47:04 compute-1 nova_compute[228912]:     </rng>
Nov 24 09:47:04 compute-1 nova_compute[228912]:     <filesystem supported='yes'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <enum name='driverType'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <value>path</value>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <value>handle</value>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <value>virtiofs</value>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </enum>
Nov 24 09:47:04 compute-1 nova_compute[228912]:     </filesystem>
Nov 24 09:47:04 compute-1 nova_compute[228912]:     <tpm supported='yes'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <enum name='model'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <value>tpm-tis</value>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <value>tpm-crb</value>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </enum>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <enum name='backendModel'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <value>emulator</value>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <value>external</value>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </enum>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <enum name='backendVersion'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <value>2.0</value>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </enum>
Nov 24 09:47:04 compute-1 nova_compute[228912]:     </tpm>
Nov 24 09:47:04 compute-1 nova_compute[228912]:     <redirdev supported='yes'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <enum name='bus'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <value>usb</value>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </enum>
Nov 24 09:47:04 compute-1 nova_compute[228912]:     </redirdev>
Nov 24 09:47:04 compute-1 nova_compute[228912]:     <channel supported='yes'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <enum name='type'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <value>pty</value>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <value>unix</value>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </enum>
Nov 24 09:47:04 compute-1 nova_compute[228912]:     </channel>
Nov 24 09:47:04 compute-1 nova_compute[228912]:     <crypto supported='yes'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <enum name='model'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <enum name='type'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <value>qemu</value>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </enum>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <enum name='backendModel'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <value>builtin</value>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </enum>
Nov 24 09:47:04 compute-1 nova_compute[228912]:     </crypto>
Nov 24 09:47:04 compute-1 nova_compute[228912]:     <interface supported='yes'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <enum name='backendType'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <value>default</value>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <value>passt</value>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </enum>
Nov 24 09:47:04 compute-1 nova_compute[228912]:     </interface>
Nov 24 09:47:04 compute-1 nova_compute[228912]:     <panic supported='yes'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <enum name='model'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <value>isa</value>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <value>hyperv</value>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </enum>
Nov 24 09:47:04 compute-1 nova_compute[228912]:     </panic>
Nov 24 09:47:04 compute-1 nova_compute[228912]:     <console supported='yes'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <enum name='type'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <value>null</value>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <value>vc</value>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <value>pty</value>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <value>dev</value>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <value>file</value>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <value>pipe</value>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <value>stdio</value>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <value>udp</value>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <value>tcp</value>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <value>unix</value>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <value>qemu-vdagent</value>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <value>dbus</value>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </enum>
Nov 24 09:47:04 compute-1 nova_compute[228912]:     </console>
Nov 24 09:47:04 compute-1 nova_compute[228912]:   </devices>
Nov 24 09:47:04 compute-1 nova_compute[228912]:   <features>
Nov 24 09:47:04 compute-1 nova_compute[228912]:     <gic supported='no'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:     <vmcoreinfo supported='yes'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:     <genid supported='yes'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:     <backingStoreInput supported='yes'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:     <backup supported='yes'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:     <async-teardown supported='yes'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:     <ps2 supported='yes'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:     <sev supported='no'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:     <sgx supported='no'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:     <hyperv supported='yes'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <enum name='features'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <value>relaxed</value>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <value>vapic</value>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <value>spinlocks</value>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <value>vpindex</value>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <value>runtime</value>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <value>synic</value>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <value>stimer</value>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <value>reset</value>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <value>vendor_id</value>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <value>frequencies</value>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <value>reenlightenment</value>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <value>tlbflush</value>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <value>ipi</value>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <value>avic</value>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <value>emsr_bitmap</value>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <value>xmm_input</value>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </enum>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <defaults>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <spinlocks>4095</spinlocks>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <stimer_direct>on</stimer_direct>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <tlbflush_direct>on</tlbflush_direct>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <tlbflush_extended>on</tlbflush_extended>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <vendor_id>Linux KVM Hv</vendor_id>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </defaults>
Nov 24 09:47:04 compute-1 nova_compute[228912]:     </hyperv>
Nov 24 09:47:04 compute-1 nova_compute[228912]:     <launchSecurity supported='yes'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <enum name='sectype'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <value>tdx</value>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </enum>
Nov 24 09:47:04 compute-1 nova_compute[228912]:     </launchSecurity>
Nov 24 09:47:04 compute-1 nova_compute[228912]:   </features>
Nov 24 09:47:04 compute-1 nova_compute[228912]: </domainCapabilities>
Nov 24 09:47:04 compute-1 nova_compute[228912]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Nov 24 09:47:04 compute-1 nova_compute[228912]: 2025-11-24 09:47:04.832 228916 DEBUG nova.virt.libvirt.host [None req-3ac55ac5-b4ff-454c-9333-47dde30e48fd - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35:
Nov 24 09:47:04 compute-1 nova_compute[228912]: <domainCapabilities>
Nov 24 09:47:04 compute-1 nova_compute[228912]:   <path>/usr/libexec/qemu-kvm</path>
Nov 24 09:47:04 compute-1 nova_compute[228912]:   <domain>kvm</domain>
Nov 24 09:47:04 compute-1 nova_compute[228912]:   <machine>pc-q35-rhel9.8.0</machine>
Nov 24 09:47:04 compute-1 nova_compute[228912]:   <arch>i686</arch>
Nov 24 09:47:04 compute-1 nova_compute[228912]:   <vcpu max='4096'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:   <iothreads supported='yes'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:   <os supported='yes'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:     <enum name='firmware'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:     <loader supported='yes'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <enum name='type'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <value>rom</value>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <value>pflash</value>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </enum>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <enum name='readonly'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <value>yes</value>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <value>no</value>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </enum>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <enum name='secure'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <value>no</value>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </enum>
Nov 24 09:47:04 compute-1 nova_compute[228912]:     </loader>
Nov 24 09:47:04 compute-1 nova_compute[228912]:   </os>
Nov 24 09:47:04 compute-1 nova_compute[228912]:   <cpu>
Nov 24 09:47:04 compute-1 nova_compute[228912]:     <mode name='host-passthrough' supported='yes'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <enum name='hostPassthroughMigratable'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <value>on</value>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <value>off</value>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </enum>
Nov 24 09:47:04 compute-1 nova_compute[228912]:     </mode>
Nov 24 09:47:04 compute-1 nova_compute[228912]:     <mode name='maximum' supported='yes'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <enum name='maximumMigratable'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <value>on</value>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <value>off</value>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </enum>
Nov 24 09:47:04 compute-1 nova_compute[228912]:     </mode>
Nov 24 09:47:04 compute-1 nova_compute[228912]:     <mode name='host-model' supported='yes'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model fallback='forbid'>EPYC-Rome</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <vendor>AMD</vendor>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <maxphysaddr mode='passthrough' limit='40'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <feature policy='require' name='x2apic'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <feature policy='require' name='tsc-deadline'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <feature policy='require' name='hypervisor'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <feature policy='require' name='tsc_adjust'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <feature policy='require' name='spec-ctrl'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <feature policy='require' name='stibp'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <feature policy='require' name='ssbd'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <feature policy='require' name='cmp_legacy'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <feature policy='require' name='overflow-recov'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <feature policy='require' name='succor'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <feature policy='require' name='ibrs'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <feature policy='require' name='amd-ssbd'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <feature policy='require' name='virt-ssbd'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <feature policy='require' name='lbrv'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <feature policy='require' name='tsc-scale'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <feature policy='require' name='vmcb-clean'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <feature policy='require' name='flushbyasid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <feature policy='require' name='pause-filter'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <feature policy='require' name='pfthreshold'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <feature policy='require' name='svme-addr-chk'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <feature policy='require' name='lfence-always-serializing'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <feature policy='disable' name='xsaves'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:     </mode>
Nov 24 09:47:04 compute-1 nova_compute[228912]:     <mode name='custom' supported='yes'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <blockers model='Broadwell'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='erms'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='hle'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='rtm'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <blockers model='Broadwell-IBRS'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='erms'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='hle'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='rtm'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <blockers model='Broadwell-noTSX'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='erms'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <blockers model='Broadwell-noTSX-IBRS'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='erms'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <blockers model='Broadwell-v1'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='erms'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='hle'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='rtm'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <blockers model='Broadwell-v2'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='erms'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <blockers model='Broadwell-v3'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='erms'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='hle'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='rtm'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <blockers model='Broadwell-v4'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='erms'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <blockers model='Cascadelake-Server'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512bw'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512cd'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512dq'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512f'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512vl'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512vnni'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='erms'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='hle'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='pku'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='rtm'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <blockers model='Cascadelake-Server-noTSX'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512bw'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512cd'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512dq'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512f'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512vl'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512vnni'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='erms'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='ibrs-all'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='pku'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <blockers model='Cascadelake-Server-v1'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512bw'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512cd'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512dq'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512f'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512vl'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512vnni'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='erms'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='hle'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='pku'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='rtm'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <blockers model='Cascadelake-Server-v2'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512bw'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512cd'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512dq'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512f'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512vl'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512vnni'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='erms'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='hle'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='ibrs-all'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='pku'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='rtm'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <blockers model='Cascadelake-Server-v3'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512bw'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512cd'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512dq'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512f'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512vl'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512vnni'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='erms'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='ibrs-all'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='pku'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <blockers model='Cascadelake-Server-v4'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512bw'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512cd'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512dq'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512f'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512vl'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512vnni'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='erms'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='ibrs-all'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='pku'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <blockers model='Cascadelake-Server-v5'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512bw'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512cd'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512dq'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512f'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512vl'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512vnni'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='erms'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='ibrs-all'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='pku'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='xsaves'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <blockers model='Cooperlake'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512-bf16'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512bw'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512cd'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512dq'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512f'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512vl'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512vnni'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='erms'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='hle'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='ibrs-all'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='pku'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='rtm'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='taa-no'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <blockers model='Cooperlake-v1'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512-bf16'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512bw'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512cd'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512dq'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512f'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512vl'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512vnni'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='erms'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='hle'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='ibrs-all'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='pku'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='rtm'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='taa-no'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <blockers model='Cooperlake-v2'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512-bf16'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512bw'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512cd'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512dq'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512f'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512vl'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512vnni'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='erms'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='hle'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='ibrs-all'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='pku'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='rtm'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='taa-no'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='xsaves'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <blockers model='Denverton'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='erms'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='mpx'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <blockers model='Denverton-v1'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='erms'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='mpx'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <blockers model='Denverton-v2'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='erms'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <blockers model='Denverton-v3'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='erms'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='xsaves'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <blockers model='Dhyana-v2'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='xsaves'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <blockers model='EPYC-Genoa'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='amd-psfd'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='auto-ibrs'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512-bf16'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512-vpopcntdq'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512bitalg'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512bw'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512cd'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512dq'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512f'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512ifma'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512vbmi'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512vbmi2'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512vl'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512vnni'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='erms'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='fsrm'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='gfni'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='la57'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='no-nested-data-bp'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='null-sel-clr-base'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='pku'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='stibp-always-on'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='vaes'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='vpclmulqdq'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='xsaves'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <blockers model='EPYC-Genoa-v1'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='amd-psfd'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='auto-ibrs'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512-bf16'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512-vpopcntdq'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512bitalg'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512bw'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512cd'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512dq'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512f'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512ifma'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512vbmi'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512vbmi2'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512vl'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512vnni'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='erms'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='fsrm'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='gfni'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='la57'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='no-nested-data-bp'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='null-sel-clr-base'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='pku'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='stibp-always-on'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='vaes'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='vpclmulqdq'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='xsaves'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <blockers model='EPYC-Milan'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='erms'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='fsrm'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='pku'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='xsaves'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <blockers model='EPYC-Milan-v1'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='erms'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='fsrm'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='pku'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='xsaves'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <blockers model='EPYC-Milan-v2'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='amd-psfd'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='erms'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='fsrm'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='no-nested-data-bp'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='null-sel-clr-base'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='pku'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='stibp-always-on'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='vaes'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='vpclmulqdq'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='xsaves'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <blockers model='EPYC-Rome'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='xsaves'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <blockers model='EPYC-Rome-v1'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='xsaves'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <blockers model='EPYC-Rome-v2'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='xsaves'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <blockers model='EPYC-Rome-v3'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='xsaves'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <blockers model='EPYC-v3'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='xsaves'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <blockers model='EPYC-v4'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='xsaves'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <blockers model='GraniteRapids'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='amx-bf16'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='amx-fp16'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='amx-int8'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='amx-tile'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx-vnni'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512-bf16'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512-fp16'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512-vpopcntdq'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512bitalg'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512bw'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512cd'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512dq'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512f'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512ifma'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512vbmi'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512vbmi2'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512vl'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512vnni'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='bus-lock-detect'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='erms'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='fbsdp-no'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='fsrc'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='fsrm'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='fsrs'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='fzrm'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='gfni'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='hle'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='ibrs-all'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='la57'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='mcdt-no'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='pbrsb-no'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='pku'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='prefetchiti'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='psdp-no'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='rtm'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='sbdr-ssdp-no'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='serialize'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='taa-no'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='tsx-ldtrk'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='vaes'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='vpclmulqdq'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='xfd'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='xsaves'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <blockers model='GraniteRapids-v1'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='amx-bf16'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='amx-fp16'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='amx-int8'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='amx-tile'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx-vnni'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512-bf16'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512-fp16'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512-vpopcntdq'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512bitalg'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512bw'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512cd'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512dq'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512f'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512ifma'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512vbmi'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512vbmi2'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512vl'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512vnni'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='bus-lock-detect'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='erms'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='fbsdp-no'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='fsrc'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='fsrm'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='fsrs'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='fzrm'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='gfni'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='hle'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='ibrs-all'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='la57'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='mcdt-no'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='pbrsb-no'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='pku'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='prefetchiti'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='psdp-no'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='rtm'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='sbdr-ssdp-no'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='serialize'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='taa-no'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='tsx-ldtrk'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='vaes'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='vpclmulqdq'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='xfd'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='xsaves'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <blockers model='GraniteRapids-v2'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='amx-bf16'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='amx-fp16'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='amx-int8'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='amx-tile'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx-vnni'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx10'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx10-128'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx10-256'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx10-512'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512-bf16'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512-fp16'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512-vpopcntdq'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512bitalg'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512bw'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512cd'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512dq'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512f'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512ifma'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512vbmi'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512vbmi2'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512vl'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512vnni'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='bus-lock-detect'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='cldemote'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='erms'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='fbsdp-no'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='fsrc'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='fsrm'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='fsrs'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='fzrm'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='gfni'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='hle'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='ibrs-all'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='la57'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='mcdt-no'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='movdir64b'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='movdiri'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='pbrsb-no'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='pku'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='prefetchiti'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='psdp-no'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='rtm'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='sbdr-ssdp-no'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='serialize'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='ss'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='taa-no'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='tsx-ldtrk'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='vaes'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='vpclmulqdq'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='xfd'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='xsaves'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <blockers model='Haswell'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='erms'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='hle'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='rtm'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <blockers model='Haswell-IBRS'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='erms'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='hle'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='rtm'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <blockers model='Haswell-noTSX'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='erms'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <blockers model='Haswell-noTSX-IBRS'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='erms'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <blockers model='Haswell-v1'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='erms'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='hle'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='rtm'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <blockers model='Haswell-v2'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='erms'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <blockers model='Haswell-v3'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='erms'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='hle'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='rtm'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <blockers model='Haswell-v4'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='erms'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <blockers model='Icelake-Server'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512-vpopcntdq'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512bitalg'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512bw'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512cd'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512dq'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512f'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512vbmi'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512vbmi2'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512vl'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512vnni'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='erms'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='gfni'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='hle'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='la57'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='pku'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='rtm'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='vaes'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='vpclmulqdq'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <blockers model='Icelake-Server-noTSX'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512-vpopcntdq'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512bitalg'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512bw'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512cd'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512dq'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512f'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512vbmi'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512vbmi2'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512vl'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512vnni'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='erms'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='gfni'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='la57'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='pku'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='vaes'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='vpclmulqdq'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <blockers model='Icelake-Server-v1'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512-vpopcntdq'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512bitalg'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512bw'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512cd'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512dq'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512f'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512vbmi'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512vbmi2'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512vl'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512vnni'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='erms'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='gfni'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='hle'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='la57'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='pku'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='rtm'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='vaes'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='vpclmulqdq'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <blockers model='Icelake-Server-v2'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512-vpopcntdq'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512bitalg'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512bw'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512cd'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512dq'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512f'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512vbmi'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512vbmi2'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512vl'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512vnni'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='erms'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='gfni'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='la57'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='pku'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='vaes'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='vpclmulqdq'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <blockers model='Icelake-Server-v3'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512-vpopcntdq'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512bitalg'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512bw'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512cd'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512dq'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512f'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512vbmi'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512vbmi2'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512vl'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512vnni'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='erms'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='gfni'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='ibrs-all'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='la57'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='pku'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='taa-no'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='vaes'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='vpclmulqdq'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <blockers model='Icelake-Server-v4'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512-vpopcntdq'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512bitalg'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512bw'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512cd'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512dq'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512f'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512ifma'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512vbmi'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512vbmi2'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512vl'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512vnni'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='erms'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='fsrm'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='gfni'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='ibrs-all'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='la57'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='pku'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='taa-no'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='vaes'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='vpclmulqdq'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <blockers model='Icelake-Server-v5'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512-vpopcntdq'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512bitalg'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512bw'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512cd'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512dq'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512f'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512ifma'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512vbmi'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512vbmi2'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512vl'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512vnni'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='erms'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='fsrm'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='gfni'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='ibrs-all'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='la57'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='pku'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='taa-no'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='vaes'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='vpclmulqdq'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='xsaves'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <blockers model='Icelake-Server-v6'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512-vpopcntdq'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512bitalg'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512bw'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512cd'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512dq'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512f'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512ifma'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512vbmi'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512vbmi2'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512vl'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512vnni'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='erms'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='fsrm'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='gfni'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='ibrs-all'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='la57'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='pku'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='taa-no'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='vaes'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='vpclmulqdq'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='xsaves'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <blockers model='Icelake-Server-v7'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512-vpopcntdq'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512bitalg'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512bw'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512cd'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512dq'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512f'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512ifma'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512vbmi'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512vbmi2'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512vl'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512vnni'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='erms'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='fsrm'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='gfni'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='hle'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='ibrs-all'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='la57'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='pku'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='rtm'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='taa-no'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='vaes'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='vpclmulqdq'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='xsaves'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <blockers model='IvyBridge'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='erms'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <blockers model='IvyBridge-IBRS'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='erms'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <blockers model='IvyBridge-v1'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='erms'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <blockers model='IvyBridge-v2'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='erms'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <blockers model='KnightsMill'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512-4fmaps'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512-4vnniw'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512-vpopcntdq'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512cd'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512er'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512f'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512pf'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='erms'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='ss'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <blockers model='KnightsMill-v1'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512-4fmaps'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512-4vnniw'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512-vpopcntdq'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512cd'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512er'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512f'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512pf'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='erms'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='ss'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <blockers model='Opteron_G4'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='fma4'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='xop'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <blockers model='Opteron_G4-v1'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='fma4'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='xop'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <blockers model='Opteron_G5'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='fma4'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='tbm'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='xop'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <blockers model='Opteron_G5-v1'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='fma4'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='tbm'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='xop'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <blockers model='SapphireRapids'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='amx-bf16'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='amx-int8'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='amx-tile'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx-vnni'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512-bf16'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512-fp16'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512-vpopcntdq'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512bitalg'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512bw'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512cd'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512dq'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512f'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512ifma'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512vbmi'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512vbmi2'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512vl'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512vnni'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='bus-lock-detect'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='erms'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='fsrc'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='fsrm'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='fsrs'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='fzrm'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='gfni'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='hle'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='ibrs-all'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='la57'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='pku'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='rtm'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='serialize'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='taa-no'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='tsx-ldtrk'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='vaes'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='vpclmulqdq'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='xfd'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='xsaves'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <blockers model='SapphireRapids-v1'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='amx-bf16'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='amx-int8'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='amx-tile'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx-vnni'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512-bf16'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512-fp16'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512-vpopcntdq'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512bitalg'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512bw'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512cd'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512dq'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512f'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512ifma'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512vbmi'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512vbmi2'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512vl'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512vnni'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='bus-lock-detect'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='erms'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='fsrc'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='fsrm'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='fsrs'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='fzrm'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='gfni'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='hle'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='ibrs-all'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='la57'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='pku'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='rtm'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='serialize'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='taa-no'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='tsx-ldtrk'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='vaes'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='vpclmulqdq'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='xfd'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='xsaves'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <blockers model='SapphireRapids-v2'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='amx-bf16'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='amx-int8'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='amx-tile'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx-vnni'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512-bf16'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512-fp16'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512-vpopcntdq'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512bitalg'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512bw'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512cd'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512dq'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512f'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512ifma'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512vbmi'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512vbmi2'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512vl'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512vnni'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='bus-lock-detect'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='erms'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='fbsdp-no'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='fsrc'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='fsrm'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='fsrs'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='fzrm'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='gfni'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='hle'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='ibrs-all'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='la57'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='pku'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='psdp-no'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='rtm'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='sbdr-ssdp-no'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='serialize'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='taa-no'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='tsx-ldtrk'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='vaes'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='vpclmulqdq'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='xfd'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='xsaves'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <blockers model='SapphireRapids-v3'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='amx-bf16'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='amx-int8'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='amx-tile'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx-vnni'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512-bf16'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512-fp16'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512-vpopcntdq'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512bitalg'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512bw'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512cd'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512dq'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512f'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512ifma'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512vbmi'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512vbmi2'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512vl'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512vnni'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='bus-lock-detect'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='cldemote'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='erms'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='fbsdp-no'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='fsrc'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='fsrm'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='fsrs'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='fzrm'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='gfni'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='hle'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='ibrs-all'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='la57'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='movdir64b'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='movdiri'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='pku'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='psdp-no'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='rtm'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='sbdr-ssdp-no'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='serialize'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='ss'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='taa-no'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='tsx-ldtrk'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='vaes'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='vpclmulqdq'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='xfd'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='xsaves'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <blockers model='SierraForest'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx-ifma'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx-ne-convert'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx-vnni'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx-vnni-int8'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='bus-lock-detect'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='cmpccxadd'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='erms'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='fbsdp-no'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='fsrm'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='fsrs'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='gfni'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='ibrs-all'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='mcdt-no'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='pbrsb-no'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='pku'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='psdp-no'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='sbdr-ssdp-no'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='serialize'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='vaes'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='vpclmulqdq'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='xsaves'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <blockers model='SierraForest-v1'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx-ifma'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx-ne-convert'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx-vnni'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx-vnni-int8'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='bus-lock-detect'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='cmpccxadd'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='erms'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='fbsdp-no'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='fsrm'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='fsrs'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='gfni'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='ibrs-all'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='mcdt-no'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='pbrsb-no'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='pku'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='psdp-no'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='sbdr-ssdp-no'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='serialize'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='vaes'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='vpclmulqdq'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='xsaves'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <blockers model='Skylake-Client'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='erms'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='hle'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='rtm'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <blockers model='Skylake-Client-IBRS'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='erms'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='hle'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='rtm'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='erms'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <blockers model='Skylake-Client-v1'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='erms'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='hle'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='rtm'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <blockers model='Skylake-Client-v2'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='erms'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='hle'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='rtm'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <blockers model='Skylake-Client-v3'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='erms'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <blockers model='Skylake-Client-v4'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='erms'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='xsaves'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <blockers model='Skylake-Server'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512bw'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512cd'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512dq'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512f'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512vl'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='erms'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='hle'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='pku'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='rtm'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <blockers model='Skylake-Server-IBRS'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512bw'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512cd'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512dq'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512f'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512vl'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='erms'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='hle'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='pku'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='rtm'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512bw'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512cd'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512dq'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512f'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512vl'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='erms'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='pku'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <blockers model='Skylake-Server-v1'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512bw'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512cd'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512dq'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512f'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512vl'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='erms'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='hle'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='pku'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='rtm'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <blockers model='Skylake-Server-v2'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512bw'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512cd'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512dq'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512f'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512vl'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='erms'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='hle'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='pku'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='rtm'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <blockers model='Skylake-Server-v3'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512bw'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512cd'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512dq'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512f'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512vl'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='erms'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='pku'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <blockers model='Skylake-Server-v4'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512bw'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512cd'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512dq'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512f'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512vl'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='erms'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='pku'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <blockers model='Skylake-Server-v5'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512bw'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512cd'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512dq'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512f'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512vl'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='erms'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='pku'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='xsaves'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <blockers model='Snowridge'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='cldemote'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='core-capability'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='erms'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='gfni'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='movdir64b'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='movdiri'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='mpx'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='split-lock-detect'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <blockers model='Snowridge-v1'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='cldemote'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='core-capability'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='erms'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='gfni'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='movdir64b'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='movdiri'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='mpx'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='split-lock-detect'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <blockers model='Snowridge-v2'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='cldemote'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='core-capability'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='erms'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='gfni'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='movdir64b'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='movdiri'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='split-lock-detect'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <blockers model='Snowridge-v3'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='cldemote'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='core-capability'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='erms'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='gfni'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='movdir64b'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='movdiri'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='split-lock-detect'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='xsaves'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <blockers model='Snowridge-v4'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='cldemote'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='erms'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='gfni'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='movdir64b'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='movdiri'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='xsaves'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <blockers model='athlon'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='3dnow'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='3dnowext'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <blockers model='athlon-v1'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='3dnow'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='3dnowext'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <blockers model='core2duo'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='ss'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <blockers model='core2duo-v1'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='ss'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <blockers model='coreduo'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='ss'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <blockers model='coreduo-v1'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='ss'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <blockers model='n270'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='ss'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <blockers model='n270-v1'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='ss'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <blockers model='phenom'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='3dnow'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='3dnowext'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <blockers model='phenom-v1'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='3dnow'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='3dnowext'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:     </mode>
Nov 24 09:47:04 compute-1 nova_compute[228912]:   </cpu>
Nov 24 09:47:04 compute-1 nova_compute[228912]:   <memoryBacking supported='yes'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:     <enum name='sourceType'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <value>file</value>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <value>anonymous</value>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <value>memfd</value>
Nov 24 09:47:04 compute-1 nova_compute[228912]:     </enum>
Nov 24 09:47:04 compute-1 nova_compute[228912]:   </memoryBacking>
Nov 24 09:47:04 compute-1 nova_compute[228912]:   <devices>
Nov 24 09:47:04 compute-1 nova_compute[228912]:     <disk supported='yes'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <enum name='diskDevice'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <value>disk</value>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <value>cdrom</value>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <value>floppy</value>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <value>lun</value>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </enum>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <enum name='bus'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <value>fdc</value>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <value>scsi</value>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <value>virtio</value>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <value>usb</value>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <value>sata</value>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </enum>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <enum name='model'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <value>virtio</value>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <value>virtio-transitional</value>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <value>virtio-non-transitional</value>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </enum>
Nov 24 09:47:04 compute-1 nova_compute[228912]:     </disk>
Nov 24 09:47:04 compute-1 nova_compute[228912]:     <graphics supported='yes'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <enum name='type'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <value>vnc</value>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <value>egl-headless</value>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <value>dbus</value>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </enum>
Nov 24 09:47:04 compute-1 nova_compute[228912]:     </graphics>
Nov 24 09:47:04 compute-1 nova_compute[228912]:     <video supported='yes'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <enum name='modelType'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <value>vga</value>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <value>cirrus</value>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <value>virtio</value>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <value>none</value>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <value>bochs</value>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <value>ramfb</value>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </enum>
Nov 24 09:47:04 compute-1 nova_compute[228912]:     </video>
Nov 24 09:47:04 compute-1 nova_compute[228912]:     <hostdev supported='yes'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <enum name='mode'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <value>subsystem</value>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </enum>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <enum name='startupPolicy'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <value>default</value>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <value>mandatory</value>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <value>requisite</value>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <value>optional</value>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </enum>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <enum name='subsysType'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <value>usb</value>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <value>pci</value>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <value>scsi</value>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </enum>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <enum name='capsType'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <enum name='pciBackend'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:     </hostdev>
Nov 24 09:47:04 compute-1 nova_compute[228912]:     <rng supported='yes'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <enum name='model'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <value>virtio</value>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <value>virtio-transitional</value>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <value>virtio-non-transitional</value>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </enum>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <enum name='backendModel'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <value>random</value>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <value>egd</value>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <value>builtin</value>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </enum>
Nov 24 09:47:04 compute-1 nova_compute[228912]:     </rng>
Nov 24 09:47:04 compute-1 nova_compute[228912]:     <filesystem supported='yes'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <enum name='driverType'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <value>path</value>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <value>handle</value>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <value>virtiofs</value>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </enum>
Nov 24 09:47:04 compute-1 nova_compute[228912]:     </filesystem>
Nov 24 09:47:04 compute-1 nova_compute[228912]:     <tpm supported='yes'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <enum name='model'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <value>tpm-tis</value>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <value>tpm-crb</value>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </enum>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <enum name='backendModel'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <value>emulator</value>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <value>external</value>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </enum>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <enum name='backendVersion'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <value>2.0</value>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </enum>
Nov 24 09:47:04 compute-1 nova_compute[228912]:     </tpm>
Nov 24 09:47:04 compute-1 nova_compute[228912]:     <redirdev supported='yes'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <enum name='bus'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <value>usb</value>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </enum>
Nov 24 09:47:04 compute-1 nova_compute[228912]:     </redirdev>
Nov 24 09:47:04 compute-1 nova_compute[228912]:     <channel supported='yes'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <enum name='type'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <value>pty</value>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <value>unix</value>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </enum>
Nov 24 09:47:04 compute-1 nova_compute[228912]:     </channel>
Nov 24 09:47:04 compute-1 nova_compute[228912]:     <crypto supported='yes'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <enum name='model'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <enum name='type'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <value>qemu</value>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </enum>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <enum name='backendModel'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <value>builtin</value>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </enum>
Nov 24 09:47:04 compute-1 nova_compute[228912]:     </crypto>
Nov 24 09:47:04 compute-1 nova_compute[228912]:     <interface supported='yes'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <enum name='backendType'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <value>default</value>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <value>passt</value>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </enum>
Nov 24 09:47:04 compute-1 nova_compute[228912]:     </interface>
Nov 24 09:47:04 compute-1 nova_compute[228912]:     <panic supported='yes'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <enum name='model'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <value>isa</value>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <value>hyperv</value>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </enum>
Nov 24 09:47:04 compute-1 nova_compute[228912]:     </panic>
Nov 24 09:47:04 compute-1 nova_compute[228912]:     <console supported='yes'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <enum name='type'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <value>null</value>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <value>vc</value>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <value>pty</value>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <value>dev</value>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <value>file</value>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <value>pipe</value>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <value>stdio</value>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <value>udp</value>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <value>tcp</value>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <value>unix</value>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <value>qemu-vdagent</value>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <value>dbus</value>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </enum>
Nov 24 09:47:04 compute-1 nova_compute[228912]:     </console>
Nov 24 09:47:04 compute-1 nova_compute[228912]:   </devices>
Nov 24 09:47:04 compute-1 nova_compute[228912]:   <features>
Nov 24 09:47:04 compute-1 nova_compute[228912]:     <gic supported='no'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:     <vmcoreinfo supported='yes'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:     <genid supported='yes'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:     <backingStoreInput supported='yes'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:     <backup supported='yes'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:     <async-teardown supported='yes'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:     <ps2 supported='yes'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:     <sev supported='no'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:     <sgx supported='no'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:     <hyperv supported='yes'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <enum name='features'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <value>relaxed</value>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <value>vapic</value>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <value>spinlocks</value>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <value>vpindex</value>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <value>runtime</value>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <value>synic</value>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <value>stimer</value>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <value>reset</value>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <value>vendor_id</value>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <value>frequencies</value>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <value>reenlightenment</value>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <value>tlbflush</value>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <value>ipi</value>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <value>avic</value>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <value>emsr_bitmap</value>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <value>xmm_input</value>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </enum>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <defaults>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <spinlocks>4095</spinlocks>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <stimer_direct>on</stimer_direct>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <tlbflush_direct>on</tlbflush_direct>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <tlbflush_extended>on</tlbflush_extended>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <vendor_id>Linux KVM Hv</vendor_id>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </defaults>
Nov 24 09:47:04 compute-1 nova_compute[228912]:     </hyperv>
Nov 24 09:47:04 compute-1 nova_compute[228912]:     <launchSecurity supported='yes'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <enum name='sectype'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <value>tdx</value>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </enum>
Nov 24 09:47:04 compute-1 nova_compute[228912]:     </launchSecurity>
Nov 24 09:47:04 compute-1 nova_compute[228912]:   </features>
Nov 24 09:47:04 compute-1 nova_compute[228912]: </domainCapabilities>
Nov 24 09:47:04 compute-1 nova_compute[228912]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Nov 24 09:47:04 compute-1 nova_compute[228912]: 2025-11-24 09:47:04.858 228916 DEBUG nova.virt.libvirt.host [None req-3ac55ac5-b4ff-454c-9333-47dde30e48fd - - - - - -] Getting domain capabilities for x86_64 via machine types: {'pc', 'q35'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Nov 24 09:47:04 compute-1 nova_compute[228912]: 2025-11-24 09:47:04.862 228916 DEBUG nova.virt.libvirt.host [None req-3ac55ac5-b4ff-454c-9333-47dde30e48fd - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=pc:
Nov 24 09:47:04 compute-1 nova_compute[228912]: <domainCapabilities>
Nov 24 09:47:04 compute-1 nova_compute[228912]:   <path>/usr/libexec/qemu-kvm</path>
Nov 24 09:47:04 compute-1 nova_compute[228912]:   <domain>kvm</domain>
Nov 24 09:47:04 compute-1 nova_compute[228912]:   <machine>pc-i440fx-rhel7.6.0</machine>
Nov 24 09:47:04 compute-1 nova_compute[228912]:   <arch>x86_64</arch>
Nov 24 09:47:04 compute-1 nova_compute[228912]:   <vcpu max='240'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:   <iothreads supported='yes'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:   <os supported='yes'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:     <enum name='firmware'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:     <loader supported='yes'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <enum name='type'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <value>rom</value>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <value>pflash</value>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </enum>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <enum name='readonly'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <value>yes</value>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <value>no</value>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </enum>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <enum name='secure'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <value>no</value>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </enum>
Nov 24 09:47:04 compute-1 nova_compute[228912]:     </loader>
Nov 24 09:47:04 compute-1 nova_compute[228912]:   </os>
Nov 24 09:47:04 compute-1 nova_compute[228912]:   <cpu>
Nov 24 09:47:04 compute-1 nova_compute[228912]:     <mode name='host-passthrough' supported='yes'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <enum name='hostPassthroughMigratable'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <value>on</value>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <value>off</value>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </enum>
Nov 24 09:47:04 compute-1 nova_compute[228912]:     </mode>
Nov 24 09:47:04 compute-1 nova_compute[228912]:     <mode name='maximum' supported='yes'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <enum name='maximumMigratable'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <value>on</value>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <value>off</value>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </enum>
Nov 24 09:47:04 compute-1 nova_compute[228912]:     </mode>
Nov 24 09:47:04 compute-1 nova_compute[228912]:     <mode name='host-model' supported='yes'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model fallback='forbid'>EPYC-Rome</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <vendor>AMD</vendor>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <maxphysaddr mode='passthrough' limit='40'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <feature policy='require' name='x2apic'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <feature policy='require' name='tsc-deadline'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <feature policy='require' name='hypervisor'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <feature policy='require' name='tsc_adjust'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <feature policy='require' name='spec-ctrl'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <feature policy='require' name='stibp'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <feature policy='require' name='ssbd'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <feature policy='require' name='cmp_legacy'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <feature policy='require' name='overflow-recov'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <feature policy='require' name='succor'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <feature policy='require' name='ibrs'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <feature policy='require' name='amd-ssbd'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <feature policy='require' name='virt-ssbd'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <feature policy='require' name='lbrv'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <feature policy='require' name='tsc-scale'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <feature policy='require' name='vmcb-clean'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <feature policy='require' name='flushbyasid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <feature policy='require' name='pause-filter'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <feature policy='require' name='pfthreshold'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <feature policy='require' name='svme-addr-chk'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <feature policy='require' name='lfence-always-serializing'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <feature policy='disable' name='xsaves'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:     </mode>
Nov 24 09:47:04 compute-1 nova_compute[228912]:     <mode name='custom' supported='yes'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <blockers model='Broadwell'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='erms'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='hle'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='rtm'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <blockers model='Broadwell-IBRS'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='erms'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='hle'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='rtm'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <blockers model='Broadwell-noTSX'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='erms'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <blockers model='Broadwell-noTSX-IBRS'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='erms'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <blockers model='Broadwell-v1'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='erms'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='hle'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='rtm'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <blockers model='Broadwell-v2'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='erms'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <blockers model='Broadwell-v3'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='erms'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='hle'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='rtm'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <blockers model='Broadwell-v4'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='erms'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <blockers model='Cascadelake-Server'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512bw'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512cd'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512dq'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512f'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512vl'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512vnni'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='erms'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='hle'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='pku'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='rtm'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <blockers model='Cascadelake-Server-noTSX'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512bw'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512cd'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512dq'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512f'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512vl'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512vnni'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='erms'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='ibrs-all'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='pku'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <blockers model='Cascadelake-Server-v1'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512bw'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512cd'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512dq'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512f'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512vl'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512vnni'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='erms'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='hle'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='pku'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='rtm'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <blockers model='Cascadelake-Server-v2'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512bw'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512cd'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512dq'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512f'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512vl'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512vnni'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='erms'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='hle'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='ibrs-all'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='pku'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='rtm'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <blockers model='Cascadelake-Server-v3'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512bw'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512cd'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512dq'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512f'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512vl'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512vnni'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='erms'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='ibrs-all'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='pku'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <blockers model='Cascadelake-Server-v4'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512bw'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512cd'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512dq'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512f'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512vl'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512vnni'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='erms'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='ibrs-all'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='pku'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <blockers model='Cascadelake-Server-v5'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512bw'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512cd'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512dq'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512f'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512vl'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512vnni'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='erms'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='ibrs-all'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='pku'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='xsaves'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <blockers model='Cooperlake'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512-bf16'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512bw'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512cd'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512dq'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512f'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512vl'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512vnni'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='erms'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='hle'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='ibrs-all'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='pku'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='rtm'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='taa-no'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <blockers model='Cooperlake-v1'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512-bf16'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512bw'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512cd'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512dq'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512f'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512vl'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512vnni'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='erms'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='hle'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='ibrs-all'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='pku'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='rtm'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='taa-no'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <blockers model='Cooperlake-v2'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512-bf16'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512bw'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512cd'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512dq'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512f'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512vl'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512vnni'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='erms'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='hle'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='ibrs-all'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='pku'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='rtm'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='taa-no'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='xsaves'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <blockers model='Denverton'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='erms'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='mpx'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <blockers model='Denverton-v1'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='erms'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='mpx'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <blockers model='Denverton-v2'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='erms'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <blockers model='Denverton-v3'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='erms'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='xsaves'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <blockers model='Dhyana-v2'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='xsaves'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <blockers model='EPYC-Genoa'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='amd-psfd'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='auto-ibrs'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512-bf16'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512-vpopcntdq'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512bitalg'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512bw'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512cd'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512dq'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512f'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512ifma'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512vbmi'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512vbmi2'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512vl'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512vnni'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='erms'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='fsrm'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='gfni'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='la57'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='no-nested-data-bp'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='null-sel-clr-base'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='pku'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='stibp-always-on'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='vaes'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='vpclmulqdq'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='xsaves'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <blockers model='EPYC-Genoa-v1'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='amd-psfd'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='auto-ibrs'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512-bf16'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512-vpopcntdq'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512bitalg'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512bw'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512cd'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512dq'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512f'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512ifma'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512vbmi'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512vbmi2'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512vl'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512vnni'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='erms'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='fsrm'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='gfni'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='la57'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='no-nested-data-bp'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='null-sel-clr-base'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='pku'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='stibp-always-on'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='vaes'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='vpclmulqdq'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='xsaves'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <blockers model='EPYC-Milan'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='erms'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='fsrm'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='pku'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='xsaves'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <blockers model='EPYC-Milan-v1'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='erms'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='fsrm'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='pku'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='xsaves'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <blockers model='EPYC-Milan-v2'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='amd-psfd'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='erms'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='fsrm'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='no-nested-data-bp'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='null-sel-clr-base'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='pku'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='stibp-always-on'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='vaes'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='vpclmulqdq'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='xsaves'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <blockers model='EPYC-Rome'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='xsaves'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <blockers model='EPYC-Rome-v1'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='xsaves'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <blockers model='EPYC-Rome-v2'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='xsaves'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <blockers model='EPYC-Rome-v3'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='xsaves'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <blockers model='EPYC-v3'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='xsaves'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <blockers model='EPYC-v4'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='xsaves'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <blockers model='GraniteRapids'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='amx-bf16'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='amx-fp16'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='amx-int8'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='amx-tile'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx-vnni'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512-bf16'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512-fp16'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512-vpopcntdq'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512bitalg'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512bw'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512cd'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512dq'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512f'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512ifma'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512vbmi'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512vbmi2'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512vl'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512vnni'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='bus-lock-detect'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='erms'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='fbsdp-no'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='fsrc'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='fsrm'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='fsrs'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='fzrm'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='gfni'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='hle'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='ibrs-all'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='la57'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='mcdt-no'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='pbrsb-no'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='pku'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='prefetchiti'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='psdp-no'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='rtm'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='sbdr-ssdp-no'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='serialize'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='taa-no'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='tsx-ldtrk'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='vaes'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='vpclmulqdq'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='xfd'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='xsaves'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <blockers model='GraniteRapids-v1'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='amx-bf16'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='amx-fp16'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='amx-int8'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='amx-tile'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx-vnni'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512-bf16'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512-fp16'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512-vpopcntdq'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512bitalg'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512bw'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512cd'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512dq'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512f'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512ifma'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512vbmi'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512vbmi2'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512vl'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512vnni'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='bus-lock-detect'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='erms'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='fbsdp-no'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='fsrc'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='fsrm'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='fsrs'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='fzrm'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='gfni'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='hle'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='ibrs-all'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='la57'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='mcdt-no'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='pbrsb-no'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='pku'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='prefetchiti'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='psdp-no'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='rtm'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='sbdr-ssdp-no'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='serialize'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='taa-no'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='tsx-ldtrk'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='vaes'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='vpclmulqdq'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='xfd'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='xsaves'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <blockers model='GraniteRapids-v2'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='amx-bf16'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='amx-fp16'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='amx-int8'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='amx-tile'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx-vnni'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx10'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx10-128'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx10-256'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx10-512'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512-bf16'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512-fp16'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512-vpopcntdq'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512bitalg'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512bw'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512cd'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512dq'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512f'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512ifma'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512vbmi'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512vbmi2'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512vl'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512vnni'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='bus-lock-detect'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='cldemote'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='erms'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='fbsdp-no'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='fsrc'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='fsrm'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='fsrs'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='fzrm'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='gfni'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='hle'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='ibrs-all'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='la57'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='mcdt-no'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='movdir64b'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='movdiri'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='pbrsb-no'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='pku'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='prefetchiti'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='psdp-no'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='rtm'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='sbdr-ssdp-no'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='serialize'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='ss'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='taa-no'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='tsx-ldtrk'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='vaes'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='vpclmulqdq'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='xfd'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='xsaves'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <blockers model='Haswell'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='erms'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='hle'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='rtm'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <blockers model='Haswell-IBRS'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='erms'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='hle'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='rtm'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <blockers model='Haswell-noTSX'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='erms'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <blockers model='Haswell-noTSX-IBRS'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='erms'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <blockers model='Haswell-v1'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='erms'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='hle'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='rtm'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <blockers model='Haswell-v2'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='erms'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <blockers model='Haswell-v3'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='erms'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='hle'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='rtm'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <blockers model='Haswell-v4'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='erms'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <blockers model='Icelake-Server'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512-vpopcntdq'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512bitalg'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512bw'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512cd'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512dq'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512f'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512vbmi'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512vbmi2'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512vl'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512vnni'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='erms'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='gfni'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='hle'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='la57'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='pku'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='rtm'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='vaes'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='vpclmulqdq'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <blockers model='Icelake-Server-noTSX'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512-vpopcntdq'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512bitalg'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512bw'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512cd'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512dq'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512f'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512vbmi'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512vbmi2'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512vl'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512vnni'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='erms'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='gfni'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='la57'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='pku'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='vaes'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='vpclmulqdq'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <blockers model='Icelake-Server-v1'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512-vpopcntdq'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512bitalg'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512bw'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512cd'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512dq'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512f'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512vbmi'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512vbmi2'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512vl'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512vnni'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='erms'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='gfni'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='hle'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='la57'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='pku'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='rtm'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='vaes'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='vpclmulqdq'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <blockers model='Icelake-Server-v2'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512-vpopcntdq'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512bitalg'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512bw'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512cd'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512dq'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512f'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512vbmi'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512vbmi2'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512vl'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512vnni'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='erms'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='gfni'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='la57'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='pku'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='vaes'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='vpclmulqdq'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <blockers model='Icelake-Server-v3'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512-vpopcntdq'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512bitalg'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512bw'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512cd'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512dq'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512f'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512vbmi'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512vbmi2'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512vl'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512vnni'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='erms'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='gfni'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='ibrs-all'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='la57'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='pku'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='taa-no'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='vaes'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='vpclmulqdq'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <blockers model='Icelake-Server-v4'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512-vpopcntdq'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512bitalg'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512bw'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512cd'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512dq'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512f'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512ifma'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512vbmi'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512vbmi2'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512vl'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512vnni'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='erms'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='fsrm'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='gfni'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='ibrs-all'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='la57'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='pku'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='taa-no'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='vaes'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='vpclmulqdq'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <blockers model='Icelake-Server-v5'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512-vpopcntdq'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512bitalg'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512bw'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512cd'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512dq'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512f'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512ifma'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512vbmi'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512vbmi2'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512vl'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512vnni'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='erms'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='fsrm'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='gfni'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='ibrs-all'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='la57'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='pku'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='taa-no'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='vaes'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='vpclmulqdq'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='xsaves'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <blockers model='Icelake-Server-v6'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512-vpopcntdq'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512bitalg'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512bw'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512cd'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512dq'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512f'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512ifma'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512vbmi'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512vbmi2'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512vl'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512vnni'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='erms'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='fsrm'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='gfni'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='ibrs-all'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='la57'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='pku'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='taa-no'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='vaes'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='vpclmulqdq'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='xsaves'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <blockers model='Icelake-Server-v7'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512-vpopcntdq'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512bitalg'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512bw'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512cd'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512dq'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512f'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512ifma'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512vbmi'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512vbmi2'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512vl'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512vnni'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='erms'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='fsrm'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='gfni'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='hle'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='ibrs-all'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='la57'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='pku'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='rtm'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='taa-no'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='vaes'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='vpclmulqdq'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='xsaves'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <blockers model='IvyBridge'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='erms'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <blockers model='IvyBridge-IBRS'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='erms'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <blockers model='IvyBridge-v1'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='erms'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <blockers model='IvyBridge-v2'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='erms'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <blockers model='KnightsMill'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512-4fmaps'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512-4vnniw'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512-vpopcntdq'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512cd'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512er'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512f'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512pf'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='erms'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='ss'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <blockers model='KnightsMill-v1'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512-4fmaps'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512-4vnniw'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512-vpopcntdq'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512cd'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512er'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512f'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512pf'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='erms'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='ss'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <blockers model='Opteron_G4'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='fma4'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='xop'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <blockers model='Opteron_G4-v1'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='fma4'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='xop'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <blockers model='Opteron_G5'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='fma4'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='tbm'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='xop'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <blockers model='Opteron_G5-v1'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='fma4'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='tbm'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='xop'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <blockers model='SapphireRapids'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='amx-bf16'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='amx-int8'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='amx-tile'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx-vnni'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512-bf16'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512-fp16'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512-vpopcntdq'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512bitalg'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512bw'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512cd'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512dq'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512f'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512ifma'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512vbmi'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512vbmi2'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512vl'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512vnni'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='bus-lock-detect'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='erms'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='fsrc'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='fsrm'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='fsrs'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='fzrm'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='gfni'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='hle'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='ibrs-all'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='la57'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='pku'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='rtm'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='serialize'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='taa-no'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='tsx-ldtrk'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='vaes'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='vpclmulqdq'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='xfd'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='xsaves'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <blockers model='SapphireRapids-v1'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='amx-bf16'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='amx-int8'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='amx-tile'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx-vnni'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512-bf16'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512-fp16'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512-vpopcntdq'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512bitalg'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512bw'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512cd'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512dq'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512f'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512ifma'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512vbmi'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512vbmi2'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512vl'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512vnni'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='bus-lock-detect'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='erms'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='fsrc'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='fsrm'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='fsrs'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='fzrm'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='gfni'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='hle'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='ibrs-all'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='la57'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='pku'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='rtm'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='serialize'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='taa-no'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='tsx-ldtrk'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='vaes'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='vpclmulqdq'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='xfd'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='xsaves'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <blockers model='SapphireRapids-v2'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='amx-bf16'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='amx-int8'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='amx-tile'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx-vnni'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512-bf16'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512-fp16'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512-vpopcntdq'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512bitalg'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512bw'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512cd'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512dq'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512f'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512ifma'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512vbmi'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512vbmi2'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512vl'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512vnni'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='bus-lock-detect'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='erms'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='fbsdp-no'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='fsrc'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='fsrm'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='fsrs'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='fzrm'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='gfni'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='hle'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='ibrs-all'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='la57'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='pku'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='psdp-no'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='rtm'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='sbdr-ssdp-no'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='serialize'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='taa-no'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='tsx-ldtrk'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='vaes'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='vpclmulqdq'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='xfd'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='xsaves'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <blockers model='SapphireRapids-v3'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='amx-bf16'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='amx-int8'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='amx-tile'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx-vnni'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512-bf16'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512-fp16'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512-vpopcntdq'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512bitalg'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512bw'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512cd'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512dq'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512f'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512ifma'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512vbmi'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512vbmi2'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512vl'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512vnni'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='bus-lock-detect'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='cldemote'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='erms'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='fbsdp-no'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='fsrc'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='fsrm'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='fsrs'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='fzrm'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='gfni'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='hle'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='ibrs-all'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='la57'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='movdir64b'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='movdiri'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='pku'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='psdp-no'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='rtm'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='sbdr-ssdp-no'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='serialize'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='ss'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='taa-no'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='tsx-ldtrk'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='vaes'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='vpclmulqdq'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='xfd'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='xsaves'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <blockers model='SierraForest'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx-ifma'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx-ne-convert'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx-vnni'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx-vnni-int8'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='bus-lock-detect'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='cmpccxadd'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='erms'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='fbsdp-no'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='fsrm'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='fsrs'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='gfni'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='ibrs-all'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='mcdt-no'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='pbrsb-no'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='pku'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='psdp-no'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='sbdr-ssdp-no'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='serialize'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='vaes'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='vpclmulqdq'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='xsaves'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <blockers model='SierraForest-v1'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx-ifma'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx-ne-convert'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx-vnni'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx-vnni-int8'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='bus-lock-detect'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='cmpccxadd'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='erms'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='fbsdp-no'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='fsrm'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='fsrs'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='gfni'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='ibrs-all'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='mcdt-no'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='pbrsb-no'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='pku'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='psdp-no'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='sbdr-ssdp-no'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='serialize'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='vaes'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='vpclmulqdq'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='xsaves'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <blockers model='Skylake-Client'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='erms'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='hle'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='rtm'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <blockers model='Skylake-Client-IBRS'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='erms'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='hle'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='rtm'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='erms'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <blockers model='Skylake-Client-v1'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='erms'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='hle'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='rtm'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <blockers model='Skylake-Client-v2'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='erms'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='hle'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='rtm'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <blockers model='Skylake-Client-v3'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='erms'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <blockers model='Skylake-Client-v4'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='erms'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='xsaves'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <blockers model='Skylake-Server'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512bw'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512cd'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512dq'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512f'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512vl'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='erms'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='hle'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='pku'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='rtm'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <blockers model='Skylake-Server-IBRS'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512bw'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512cd'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512dq'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512f'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512vl'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='erms'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='hle'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='pku'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='rtm'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512bw'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512cd'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512dq'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512f'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512vl'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='erms'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='pku'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <blockers model='Skylake-Server-v1'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512bw'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512cd'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512dq'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512f'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512vl'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='erms'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='hle'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='pku'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='rtm'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <blockers model='Skylake-Server-v2'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512bw'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512cd'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512dq'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512f'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512vl'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='erms'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='hle'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='pku'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='rtm'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <blockers model='Skylake-Server-v3'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512bw'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512cd'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512dq'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512f'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512vl'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='erms'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='pku'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <blockers model='Skylake-Server-v4'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512bw'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512cd'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512dq'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512f'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512vl'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='erms'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='pku'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <blockers model='Skylake-Server-v5'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512bw'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512cd'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512dq'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512f'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='avx512vl'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='erms'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='pku'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='xsaves'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <blockers model='Snowridge'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='cldemote'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='core-capability'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='erms'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='gfni'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='movdir64b'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='movdiri'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='mpx'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='split-lock-detect'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <blockers model='Snowridge-v1'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='cldemote'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='core-capability'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='erms'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='gfni'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='movdir64b'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='movdiri'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='mpx'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='split-lock-detect'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <blockers model='Snowridge-v2'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='cldemote'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='core-capability'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='erms'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='gfni'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='movdir64b'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='movdiri'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='split-lock-detect'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <blockers model='Snowridge-v3'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='cldemote'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='core-capability'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='erms'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='gfni'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='movdir64b'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='movdiri'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='split-lock-detect'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='xsaves'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <blockers model='Snowridge-v4'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='cldemote'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='erms'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='gfni'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='movdir64b'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='movdiri'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='xsaves'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <blockers model='athlon'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='3dnow'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='3dnowext'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <blockers model='athlon-v1'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='3dnow'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='3dnowext'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <blockers model='core2duo'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='ss'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <blockers model='core2duo-v1'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='ss'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <blockers model='coreduo'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='ss'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <blockers model='coreduo-v1'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='ss'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <blockers model='n270'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='ss'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <blockers model='n270-v1'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='ss'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <blockers model='phenom'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='3dnow'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='3dnowext'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <blockers model='phenom-v1'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='3dnow'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <feature name='3dnowext'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:     </mode>
Nov 24 09:47:04 compute-1 nova_compute[228912]:   </cpu>
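In the <mode name='custom'> listing that ends above, every named CPU model carries usable='yes' or usable='no', and each unusable model is paired with a <blockers model='...'> element naming the CPU features this host lacks for it. A minimal sketch of consuming that structure with the standard library, assuming the dump has been saved with the syslog prefixes stripped as domcaps.xml (the file name is illustrative; this is not Nova's own code):

    import xml.etree.ElementTree as ET

    root = ET.parse('domcaps.xml').getroot()
    custom = root.find(".//cpu/mode[@name='custom']")
    for model in custom.findall('model'):
        if model.get('usable') == 'yes':
            print(f"{model.text}: usable")
        else:
            # the sibling <blockers model='...'> element lists the missing features
            blk = custom.find(f"blockers[@model='{model.text}']")
            missing = [f.get('name') for f in blk.findall('feature')] if blk is not None else []
            print(f"{model.text}: blocked by {', '.join(missing) or 'unknown'}")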
Nov 24 09:47:04 compute-1 nova_compute[228912]:   <memoryBacking supported='yes'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:     <enum name='sourceType'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <value>file</value>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <value>anonymous</value>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <value>memfd</value>
Nov 24 09:47:04 compute-1 nova_compute[228912]:     </enum>
Nov 24 09:47:04 compute-1 nova_compute[228912]:   </memoryBacking>
Nov 24 09:47:04 compute-1 nova_compute[228912]:   <devices>
Nov 24 09:47:04 compute-1 nova_compute[228912]:     <disk supported='yes'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <enum name='diskDevice'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <value>disk</value>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <value>cdrom</value>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <value>floppy</value>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <value>lun</value>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </enum>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <enum name='bus'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <value>ide</value>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <value>fdc</value>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <value>scsi</value>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <value>virtio</value>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <value>usb</value>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <value>sata</value>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </enum>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <enum name='model'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <value>virtio</value>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <value>virtio-transitional</value>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <value>virtio-non-transitional</value>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </enum>
Nov 24 09:47:04 compute-1 nova_compute[228912]:     </disk>
Nov 24 09:47:04 compute-1 nova_compute[228912]:     <graphics supported='yes'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <enum name='type'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <value>vnc</value>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <value>egl-headless</value>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <value>dbus</value>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </enum>
Nov 24 09:47:04 compute-1 nova_compute[228912]:     </graphics>
Nov 24 09:47:04 compute-1 nova_compute[228912]:     <video supported='yes'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <enum name='modelType'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <value>vga</value>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <value>cirrus</value>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <value>virtio</value>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <value>none</value>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <value>bochs</value>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <value>ramfb</value>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </enum>
Nov 24 09:47:04 compute-1 nova_compute[228912]:     </video>
Nov 24 09:47:04 compute-1 nova_compute[228912]:     <hostdev supported='yes'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <enum name='mode'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <value>subsystem</value>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </enum>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <enum name='startupPolicy'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <value>default</value>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <value>mandatory</value>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <value>requisite</value>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <value>optional</value>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </enum>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <enum name='subsysType'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <value>usb</value>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <value>pci</value>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <value>scsi</value>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </enum>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <enum name='capsType'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <enum name='pciBackend'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:     </hostdev>
Nov 24 09:47:04 compute-1 nova_compute[228912]:     <rng supported='yes'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <enum name='model'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <value>virtio</value>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <value>virtio-transitional</value>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <value>virtio-non-transitional</value>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </enum>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <enum name='backendModel'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <value>random</value>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <value>egd</value>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <value>builtin</value>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </enum>
Nov 24 09:47:04 compute-1 nova_compute[228912]:     </rng>
Nov 24 09:47:04 compute-1 nova_compute[228912]:     <filesystem supported='yes'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <enum name='driverType'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <value>path</value>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <value>handle</value>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <value>virtiofs</value>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </enum>
Nov 24 09:47:04 compute-1 nova_compute[228912]:     </filesystem>
Nov 24 09:47:04 compute-1 nova_compute[228912]:     <tpm supported='yes'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <enum name='model'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <value>tpm-tis</value>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <value>tpm-crb</value>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </enum>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <enum name='backendModel'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <value>emulator</value>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <value>external</value>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </enum>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <enum name='backendVersion'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <value>2.0</value>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </enum>
Nov 24 09:47:04 compute-1 nova_compute[228912]:     </tpm>
Nov 24 09:47:04 compute-1 nova_compute[228912]:     <redirdev supported='yes'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <enum name='bus'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <value>usb</value>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </enum>
Nov 24 09:47:04 compute-1 nova_compute[228912]:     </redirdev>
Nov 24 09:47:04 compute-1 nova_compute[228912]:     <channel supported='yes'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <enum name='type'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <value>pty</value>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <value>unix</value>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </enum>
Nov 24 09:47:04 compute-1 nova_compute[228912]:     </channel>
Nov 24 09:47:04 compute-1 nova_compute[228912]:     <crypto supported='yes'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <enum name='model'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <enum name='type'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <value>qemu</value>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </enum>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <enum name='backendModel'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <value>builtin</value>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </enum>
Nov 24 09:47:04 compute-1 nova_compute[228912]:     </crypto>
Nov 24 09:47:04 compute-1 nova_compute[228912]:     <interface supported='yes'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <enum name='backendType'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <value>default</value>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <value>passt</value>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </enum>
Nov 24 09:47:04 compute-1 nova_compute[228912]:     </interface>
Nov 24 09:47:04 compute-1 nova_compute[228912]:     <panic supported='yes'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <enum name='model'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <value>isa</value>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <value>hyperv</value>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </enum>
Nov 24 09:47:04 compute-1 nova_compute[228912]:     </panic>
Nov 24 09:47:04 compute-1 nova_compute[228912]:     <console supported='yes'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <enum name='type'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <value>null</value>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <value>vc</value>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <value>pty</value>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <value>dev</value>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <value>file</value>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <value>pipe</value>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <value>stdio</value>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <value>udp</value>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <value>tcp</value>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <value>unix</value>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <value>qemu-vdagent</value>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <value>dbus</value>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </enum>
Nov 24 09:47:04 compute-1 nova_compute[228912]:     </console>
Nov 24 09:47:04 compute-1 nova_compute[228912]:   </devices>
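Two of the memory/device capabilities above matter together for shared-filesystem guests: <memoryBacking> reports memfd as a supported source type, and the <filesystem> block reports a virtiofs driverType (virtiofs needs shared guest memory, which memfd backing provides). A short probe over the same prefix-stripped domcaps.xml used above (illustrative, not Nova code):

    import xml.etree.ElementTree as ET

    root = ET.parse('domcaps.xml').getroot()
    src_types = [v.text for v in root.findall(".//memoryBacking/enum[@name='sourceType']/value")]
    fs_drivers = [v.text for v in root.findall(".//devices/filesystem/enum[@name='driverType']/value")]
    # per the dump above, both checks print True
    print('memfd backing supported:', 'memfd' in src_types)
    print('virtiofs supported:', 'virtiofs' in fs_drivers)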
Nov 24 09:47:04 compute-1 nova_compute[228912]:   <features>
Nov 24 09:47:04 compute-1 nova_compute[228912]:     <gic supported='no'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:     <vmcoreinfo supported='yes'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:     <genid supported='yes'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:     <backingStoreInput supported='yes'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:     <backup supported='yes'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:     <async-teardown supported='yes'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:     <ps2 supported='yes'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:     <sev supported='no'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:     <sgx supported='no'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:     <hyperv supported='yes'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <enum name='features'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <value>relaxed</value>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <value>vapic</value>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <value>spinlocks</value>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <value>vpindex</value>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <value>runtime</value>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <value>synic</value>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <value>stimer</value>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <value>reset</value>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <value>vendor_id</value>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <value>frequencies</value>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <value>reenlightenment</value>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <value>tlbflush</value>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <value>ipi</value>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <value>avic</value>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <value>emsr_bitmap</value>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <value>xmm_input</value>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </enum>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <defaults>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <spinlocks>4095</spinlocks>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <stimer_direct>on</stimer_direct>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <tlbflush_direct>on</tlbflush_direct>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <tlbflush_extended>on</tlbflush_extended>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <vendor_id>Linux KVM Hv</vendor_id>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </defaults>
Nov 24 09:47:04 compute-1 nova_compute[228912]:     </hyperv>
Nov 24 09:47:04 compute-1 nova_compute[228912]:     <launchSecurity supported='yes'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <enum name='sectype'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <value>tdx</value>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </enum>
Nov 24 09:47:04 compute-1 nova_compute[228912]:     </launchSecurity>
Nov 24 09:47:04 compute-1 nova_compute[228912]:   </features>
Nov 24 09:47:04 compute-1 nova_compute[228912]: </domainCapabilities>
Nov 24 09:47:04 compute-1 nova_compute[228912]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
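Dumps like the one ending above (and the q35 dump that follows) are libvirt's domain-capabilities output, which Nova fetches in _get_domain_capabilities. A minimal way to reproduce one outside Nova, assuming libvirt-python is installed and the caller can open qemu:///system; the parameters mirror the q35 record below, and the equivalent CLI is virsh domcapabilities --emulatorbin /usr/libexec/qemu-kvm --arch x86_64 --machine pc-q35-rhel9.8.0 --virttype kvm:

    import libvirt

    conn = libvirt.open('qemu:///system')
    # same parameter tuple Nova logs for the q35 dump below
    print(conn.getDomainCapabilities(
        emulatorbin='/usr/libexec/qemu-kvm',
        arch='x86_64',
        machine='pc-q35-rhel9.8.0',
        virttype='kvm',
    ))
    conn.close()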
Nov 24 09:47:04 compute-1 nova_compute[228912]: 2025-11-24 09:47:04.921 228916 DEBUG nova.virt.libvirt.host [None req-3ac55ac5-b4ff-454c-9333-47dde30e48fd - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=q35:
Nov 24 09:47:04 compute-1 nova_compute[228912]: <domainCapabilities>
Nov 24 09:47:04 compute-1 nova_compute[228912]:   <path>/usr/libexec/qemu-kvm</path>
Nov 24 09:47:04 compute-1 nova_compute[228912]:   <domain>kvm</domain>
Nov 24 09:47:04 compute-1 nova_compute[228912]:   <machine>pc-q35-rhel9.8.0</machine>
Nov 24 09:47:04 compute-1 nova_compute[228912]:   <arch>x86_64</arch>
Nov 24 09:47:04 compute-1 nova_compute[228912]:   <vcpu max='4096'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:   <iothreads supported='yes'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:   <os supported='yes'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:     <enum name='firmware'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <value>efi</value>
Nov 24 09:47:04 compute-1 nova_compute[228912]:     </enum>
Nov 24 09:47:04 compute-1 nova_compute[228912]:     <loader supported='yes'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <value>/usr/share/edk2/ovmf/OVMF_CODE.secboot.fd</value>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <value>/usr/share/edk2/ovmf/OVMF_CODE.fd</value>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <value>/usr/share/edk2/ovmf/OVMF.amdsev.fd</value>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <value>/usr/share/edk2/ovmf/OVMF.inteltdx.secboot.fd</value>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <enum name='type'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <value>rom</value>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <value>pflash</value>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </enum>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <enum name='readonly'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <value>yes</value>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <value>no</value>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </enum>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <enum name='secure'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <value>yes</value>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <value>no</value>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </enum>
Nov 24 09:47:04 compute-1 nova_compute[228912]:     </loader>
Nov 24 09:47:04 compute-1 nova_compute[228912]:   </os>
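The <os> block above advertises firmware auto-selection for efi plus four OVMF loader builds, with enums stating whether read-only and secure-boot-capable loaders are available. A quick check against the prefix-stripped q35 dump (saved here, illustratively, as domcaps-q35.xml):

    import xml.etree.ElementTree as ET

    root = ET.parse('domcaps-q35.xml').getroot()
    loader = root.find('.//os/loader')
    print([v.text for v in loader.findall('value')])   # the four OVMF paths listed above
    secure = [v.text for v in loader.findall("enum[@name='secure']/value")]
    print('secure-boot-capable loader available:', 'yes' in secure)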
Nov 24 09:47:04 compute-1 nova_compute[228912]:   <cpu>
Nov 24 09:47:04 compute-1 nova_compute[228912]:     <mode name='host-passthrough' supported='yes'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <enum name='hostPassthroughMigratable'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <value>on</value>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <value>off</value>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </enum>
Nov 24 09:47:04 compute-1 nova_compute[228912]:     </mode>
Nov 24 09:47:04 compute-1 nova_compute[228912]:     <mode name='maximum' supported='yes'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <enum name='maximumMigratable'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <value>on</value>
Nov 24 09:47:04 compute-1 nova_compute[228912]:         <value>off</value>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       </enum>
Nov 24 09:47:04 compute-1 nova_compute[228912]:     </mode>
Nov 24 09:47:04 compute-1 nova_compute[228912]:     <mode name='host-model' supported='yes'>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <model fallback='forbid'>EPYC-Rome</model>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <vendor>AMD</vendor>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <maxphysaddr mode='passthrough' limit='40'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <feature policy='require' name='x2apic'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <feature policy='require' name='tsc-deadline'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <feature policy='require' name='hypervisor'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <feature policy='require' name='tsc_adjust'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <feature policy='require' name='spec-ctrl'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <feature policy='require' name='stibp'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <feature policy='require' name='ssbd'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <feature policy='require' name='cmp_legacy'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <feature policy='require' name='overflow-recov'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <feature policy='require' name='succor'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <feature policy='require' name='ibrs'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <feature policy='require' name='amd-ssbd'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <feature policy='require' name='virt-ssbd'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <feature policy='require' name='lbrv'/>
Nov 24 09:47:04 compute-1 nova_compute[228912]:       <feature policy='require' name='tsc-scale'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       <feature policy='require' name='vmcb-clean'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       <feature policy='require' name='flushbyasid'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       <feature policy='require' name='pause-filter'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       <feature policy='require' name='pfthreshold'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       <feature policy='require' name='svme-addr-chk'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       <feature policy='require' name='lfence-always-serializing'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       <feature policy='disable' name='xsaves'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:     </mode>
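The <mode name='host-model'> block that closes above is libvirt's expansion of this host's CPU: the nearest named baseline (EPYC-Rome), the features that must be added to it (policy='require'), and the features that must be dropped (policy='disable', here xsaves). Under Nova's cpu_mode=host-model this is roughly the CPU definition a guest receives. A sketch pulling it out of the same prefix-stripped domcaps-q35.xml (illustrative, not Nova code):

    import xml.etree.ElementTree as ET

    root = ET.parse('domcaps-q35.xml').getroot()
    hm = root.find(".//cpu/mode[@name='host-model']")
    print('baseline:', hm.find('model').text)        # EPYC-Rome
    for feat in hm.findall('feature'):
        # e.g. "require x2apic" ... "disable xsaves"
        print(feat.get('policy'), feat.get('name'))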
Nov 24 09:47:05 compute-1 nova_compute[228912]:     <mode name='custom' supported='yes'>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       <blockers model='Broadwell'>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='erms'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='hle'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='invpcid'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='pcid'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='rtm'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       <blockers model='Broadwell-IBRS'>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='erms'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='hle'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='invpcid'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='pcid'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='rtm'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       <blockers model='Broadwell-noTSX'>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='erms'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='invpcid'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='pcid'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       <blockers model='Broadwell-noTSX-IBRS'>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='erms'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='invpcid'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='pcid'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       <blockers model='Broadwell-v1'>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='erms'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='hle'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='invpcid'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='pcid'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='rtm'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       <blockers model='Broadwell-v2'>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='erms'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='invpcid'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='pcid'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       <blockers model='Broadwell-v3'>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='erms'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='hle'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='invpcid'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='pcid'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='rtm'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       <blockers model='Broadwell-v4'>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='erms'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='invpcid'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='pcid'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       <blockers model='Cascadelake-Server'>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx512bw'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx512cd'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx512dq'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx512f'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx512vl'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx512vnni'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='erms'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='hle'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='invpcid'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='pcid'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='pku'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='rtm'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       <blockers model='Cascadelake-Server-noTSX'>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx512bw'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx512cd'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx512dq'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx512f'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx512vl'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx512vnni'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='erms'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='ibrs-all'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='invpcid'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='pcid'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='pku'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       <blockers model='Cascadelake-Server-v1'>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx512bw'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx512cd'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx512dq'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx512f'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx512vl'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx512vnni'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='erms'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='hle'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='invpcid'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='pcid'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='pku'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='rtm'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       <blockers model='Cascadelake-Server-v2'>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx512bw'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx512cd'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx512dq'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx512f'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx512vl'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx512vnni'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='erms'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='hle'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='ibrs-all'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='invpcid'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='pcid'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='pku'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='rtm'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       <blockers model='Cascadelake-Server-v3'>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx512bw'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx512cd'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx512dq'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx512f'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx512vl'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx512vnni'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='erms'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='ibrs-all'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='invpcid'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='pcid'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='pku'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       <blockers model='Cascadelake-Server-v4'>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx512bw'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx512cd'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx512dq'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx512f'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx512vl'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx512vnni'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='erms'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='ibrs-all'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='invpcid'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='pcid'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='pku'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       <blockers model='Cascadelake-Server-v5'>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx512bw'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx512cd'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx512dq'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx512f'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx512vl'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx512vnni'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='erms'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='ibrs-all'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='invpcid'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='pcid'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='pku'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='xsaves'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       <blockers model='Cooperlake'>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx512-bf16'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx512bw'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx512cd'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx512dq'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx512f'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx512vl'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx512vnni'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='erms'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='hle'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='ibrs-all'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='invpcid'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='pcid'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='pku'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='rtm'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='taa-no'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       <blockers model='Cooperlake-v1'>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx512-bf16'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx512bw'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx512cd'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx512dq'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx512f'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx512vl'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx512vnni'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='erms'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='hle'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='ibrs-all'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='invpcid'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='pcid'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='pku'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='rtm'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='taa-no'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       <blockers model='Cooperlake-v2'>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx512-bf16'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx512bw'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx512cd'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx512dq'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx512f'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx512vl'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx512vnni'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='erms'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='hle'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='ibrs-all'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='invpcid'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='pcid'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='pku'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='rtm'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='taa-no'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='xsaves'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       <blockers model='Denverton'>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='erms'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='mpx'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       <blockers model='Denverton-v1'>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='erms'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='mpx'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       <blockers model='Denverton-v2'>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='erms'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       <blockers model='Denverton-v3'>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='erms'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='xsaves'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       <blockers model='Dhyana-v2'>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='xsaves'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       <blockers model='EPYC-Genoa'>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='amd-psfd'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='auto-ibrs'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx512-bf16'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx512-vpopcntdq'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx512bitalg'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx512bw'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx512cd'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx512dq'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx512f'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx512ifma'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx512vbmi'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx512vbmi2'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx512vl'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx512vnni'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='erms'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='fsrm'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='gfni'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='invpcid'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='la57'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='no-nested-data-bp'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='null-sel-clr-base'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='pcid'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='pku'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='stibp-always-on'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='vaes'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='vpclmulqdq'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='xsaves'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       <blockers model='EPYC-Genoa-v1'>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='amd-psfd'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='auto-ibrs'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx512-bf16'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx512-vpopcntdq'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx512bitalg'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx512bw'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx512cd'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx512dq'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx512f'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx512ifma'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx512vbmi'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx512vbmi2'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx512vl'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx512vnni'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='erms'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='fsrm'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='gfni'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='invpcid'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='la57'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='no-nested-data-bp'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='null-sel-clr-base'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='pcid'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='pku'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='stibp-always-on'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='vaes'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='vpclmulqdq'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='xsaves'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       <blockers model='EPYC-Milan'>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='erms'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='fsrm'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='invpcid'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='pcid'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='pku'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='xsaves'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       <blockers model='EPYC-Milan-v1'>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='erms'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='fsrm'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='invpcid'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='pcid'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='pku'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='xsaves'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       <blockers model='EPYC-Milan-v2'>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='amd-psfd'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='erms'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='fsrm'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='invpcid'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='no-nested-data-bp'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='null-sel-clr-base'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='pcid'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='pku'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='stibp-always-on'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='vaes'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='vpclmulqdq'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='xsaves'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       <blockers model='EPYC-Rome'>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='xsaves'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       <blockers model='EPYC-Rome-v1'>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='xsaves'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       <blockers model='EPYC-Rome-v2'>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='xsaves'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       <blockers model='EPYC-Rome-v3'>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='xsaves'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       <blockers model='EPYC-v3'>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='xsaves'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       <blockers model='EPYC-v4'>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='xsaves'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       <blockers model='GraniteRapids'>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='amx-bf16'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='amx-fp16'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='amx-int8'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='amx-tile'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx-vnni'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx512-bf16'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx512-fp16'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx512-vpopcntdq'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx512bitalg'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx512bw'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx512cd'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx512dq'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx512f'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx512ifma'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx512vbmi'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx512vbmi2'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx512vl'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx512vnni'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='bus-lock-detect'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='erms'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='fbsdp-no'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='fsrc'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='fsrm'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='fsrs'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='fzrm'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='gfni'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='hle'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='ibrs-all'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='invpcid'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='la57'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='mcdt-no'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='pbrsb-no'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='pcid'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='pku'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='prefetchiti'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='psdp-no'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='rtm'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='sbdr-ssdp-no'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='serialize'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='taa-no'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='tsx-ldtrk'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='vaes'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='vpclmulqdq'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='xfd'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='xsaves'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       <blockers model='GraniteRapids-v1'>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='amx-bf16'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='amx-fp16'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='amx-int8'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='amx-tile'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx-vnni'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx512-bf16'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx512-fp16'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx512-vpopcntdq'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx512bitalg'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx512bw'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx512cd'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx512dq'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx512f'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx512ifma'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx512vbmi'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx512vbmi2'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx512vl'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx512vnni'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='bus-lock-detect'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='erms'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='fbsdp-no'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='fsrc'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='fsrm'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='fsrs'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='fzrm'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='gfni'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='hle'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='ibrs-all'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='invpcid'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='la57'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='mcdt-no'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='pbrsb-no'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='pcid'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='pku'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='prefetchiti'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='psdp-no'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='rtm'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='sbdr-ssdp-no'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='serialize'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='taa-no'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='tsx-ldtrk'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='vaes'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='vpclmulqdq'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='xfd'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='xsaves'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       <blockers model='GraniteRapids-v2'>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='amx-bf16'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='amx-fp16'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='amx-int8'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='amx-tile'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx-vnni'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx10'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx10-128'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx10-256'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx10-512'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx512-bf16'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx512-fp16'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx512-vpopcntdq'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx512bitalg'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx512bw'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx512cd'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx512dq'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx512f'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx512ifma'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx512vbmi'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx512vbmi2'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx512vl'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx512vnni'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='bus-lock-detect'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='cldemote'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='erms'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='fbsdp-no'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='fsrc'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='fsrm'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='fsrs'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='fzrm'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='gfni'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='hle'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='ibrs-all'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='invpcid'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='la57'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='mcdt-no'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='movdir64b'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='movdiri'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='pbrsb-no'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='pcid'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='pku'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='prefetchiti'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='psdp-no'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='rtm'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='sbdr-ssdp-no'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='serialize'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='ss'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='taa-no'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='tsx-ldtrk'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='vaes'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='vpclmulqdq'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='xfd'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='xsaves'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       <blockers model='Haswell'>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='erms'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='hle'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='invpcid'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='pcid'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='rtm'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       <blockers model='Haswell-IBRS'>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='erms'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='hle'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='invpcid'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='pcid'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='rtm'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       <blockers model='Haswell-noTSX'>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='erms'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='invpcid'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='pcid'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       <blockers model='Haswell-noTSX-IBRS'>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='erms'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='invpcid'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='pcid'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       <blockers model='Haswell-v1'>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='erms'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='hle'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='invpcid'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='pcid'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='rtm'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       <blockers model='Haswell-v2'>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='erms'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='invpcid'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='pcid'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       <blockers model='Haswell-v3'>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='erms'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='hle'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='invpcid'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='pcid'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='rtm'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       <blockers model='Haswell-v4'>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='erms'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='invpcid'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='pcid'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       <blockers model='Icelake-Server'>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx512-vpopcntdq'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx512bitalg'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx512bw'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx512cd'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx512dq'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx512f'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx512vbmi'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx512vbmi2'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx512vl'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx512vnni'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='erms'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='gfni'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='hle'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='invpcid'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='la57'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='pcid'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='pku'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='rtm'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='vaes'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='vpclmulqdq'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       <blockers model='Icelake-Server-noTSX'>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx512-vpopcntdq'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx512bitalg'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx512bw'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx512cd'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx512dq'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx512f'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx512vbmi'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx512vbmi2'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx512vl'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx512vnni'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='erms'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='gfni'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='invpcid'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='la57'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='pcid'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='pku'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='vaes'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='vpclmulqdq'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       <blockers model='Icelake-Server-v1'>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx512-vpopcntdq'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx512bitalg'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx512bw'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx512cd'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx512dq'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx512f'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx512vbmi'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx512vbmi2'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx512vl'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx512vnni'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='erms'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='gfni'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='hle'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='invpcid'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='la57'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='pcid'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='pku'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='rtm'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='vaes'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='vpclmulqdq'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       <blockers model='Icelake-Server-v2'>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx512-vpopcntdq'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx512bitalg'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx512bw'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx512cd'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx512dq'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx512f'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx512vbmi'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx512vbmi2'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx512vl'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx512vnni'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='erms'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='gfni'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='invpcid'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='la57'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='pcid'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='pku'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='vaes'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='vpclmulqdq'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       <blockers model='Icelake-Server-v3'>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx512-vpopcntdq'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx512bitalg'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx512bw'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx512cd'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx512dq'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx512f'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx512vbmi'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx512vbmi2'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx512vl'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx512vnni'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='erms'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='gfni'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='ibrs-all'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='invpcid'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='la57'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='pcid'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='pku'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='taa-no'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='vaes'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='vpclmulqdq'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       <blockers model='Icelake-Server-v4'>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx512-vpopcntdq'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx512bitalg'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx512bw'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx512cd'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx512dq'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx512f'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx512ifma'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx512vbmi'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx512vbmi2'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx512vl'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx512vnni'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='erms'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='fsrm'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='gfni'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='ibrs-all'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='invpcid'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='la57'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='pcid'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='pku'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='taa-no'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='vaes'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='vpclmulqdq'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       <blockers model='Icelake-Server-v5'>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx512-vpopcntdq'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx512bitalg'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx512bw'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx512cd'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx512dq'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx512f'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx512ifma'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx512vbmi'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx512vbmi2'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx512vl'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx512vnni'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='erms'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='fsrm'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='gfni'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='ibrs-all'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='invpcid'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='la57'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='pcid'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='pku'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='taa-no'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='vaes'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='vpclmulqdq'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='xsaves'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       <blockers model='Icelake-Server-v6'>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx512-vpopcntdq'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx512bitalg'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx512bw'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx512cd'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx512dq'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx512f'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx512ifma'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx512vbmi'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx512vbmi2'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx512vl'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx512vnni'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='erms'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='fsrm'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='gfni'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='ibrs-all'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='invpcid'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='la57'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='pcid'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='pku'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='taa-no'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='vaes'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='vpclmulqdq'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='xsaves'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       <blockers model='Icelake-Server-v7'>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx512-vpopcntdq'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx512bitalg'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx512bw'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx512cd'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx512dq'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx512f'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx512ifma'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx512vbmi'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx512vbmi2'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx512vl'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx512vnni'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='erms'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='fsrm'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='gfni'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='hle'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='ibrs-all'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='invpcid'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='la57'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='pcid'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='pku'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='rtm'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='taa-no'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='vaes'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='vpclmulqdq'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='xsaves'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       <blockers model='IvyBridge'>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='erms'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       <blockers model='IvyBridge-IBRS'>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='erms'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       <blockers model='IvyBridge-v1'>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='erms'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       <blockers model='IvyBridge-v2'>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='erms'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       <blockers model='KnightsMill'>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx512-4fmaps'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx512-4vnniw'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx512-vpopcntdq'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx512cd'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx512er'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx512f'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx512pf'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='erms'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='ss'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       <blockers model='KnightsMill-v1'>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx512-4fmaps'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx512-4vnniw'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx512-vpopcntdq'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx512cd'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx512er'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx512f'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx512pf'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='erms'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='ss'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       <blockers model='Opteron_G4'>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='fma4'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='xop'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       <blockers model='Opteron_G4-v1'>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='fma4'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='xop'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       <blockers model='Opteron_G5'>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='fma4'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='tbm'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='xop'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       <blockers model='Opteron_G5-v1'>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='fma4'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='tbm'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='xop'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       <blockers model='SapphireRapids'>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='amx-bf16'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='amx-int8'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='amx-tile'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx-vnni'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx512-bf16'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx512-fp16'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx512-vpopcntdq'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx512bitalg'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx512bw'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx512cd'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx512dq'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx512f'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx512ifma'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx512vbmi'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx512vbmi2'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx512vl'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx512vnni'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='bus-lock-detect'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='erms'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='fsrc'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='fsrm'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='fsrs'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='fzrm'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='gfni'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='hle'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='ibrs-all'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='invpcid'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='la57'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='pcid'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='pku'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='rtm'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='serialize'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='taa-no'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='tsx-ldtrk'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='vaes'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='vpclmulqdq'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='xfd'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='xsaves'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       <blockers model='SapphireRapids-v1'>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='amx-bf16'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='amx-int8'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='amx-tile'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx-vnni'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx512-bf16'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx512-fp16'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx512-vpopcntdq'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx512bitalg'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx512bw'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx512cd'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx512dq'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx512f'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx512ifma'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx512vbmi'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx512vbmi2'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx512vl'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx512vnni'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='bus-lock-detect'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='erms'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='fsrc'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='fsrm'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='fsrs'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='fzrm'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='gfni'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='hle'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='ibrs-all'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='invpcid'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='la57'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='pcid'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='pku'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='rtm'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='serialize'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='taa-no'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='tsx-ldtrk'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='vaes'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='vpclmulqdq'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='xfd'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='xsaves'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       <blockers model='SapphireRapids-v2'>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='amx-bf16'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='amx-int8'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='amx-tile'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx-vnni'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx512-bf16'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx512-fp16'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx512-vpopcntdq'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx512bitalg'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx512bw'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx512cd'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx512dq'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx512f'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx512ifma'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx512vbmi'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx512vbmi2'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx512vl'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx512vnni'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='bus-lock-detect'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='erms'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='fbsdp-no'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='fsrc'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='fsrm'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='fsrs'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='fzrm'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='gfni'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='hle'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='ibrs-all'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='invpcid'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='la57'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='pcid'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='pku'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='psdp-no'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='rtm'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='sbdr-ssdp-no'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='serialize'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='taa-no'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='tsx-ldtrk'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='vaes'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='vpclmulqdq'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='xfd'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='xsaves'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       <blockers model='SapphireRapids-v3'>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='amx-bf16'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='amx-int8'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='amx-tile'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx-vnni'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx512-bf16'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx512-fp16'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx512-vpopcntdq'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx512bitalg'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx512bw'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx512cd'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx512dq'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx512f'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx512ifma'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx512vbmi'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx512vbmi2'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx512vl'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx512vnni'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='bus-lock-detect'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='cldemote'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='erms'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='fbsdp-no'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='fsrc'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='fsrm'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='fsrs'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='fzrm'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='gfni'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='hle'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='ibrs-all'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='invpcid'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='la57'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='movdir64b'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='movdiri'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='pcid'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='pku'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='psdp-no'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='rtm'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='sbdr-ssdp-no'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='serialize'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='ss'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='taa-no'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='tsx-ldtrk'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='vaes'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='vpclmulqdq'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='xfd'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='xsaves'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       <blockers model='SierraForest'>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx-ifma'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx-ne-convert'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx-vnni'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx-vnni-int8'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='bus-lock-detect'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='cmpccxadd'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='erms'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='fbsdp-no'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='fsrm'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='fsrs'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='gfni'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='ibrs-all'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='invpcid'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='mcdt-no'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='pbrsb-no'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='pcid'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='pku'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='psdp-no'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='sbdr-ssdp-no'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='serialize'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='vaes'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='vpclmulqdq'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='xsaves'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       <blockers model='SierraForest-v1'>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx-ifma'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx-ne-convert'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx-vnni'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx-vnni-int8'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='bus-lock-detect'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='cmpccxadd'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='erms'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='fbsdp-no'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='fsrm'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='fsrs'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='gfni'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='ibrs-all'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='invpcid'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='mcdt-no'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='pbrsb-no'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='pcid'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='pku'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='psdp-no'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='sbdr-ssdp-no'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='serialize'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='vaes'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='vpclmulqdq'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='xsaves'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       <blockers model='Skylake-Client'>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='erms'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='hle'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='invpcid'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='pcid'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='rtm'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       <blockers model='Skylake-Client-IBRS'>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='erms'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='hle'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='invpcid'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='pcid'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='rtm'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='erms'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='invpcid'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='pcid'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       <blockers model='Skylake-Client-v1'>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='erms'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='hle'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='invpcid'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='pcid'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='rtm'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       <blockers model='Skylake-Client-v2'>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='erms'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='hle'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='invpcid'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='pcid'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='rtm'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       <blockers model='Skylake-Client-v3'>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='erms'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='invpcid'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='pcid'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       <blockers model='Skylake-Client-v4'>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='erms'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='invpcid'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='pcid'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='xsaves'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       <blockers model='Skylake-Server'>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx512bw'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx512cd'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx512dq'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx512f'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx512vl'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='erms'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='hle'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='invpcid'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='pcid'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='pku'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='rtm'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       <blockers model='Skylake-Server-IBRS'>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx512bw'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx512cd'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx512dq'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx512f'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx512vl'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='erms'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='hle'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='invpcid'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='pcid'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='pku'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='rtm'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx512bw'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx512cd'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx512dq'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx512f'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx512vl'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='erms'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='invpcid'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='pcid'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='pku'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       <blockers model='Skylake-Server-v1'>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx512bw'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx512cd'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx512dq'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx512f'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx512vl'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='erms'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='hle'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='invpcid'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='pcid'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='pku'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='rtm'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       <blockers model='Skylake-Server-v2'>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx512bw'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx512cd'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx512dq'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx512f'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx512vl'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='erms'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='hle'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='invpcid'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='pcid'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='pku'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='rtm'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       <blockers model='Skylake-Server-v3'>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx512bw'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx512cd'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx512dq'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx512f'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx512vl'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='erms'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='invpcid'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='pcid'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='pku'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       <blockers model='Skylake-Server-v4'>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx512bw'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx512cd'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx512dq'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx512f'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx512vl'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='erms'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='invpcid'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='pcid'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='pku'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       <blockers model='Skylake-Server-v5'>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx512bw'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx512cd'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx512dq'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx512f'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='avx512vl'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='erms'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='invpcid'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='pcid'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='pku'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='xsaves'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       <blockers model='Snowridge'>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='cldemote'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='core-capability'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='erms'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='gfni'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='movdir64b'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='movdiri'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='mpx'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='split-lock-detect'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       <blockers model='Snowridge-v1'>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='cldemote'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='core-capability'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='erms'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='gfni'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='movdir64b'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='movdiri'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='mpx'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='split-lock-detect'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       <blockers model='Snowridge-v2'>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='cldemote'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='core-capability'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='erms'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='gfni'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='movdir64b'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='movdiri'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='split-lock-detect'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       <blockers model='Snowridge-v3'>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='cldemote'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='core-capability'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='erms'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='gfni'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='movdir64b'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='movdiri'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='split-lock-detect'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='xsaves'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       <blockers model='Snowridge-v4'>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='cldemote'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='erms'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='gfni'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='movdir64b'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='movdiri'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='xsaves'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       <blockers model='athlon'>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='3dnow'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='3dnowext'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       <blockers model='athlon-v1'>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='3dnow'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='3dnowext'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       <blockers model='core2duo'>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='ss'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       <blockers model='core2duo-v1'>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='ss'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       <blockers model='coreduo'>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='ss'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       <blockers model='coreduo-v1'>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='ss'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       <blockers model='n270'>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='ss'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       <blockers model='n270-v1'>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='ss'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       <blockers model='phenom'>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='3dnow'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='3dnowext'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       <blockers model='phenom-v1'>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='3dnow'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <feature name='3dnowext'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       </blockers>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Nov 24 09:47:05 compute-1 nova_compute[228912]:     </mode>
Nov 24 09:47:05 compute-1 nova_compute[228912]:   </cpu>
Nov 24 09:47:05 compute-1 nova_compute[228912]:   <memoryBacking supported='yes'>
Nov 24 09:47:05 compute-1 nova_compute[228912]:     <enum name='sourceType'>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       <value>file</value>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       <value>anonymous</value>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       <value>memfd</value>
Nov 24 09:47:05 compute-1 nova_compute[228912]:     </enum>
Nov 24 09:47:05 compute-1 nova_compute[228912]:   </memoryBacking>
Nov 24 09:47:05 compute-1 nova_compute[228912]:   <devices>
Nov 24 09:47:05 compute-1 nova_compute[228912]:     <disk supported='yes'>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       <enum name='diskDevice'>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <value>disk</value>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <value>cdrom</value>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <value>floppy</value>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <value>lun</value>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       </enum>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       <enum name='bus'>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <value>fdc</value>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <value>scsi</value>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <value>virtio</value>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <value>usb</value>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <value>sata</value>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       </enum>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       <enum name='model'>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <value>virtio</value>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <value>virtio-transitional</value>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <value>virtio-non-transitional</value>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       </enum>
Nov 24 09:47:05 compute-1 nova_compute[228912]:     </disk>
Nov 24 09:47:05 compute-1 nova_compute[228912]:     <graphics supported='yes'>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       <enum name='type'>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <value>vnc</value>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <value>egl-headless</value>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <value>dbus</value>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       </enum>
Nov 24 09:47:05 compute-1 nova_compute[228912]:     </graphics>
Nov 24 09:47:05 compute-1 nova_compute[228912]:     <video supported='yes'>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       <enum name='modelType'>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <value>vga</value>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <value>cirrus</value>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <value>virtio</value>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <value>none</value>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <value>bochs</value>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <value>ramfb</value>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       </enum>
Nov 24 09:47:05 compute-1 nova_compute[228912]:     </video>
Nov 24 09:47:05 compute-1 nova_compute[228912]:     <hostdev supported='yes'>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       <enum name='mode'>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <value>subsystem</value>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       </enum>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       <enum name='startupPolicy'>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <value>default</value>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <value>mandatory</value>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <value>requisite</value>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <value>optional</value>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       </enum>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       <enum name='subsysType'>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <value>usb</value>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <value>pci</value>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <value>scsi</value>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       </enum>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       <enum name='capsType'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       <enum name='pciBackend'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:     </hostdev>
Nov 24 09:47:05 compute-1 nova_compute[228912]:     <rng supported='yes'>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       <enum name='model'>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <value>virtio</value>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <value>virtio-transitional</value>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <value>virtio-non-transitional</value>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       </enum>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       <enum name='backendModel'>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <value>random</value>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <value>egd</value>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <value>builtin</value>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       </enum>
Nov 24 09:47:05 compute-1 nova_compute[228912]:     </rng>
Nov 24 09:47:05 compute-1 nova_compute[228912]:     <filesystem supported='yes'>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       <enum name='driverType'>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <value>path</value>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <value>handle</value>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <value>virtiofs</value>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       </enum>
Nov 24 09:47:05 compute-1 nova_compute[228912]:     </filesystem>
Nov 24 09:47:05 compute-1 nova_compute[228912]:     <tpm supported='yes'>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       <enum name='model'>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <value>tpm-tis</value>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <value>tpm-crb</value>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       </enum>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       <enum name='backendModel'>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <value>emulator</value>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <value>external</value>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       </enum>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       <enum name='backendVersion'>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <value>2.0</value>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       </enum>
Nov 24 09:47:05 compute-1 nova_compute[228912]:     </tpm>
Nov 24 09:47:05 compute-1 nova_compute[228912]:     <redirdev supported='yes'>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       <enum name='bus'>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <value>usb</value>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       </enum>
Nov 24 09:47:05 compute-1 nova_compute[228912]:     </redirdev>
Nov 24 09:47:05 compute-1 nova_compute[228912]:     <channel supported='yes'>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       <enum name='type'>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <value>pty</value>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <value>unix</value>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       </enum>
Nov 24 09:47:05 compute-1 nova_compute[228912]:     </channel>
Nov 24 09:47:05 compute-1 nova_compute[228912]:     <crypto supported='yes'>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       <enum name='model'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       <enum name='type'>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <value>qemu</value>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       </enum>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       <enum name='backendModel'>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <value>builtin</value>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       </enum>
Nov 24 09:47:05 compute-1 nova_compute[228912]:     </crypto>
Nov 24 09:47:05 compute-1 nova_compute[228912]:     <interface supported='yes'>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       <enum name='backendType'>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <value>default</value>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <value>passt</value>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       </enum>
Nov 24 09:47:05 compute-1 nova_compute[228912]:     </interface>
Nov 24 09:47:05 compute-1 nova_compute[228912]:     <panic supported='yes'>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       <enum name='model'>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <value>isa</value>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <value>hyperv</value>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       </enum>
Nov 24 09:47:05 compute-1 nova_compute[228912]:     </panic>
Nov 24 09:47:05 compute-1 nova_compute[228912]:     <console supported='yes'>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       <enum name='type'>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <value>null</value>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <value>vc</value>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <value>pty</value>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <value>dev</value>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <value>file</value>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <value>pipe</value>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <value>stdio</value>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <value>udp</value>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <value>tcp</value>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <value>unix</value>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <value>qemu-vdagent</value>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <value>dbus</value>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       </enum>
Nov 24 09:47:05 compute-1 nova_compute[228912]:     </console>
Nov 24 09:47:05 compute-1 nova_compute[228912]:   </devices>
Nov 24 09:47:05 compute-1 nova_compute[228912]:   <features>
Nov 24 09:47:05 compute-1 nova_compute[228912]:     <gic supported='no'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:     <vmcoreinfo supported='yes'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:     <genid supported='yes'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:     <backingStoreInput supported='yes'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:     <backup supported='yes'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:     <async-teardown supported='yes'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:     <ps2 supported='yes'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:     <sev supported='no'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:     <sgx supported='no'/>
Nov 24 09:47:05 compute-1 nova_compute[228912]:     <hyperv supported='yes'>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       <enum name='features'>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <value>relaxed</value>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <value>vapic</value>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <value>spinlocks</value>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <value>vpindex</value>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <value>runtime</value>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <value>synic</value>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <value>stimer</value>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <value>reset</value>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <value>vendor_id</value>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <value>frequencies</value>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <value>reenlightenment</value>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <value>tlbflush</value>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <value>ipi</value>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <value>avic</value>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <value>emsr_bitmap</value>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <value>xmm_input</value>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       </enum>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       <defaults>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <spinlocks>4095</spinlocks>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <stimer_direct>on</stimer_direct>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <tlbflush_direct>on</tlbflush_direct>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <tlbflush_extended>on</tlbflush_extended>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <vendor_id>Linux KVM Hv</vendor_id>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       </defaults>
Nov 24 09:47:05 compute-1 nova_compute[228912]:     </hyperv>
Nov 24 09:47:05 compute-1 nova_compute[228912]:     <launchSecurity supported='yes'>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       <enum name='sectype'>
Nov 24 09:47:05 compute-1 nova_compute[228912]:         <value>tdx</value>
Nov 24 09:47:05 compute-1 nova_compute[228912]:       </enum>
Nov 24 09:47:05 compute-1 nova_compute[228912]:     </launchSecurity>
Nov 24 09:47:05 compute-1 nova_compute[228912]:   </features>
Nov 24 09:47:05 compute-1 nova_compute[228912]: </domainCapabilities>
Nov 24 09:47:05 compute-1 nova_compute[228912]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
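
The XML dumped above is the libvirt domainCapabilities document that nova's host.py (_get_domain_capabilities) fetches once at startup and logs at DEBUG. A minimal sketch of reading the same data directly, assuming the libvirt-python bindings, a local qemu:///system URI, and the x86_64/kvm parameters implied by the log; this mirrors what the dump shows but is not nova's actual code:

    import libvirt                      # libvirt-python bindings
    import xml.etree.ElementTree as ET

    # Fetch the same domainCapabilities document nova logged above.
    conn = libvirt.open('qemu:///system')
    xml_doc = conn.getDomainCapabilities(None, 'x86_64', None, 'kvm', 0)
    root = ET.fromstring(xml_doc)

    # Walk the custom-mode CPU models and report usability plus the
    # <blockers> features that make a model unusable on this host,
    # matching the <model>/<blockers> pairs in the dump.
    for model in root.findall(".//cpu/mode[@name='custom']/model"):
        name = model.text
        blockers = root.find(
            f".//cpu/mode[@name='custom']/blockers[@model='{name}']")
        missing = [f.get('name') for f in blockers] if blockers is not None else []
        print(name, model.get('usable'), missing)

On this host such a walk would show, for example, every SapphireRapids and Skylake variant as usable='no' (blocked on AVX-512 and TSX features the guest CPU lacks) while the Westmere variants report usable='yes'.
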
Nov 24 09:47:05 compute-1 nova_compute[228912]: 2025-11-24 09:47:04.991 228916 DEBUG nova.virt.libvirt.host [None req-3ac55ac5-b4ff-454c-9333-47dde30e48fd - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782
Nov 24 09:47:05 compute-1 nova_compute[228912]: 2025-11-24 09:47:04.991 228916 INFO nova.virt.libvirt.host [None req-3ac55ac5-b4ff-454c-9333-47dde30e48fd - - - - - -] Secure Boot support detected
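
The secure-boot probe that produced the two lines above reads the same domainCapabilities document. A rough, illustrative approximation under the same qemu:///system assumption; nova's supports_secure_boot in host.py applies its own logic, and the 'efi'/'secure' enum names are assumptions about this libvirt release's <os> section, which is not shown in the excerpt:

    import libvirt
    import xml.etree.ElementTree as ET

    conn = libvirt.open('qemu:///system')
    root = ET.fromstring(conn.getDomainCapabilities(None, 'x86_64', None, 'kvm', 0))

    # One way to read "secure boot capable": an EFI firmware is offered and
    # a loader advertising secure='yes' exists. Not necessarily nova's check.
    firmwares = {v.text for v in root.findall(".//os/enum[@name='firmware']/value")}
    secure = {v.text for v in root.findall(".//os/loader/enum[@name='secure']/value")}
    print('secure boot capable:', 'efi' in firmwares and 'yes' in secure)
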
Nov 24 09:47:05 compute-1 nova_compute[228912]: 2025-11-24 09:47:04.993 228916 INFO nova.virt.libvirt.driver [None req-3ac55ac5-b4ff-454c-9333-47dde30e48fd - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.
Nov 24 09:47:05 compute-1 nova_compute[228912]: 2025-11-24 09:47:05.019 228916 DEBUG nova.virt.libvirt.driver [None req-3ac55ac5-b4ff-454c-9333-47dde30e48fd - - - - - -] Enabling emulated TPM support _check_vtpm_support /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:1097
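
The "Enabling emulated TPM support" line corresponds to the <tpm supported='yes'> block in the capabilities dump (models tpm-tis/tpm-crb, backendModel emulator/external, backendVersion 2.0). A small illustrative cross-check of that block, again assuming qemu:///system; nova's _check_vtpm_support also verifies swtpm tooling, which this sketch does not attempt:

    import libvirt
    import xml.etree.ElementTree as ET

    conn = libvirt.open('qemu:///system')
    root = ET.fromstring(conn.getDomainCapabilities(None, 'x86_64', None, 'kvm', 0))

    # Confirm the emulator TPM backend is advertised, as in the dump above.
    tpm = root.find(".//devices/tpm")
    models = [v.text for v in tpm.findall("enum[@name='model']/value")]
    backends = [v.text for v in tpm.findall("enum[@name='backendModel']/value")]
    print('tpm supported:', tpm.get('supported'), models, backends)
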
Nov 24 09:47:05 compute-1 nova_compute[228912]: 2025-11-24 09:47:05.035 228916 INFO nova.virt.node [None req-3ac55ac5-b4ff-454c-9333-47dde30e48fd - - - - - -] Determined node identity 1b7b0f22-dba8-42a8-9de3-763c9152946e from /var/lib/nova/compute_id
Nov 24 09:47:05 compute-1 nova_compute[228912]: 2025-11-24 09:47:05.047 228916 WARNING nova.compute.manager [None req-3ac55ac5-b4ff-454c-9333-47dde30e48fd - - - - - -] Compute nodes ['1b7b0f22-dba8-42a8-9de3-763c9152946e'] for host compute-1.ctlplane.example.com were not found in the database. If this is the first time this service is starting on this host, then you can ignore this warning.
Nov 24 09:47:05 compute-1 nova_compute[228912]: 2025-11-24 09:47:05.066 228916 INFO nova.compute.manager [None req-3ac55ac5-b4ff-454c-9333-47dde30e48fd - - - - - -] Looking for unclaimed instances stuck in BUILDING status for nodes managed by this host
Nov 24 09:47:05 compute-1 nova_compute[228912]: 2025-11-24 09:47:05.099 228916 WARNING nova.compute.manager [None req-3ac55ac5-b4ff-454c-9333-47dde30e48fd - - - - - -] No compute node record found for host compute-1.ctlplane.example.com. If this is the first time this service is starting on this host, then you can ignore this warning.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-1.ctlplane.example.com could not be found.
Nov 24 09:47:05 compute-1 nova_compute[228912]: 2025-11-24 09:47:05.100 228916 DEBUG oslo_concurrency.lockutils [None req-3ac55ac5-b4ff-454c-9333-47dde30e48fd - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 09:47:05 compute-1 nova_compute[228912]: 2025-11-24 09:47:05.100 228916 DEBUG oslo_concurrency.lockutils [None req-3ac55ac5-b4ff-454c-9333-47dde30e48fd - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 09:47:05 compute-1 nova_compute[228912]: 2025-11-24 09:47:05.100 228916 DEBUG oslo_concurrency.lockutils [None req-3ac55ac5-b4ff-454c-9333-47dde30e48fd - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 09:47:05 compute-1 nova_compute[228912]: 2025-11-24 09:47:05.101 228916 DEBUG nova.compute.resource_tracker [None req-3ac55ac5-b4ff-454c-9333-47dde30e48fd - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 24 09:47:05 compute-1 nova_compute[228912]: 2025-11-24 09:47:05.101 228916 DEBUG oslo_concurrency.processutils [None req-3ac55ac5-b4ff-454c-9333-47dde30e48fd - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 09:47:05 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:47:05 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:47:05 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:47:05.247 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:47:05 compute-1 ceph-mon[80009]: pgmap v595: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 494 B/s rd, 82 B/s wr, 0 op/s
Nov 24 09:47:05 compute-1 sudo[229920]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nktchuuvselopbxpwtfpcecouparhots ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977625.108962-4358-120244010098824/AnsiballZ_systemd.py'
Nov 24 09:47:05 compute-1 sudo[229920]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:47:05 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 24 09:47:05 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2331233881' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 09:47:05 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 24 09:47:05 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/190925437' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 09:47:05 compute-1 nova_compute[228912]: 2025-11-24 09:47:05.557 228916 DEBUG oslo_concurrency.processutils [None req-3ac55ac5-b4ff-454c-9333-47dde30e48fd - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.456s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
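
The "Running cmd"/"returned" pair above comes from oslo.concurrency's processutils, which the resource audit uses to shell out to ceph df for RBD pool capacity. A standalone sketch under the same assumptions (client id openstack, conf at /etc/ceph/ceph.conf):

    import json
    from oslo_concurrency import processutils

    # Mirrors the logged command; raises ProcessExecutionError on a
    # nonzero exit instead of returning it.
    out, _err = processutils.execute(
        'ceph', 'df', '--format=json',
        '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')
    stats = json.loads(out)
    total_avail = stats['stats']['total_avail_bytes']  # bytes free cluster-wide
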
Nov 24 09:47:05 compute-1 systemd[1]: Starting libvirt nodedev daemon...
Nov 24 09:47:05 compute-1 systemd[1]: Started libvirt nodedev daemon.
Nov 24 09:47:05 compute-1 python3.9[229922]: ansible-ansible.builtin.systemd Invoked with name=edpm_nova_compute.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 24 09:47:05 compute-1 systemd[1]: Stopping nova_compute container...
Nov 24 09:47:05 compute-1 nova_compute[228912]: 2025-11-24 09:47:05.798 228916 DEBUG oslo_concurrency.lockutils [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 24 09:47:05 compute-1 nova_compute[228912]: 2025-11-24 09:47:05.799 228916 DEBUG oslo_concurrency.lockutils [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 24 09:47:05 compute-1 nova_compute[228912]: 2025-11-24 09:47:05.799 228916 DEBUG oslo_concurrency.lockutils [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 24 09:47:05 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:47:05 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:47:05 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:47:05.880 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:47:06 compute-1 virtqemud[229578]: libvirt version: 11.9.0, package: 1.el9 (builder@centos.org, 2025-11-04-09:54:50, )
Nov 24 09:47:06 compute-1 virtqemud[229578]: hostname: compute-1
Nov 24 09:47:06 compute-1 virtqemud[229578]: End of file while reading data: Input/output error
Nov 24 09:47:06 compute-1 systemd[1]: libpod-4f12c09c2b5a5f528cb4999d6b76c38bdb5027d103adcf8bdc114c4275996463.scope: Deactivated successfully.
Nov 24 09:47:06 compute-1 systemd[1]: libpod-4f12c09c2b5a5f528cb4999d6b76c38bdb5027d103adcf8bdc114c4275996463.scope: Consumed 3.657s CPU time.
Nov 24 09:47:06 compute-1 podman[229949]: 2025-11-24 09:47:06.238694217 +0000 UTC m=+0.478463657 container died 4f12c09c2b5a5f528cb4999d6b76c38bdb5027d103adcf8bdc114c4275996463 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, container_name=nova_compute, managed_by=edpm_ansible, org.label-schema.build-date=20251118, config_id=edpm, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 24 09:47:06 compute-1 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-4f12c09c2b5a5f528cb4999d6b76c38bdb5027d103adcf8bdc114c4275996463-userdata-shm.mount: Deactivated successfully.
Nov 24 09:47:06 compute-1 systemd[1]: var-lib-containers-storage-overlay-998428069ec542116c2095d13a6eac80a571eded75fdf90504711c791be6bc21-merged.mount: Deactivated successfully.
Nov 24 09:47:06 compute-1 podman[229949]: 2025-11-24 09:47:06.348063422 +0000 UTC m=+0.587832842 container cleanup 4f12c09c2b5a5f528cb4999d6b76c38bdb5027d103adcf8bdc114c4275996463 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=edpm, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, container_name=nova_compute, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 09:47:06 compute-1 podman[229949]: nova_compute
Nov 24 09:47:06 compute-1 podman[229982]: nova_compute
Nov 24 09:47:06 compute-1 systemd[1]: edpm_nova_compute.service: Deactivated successfully.
Nov 24 09:47:06 compute-1 systemd[1]: Stopped nova_compute container.
Nov 24 09:47:06 compute-1 systemd[1]: Starting nova_compute container...
Nov 24 09:47:06 compute-1 ceph-mon[80009]: from='client.? 192.168.122.100:0/2331233881' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 09:47:06 compute-1 ceph-mon[80009]: from='client.? 192.168.122.101:0/190925437' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 09:47:06 compute-1 systemd[1]: Started libcrun container.
Nov 24 09:47:06 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/998428069ec542116c2095d13a6eac80a571eded75fdf90504711c791be6bc21/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Nov 24 09:47:06 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/998428069ec542116c2095d13a6eac80a571eded75fdf90504711c791be6bc21/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Nov 24 09:47:06 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/998428069ec542116c2095d13a6eac80a571eded75fdf90504711c791be6bc21/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Nov 24 09:47:06 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/998428069ec542116c2095d13a6eac80a571eded75fdf90504711c791be6bc21/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Nov 24 09:47:06 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/998428069ec542116c2095d13a6eac80a571eded75fdf90504711c791be6bc21/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Nov 24 09:47:06 compute-1 podman[229994]: 2025-11-24 09:47:06.501024007 +0000 UTC m=+0.072852083 container init 4f12c09c2b5a5f528cb4999d6b76c38bdb5027d103adcf8bdc114c4275996463 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=nova_compute, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, config_id=edpm)
Nov 24 09:47:06 compute-1 podman[229994]: 2025-11-24 09:47:06.510810532 +0000 UTC m=+0.082638598 container start 4f12c09c2b5a5f528cb4999d6b76c38bdb5027d103adcf8bdc114c4275996463 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, config_id=edpm, container_name=nova_compute, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 24 09:47:06 compute-1 nova_compute[230010]: + sudo -E kolla_set_configs
Nov 24 09:47:06 compute-1 podman[229994]: nova_compute
Nov 24 09:47:06 compute-1 systemd[1]: Started nova_compute container.
Nov 24 09:47:06 compute-1 sudo[229920]: pam_unix(sudo:session): session closed for user root
Nov 24 09:47:06 compute-1 nova_compute[230010]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Nov 24 09:47:06 compute-1 nova_compute[230010]: INFO:__main__:Validating config file
Nov 24 09:47:06 compute-1 nova_compute[230010]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Nov 24 09:47:06 compute-1 nova_compute[230010]: INFO:__main__:Copying service configuration files
Nov 24 09:47:06 compute-1 nova_compute[230010]: INFO:__main__:Deleting /etc/nova/nova.conf
Nov 24 09:47:06 compute-1 nova_compute[230010]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Nov 24 09:47:06 compute-1 nova_compute[230010]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Nov 24 09:47:06 compute-1 nova_compute[230010]: INFO:__main__:Deleting /etc/nova/nova.conf.d/01-nova.conf
Nov 24 09:47:06 compute-1 nova_compute[230010]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Nov 24 09:47:06 compute-1 nova_compute[230010]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Nov 24 09:47:06 compute-1 nova_compute[230010]: INFO:__main__:Deleting /etc/nova/nova.conf.d/03-ceph-nova.conf
Nov 24 09:47:06 compute-1 nova_compute[230010]: INFO:__main__:Copying /var/lib/kolla/config_files/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf
Nov 24 09:47:06 compute-1 nova_compute[230010]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf
Nov 24 09:47:06 compute-1 nova_compute[230010]: INFO:__main__:Deleting /etc/nova/nova.conf.d/25-nova-extra.conf
Nov 24 09:47:06 compute-1 nova_compute[230010]: INFO:__main__:Copying /var/lib/kolla/config_files/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Nov 24 09:47:06 compute-1 nova_compute[230010]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Nov 24 09:47:06 compute-1 nova_compute[230010]: INFO:__main__:Deleting /etc/nova/nova.conf.d/nova-blank.conf
Nov 24 09:47:06 compute-1 nova_compute[230010]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Nov 24 09:47:06 compute-1 nova_compute[230010]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Nov 24 09:47:06 compute-1 nova_compute[230010]: INFO:__main__:Deleting /etc/nova/nova.conf.d/02-nova-host-specific.conf
Nov 24 09:47:06 compute-1 nova_compute[230010]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Nov 24 09:47:06 compute-1 nova_compute[230010]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Nov 24 09:47:06 compute-1 nova_compute[230010]: INFO:__main__:Deleting /etc/ceph
Nov 24 09:47:06 compute-1 nova_compute[230010]: INFO:__main__:Creating directory /etc/ceph
Nov 24 09:47:06 compute-1 nova_compute[230010]: INFO:__main__:Setting permission for /etc/ceph
Nov 24 09:47:06 compute-1 nova_compute[230010]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.conf to /etc/ceph/ceph.conf
Nov 24 09:47:06 compute-1 nova_compute[230010]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Nov 24 09:47:06 compute-1 nova_compute[230010]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring
Nov 24 09:47:06 compute-1 nova_compute[230010]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Nov 24 09:47:06 compute-1 nova_compute[230010]: INFO:__main__:Deleting /var/lib/nova/.ssh/ssh-privatekey
Nov 24 09:47:06 compute-1 nova_compute[230010]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Nov 24 09:47:06 compute-1 nova_compute[230010]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Nov 24 09:47:06 compute-1 nova_compute[230010]: INFO:__main__:Deleting /var/lib/nova/.ssh/config
Nov 24 09:47:06 compute-1 nova_compute[230010]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Nov 24 09:47:06 compute-1 nova_compute[230010]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Nov 24 09:47:06 compute-1 nova_compute[230010]: INFO:__main__:Deleting /usr/sbin/iscsiadm
Nov 24 09:47:06 compute-1 nova_compute[230010]: INFO:__main__:Copying /var/lib/kolla/config_files/run-on-host to /usr/sbin/iscsiadm
Nov 24 09:47:06 compute-1 nova_compute[230010]: INFO:__main__:Setting permission for /usr/sbin/iscsiadm
Nov 24 09:47:06 compute-1 nova_compute[230010]: INFO:__main__:Writing out command to execute
Nov 24 09:47:06 compute-1 nova_compute[230010]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Nov 24 09:47:06 compute-1 nova_compute[230010]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Nov 24 09:47:06 compute-1 nova_compute[230010]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Nov 24 09:47:06 compute-1 nova_compute[230010]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Nov 24 09:47:06 compute-1 nova_compute[230010]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Nov 24 09:47:06 compute-1 nova_compute[230010]: ++ cat /run_command
Nov 24 09:47:06 compute-1 nova_compute[230010]: + CMD=nova-compute
Nov 24 09:47:06 compute-1 nova_compute[230010]: + ARGS=
Nov 24 09:47:06 compute-1 nova_compute[230010]: + sudo kolla_copy_cacerts
Nov 24 09:47:06 compute-1 nova_compute[230010]: + [[ ! -n '' ]]
Nov 24 09:47:06 compute-1 nova_compute[230010]: + . kolla_extend_start
Nov 24 09:47:06 compute-1 nova_compute[230010]: Running command: 'nova-compute'
Nov 24 09:47:06 compute-1 nova_compute[230010]: + echo 'Running command: '\''nova-compute'\'''
Nov 24 09:47:06 compute-1 nova_compute[230010]: + umask 0022
Nov 24 09:47:06 compute-1 nova_compute[230010]: + exec nova-compute
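
The INFO:__main__ block above is kolla_set_configs applying the COPY_ALWAYS strategy: each entry in /var/lib/kolla/config_files/config.json has its destination deleted, its source copied in, and its permissions reset, after which kolla_start reads /run_command and execs nova-compute. A simplified sketch of that copy loop (assumption: condensed from the logged behavior, not the shipped kolla script, and handling plain files only):

    import json
    import os
    import shutil

    with open('/var/lib/kolla/config_files/config.json') as f:
        config = json.load(f)

    for entry in config.get('config_files', []):
        dest = entry['dest']
        if os.path.lexists(dest):
            os.remove(dest)                     # "Deleting <dest>"
        shutil.copy(entry['source'], dest)      # "Copying <source> to <dest>"
        perm = int(entry.get('perm', '0600'), 8)
        os.chmod(dest, perm)                    # "Setting permission for <dest>"
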
Nov 24 09:47:07 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:47:07 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:47:07 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:47:07.249 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:47:07 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[229012]: 24/11/2025 09:47:07 : epoch 69242995 : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 09:47:07 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[229012]: 24/11/2025 09:47:07 : epoch 69242995 : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 09:47:07 compute-1 ceph-mon[80009]: pgmap v596: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Nov 24 09:47:07 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:47:07 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:47:07 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:47:07.883 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:47:07 compute-1 sudo[230173]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zjpbikairknxwkhhznosmlzfyasyoajv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977627.6516476-4385-162441378436574/AnsiballZ_podman_container.py'
Nov 24 09:47:07 compute-1 sudo[230173]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:47:08 compute-1 python3.9[230175]: ansible-containers.podman.podman_container Invoked with name=nova_compute_init state=started executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Nov 24 09:47:08 compute-1 systemd[1]: Started libpod-conmon-fe9899ea690749a9a0b3b5d5bfc012192a73e99876f03d91fe6d6c78aff266e9.scope.
Nov 24 09:47:08 compute-1 systemd[1]: Started libcrun container.
Nov 24 09:47:08 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/73a160e91f30301b4dc4cb2eee53a414fa47f33567beea5b3af9dd52a065dbae/merged/usr/sbin/nova_statedir_ownership.py supports timestamps until 2038 (0x7fffffff)
Nov 24 09:47:08 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/73a160e91f30301b4dc4cb2eee53a414fa47f33567beea5b3af9dd52a065dbae/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Nov 24 09:47:08 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/73a160e91f30301b4dc4cb2eee53a414fa47f33567beea5b3af9dd52a065dbae/merged/var/lib/_nova_secontext supports timestamps until 2038 (0x7fffffff)
Nov 24 09:47:08 compute-1 podman[230200]: 2025-11-24 09:47:08.376089129 +0000 UTC m=+0.116083234 container init fe9899ea690749a9a0b3b5d5bfc012192a73e99876f03d91fe6d6c78aff266e9 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, container_name=nova_compute_init, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']})
Nov 24 09:47:08 compute-1 podman[230200]: 2025-11-24 09:47:08.384645843 +0000 UTC m=+0.124639928 container start fe9899ea690749a9a0b3b5d5bfc012192a73e99876f03d91fe6d6c78aff266e9 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, container_name=nova_compute_init, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251118, config_id=edpm, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 09:47:08 compute-1 python3.9[230175]: ansible-containers.podman.podman_container PODMAN-CONTAINER-DEBUG: podman start nova_compute_init
Nov 24 09:47:08 compute-1 nova_compute_init[230221]: INFO:nova_statedir:Applying nova statedir ownership
Nov 24 09:47:08 compute-1 nova_compute_init[230221]: INFO:nova_statedir:Target ownership for /var/lib/nova: 42436:42436
Nov 24 09:47:08 compute-1 nova_compute_init[230221]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/
Nov 24 09:47:08 compute-1 nova_compute_init[230221]: INFO:nova_statedir:Changing ownership of /var/lib/nova from 1000:1000 to 42436:42436
Nov 24 09:47:08 compute-1 nova_compute_init[230221]: INFO:nova_statedir:Setting selinux context of /var/lib/nova to system_u:object_r:container_file_t:s0
Nov 24 09:47:08 compute-1 nova_compute_init[230221]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/instances/
Nov 24 09:47:08 compute-1 nova_compute_init[230221]: INFO:nova_statedir:Changing ownership of /var/lib/nova/instances from 1000:1000 to 42436:42436
Nov 24 09:47:08 compute-1 nova_compute_init[230221]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/instances to system_u:object_r:container_file_t:s0
Nov 24 09:47:08 compute-1 nova_compute_init[230221]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/
Nov 24 09:47:08 compute-1 nova_compute_init[230221]: INFO:nova_statedir:Ownership of /var/lib/nova/.ssh already 42436:42436
Nov 24 09:47:08 compute-1 nova_compute_init[230221]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/.ssh to system_u:object_r:container_file_t:s0
Nov 24 09:47:08 compute-1 nova_compute_init[230221]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/ssh-privatekey
Nov 24 09:47:08 compute-1 nova_compute_init[230221]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/config
Nov 24 09:47:08 compute-1 nova_compute_init[230221]: INFO:nova_statedir:Nova statedir ownership complete
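
The INFO:nova_statedir lines above come from nova_statedir_ownership.py, which converges /var/lib/nova to the container's nova uid/gid (42436) while skipping the path named in NOVA_STATEDIR_OWNERSHIP_SKIP (here /var/lib/nova/compute_id); it also restores the container_file_t SELinux context, which the sketch below omits. A condensed sketch of that walk, simplified from the logged behavior rather than copied from the script:

    import os

    TARGET_UID = TARGET_GID = 42436
    SKIP = {os.environ.get('NOVA_STATEDIR_OWNERSHIP_SKIP',
                           '/var/lib/nova/compute_id')}

    for root, dirs, files in os.walk('/var/lib/nova'):
        for path in [root] + [os.path.join(root, f) for f in files]:
            if path in SKIP:
                continue
            st = os.lstat(path)
            if (st.st_uid, st.st_gid) != (TARGET_UID, TARGET_GID):
                # "Changing ownership of <path> ... to 42436:42436"
                os.chown(path, TARGET_UID, TARGET_GID, follow_symlinks=False)
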
Nov 24 09:47:08 compute-1 systemd[1]: libpod-fe9899ea690749a9a0b3b5d5bfc012192a73e99876f03d91fe6d6c78aff266e9.scope: Deactivated successfully.
Nov 24 09:47:08 compute-1 ceph-mon[80009]: pgmap v597: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Nov 24 09:47:08 compute-1 podman[230234]: 2025-11-24 09:47:08.493361753 +0000 UTC m=+0.032628667 container died fe9899ea690749a9a0b3b5d5bfc012192a73e99876f03d91fe6d6c78aff266e9 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, config_id=edpm, container_name=nova_compute_init, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Nov 24 09:47:08 compute-1 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-fe9899ea690749a9a0b3b5d5bfc012192a73e99876f03d91fe6d6c78aff266e9-userdata-shm.mount: Deactivated successfully.
Nov 24 09:47:08 compute-1 systemd[1]: var-lib-containers-storage-overlay-73a160e91f30301b4dc4cb2eee53a414fa47f33567beea5b3af9dd52a065dbae-merged.mount: Deactivated successfully.
Nov 24 09:47:08 compute-1 sudo[230173]: pam_unix(sudo:session): session closed for user root
Nov 24 09:47:08 compute-1 podman[230234]: 2025-11-24 09:47:08.524953333 +0000 UTC m=+0.064220247 container cleanup fe9899ea690749a9a0b3b5d5bfc012192a73e99876f03d91fe6d6c78aff266e9 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=nova_compute_init, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 24 09:47:08 compute-1 systemd[1]: libpod-conmon-fe9899ea690749a9a0b3b5d5bfc012192a73e99876f03d91fe6d6c78aff266e9.scope: Deactivated successfully.
Nov 24 09:47:08 compute-1 nova_compute[230010]: 2025-11-24 09:47:08.576 230014 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Nov 24 09:47:08 compute-1 nova_compute[230010]: 2025-11-24 09:47:08.577 230014 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Nov 24 09:47:08 compute-1 nova_compute[230010]: 2025-11-24 09:47:08.577 230014 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Nov 24 09:47:08 compute-1 nova_compute[230010]: 2025-11-24 09:47:08.577 230014 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs
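
The four os_vif lines above are nova's one-time VIF plugin discovery: os_vif scans its entry-point namespace, loads each plugin class, and logs the combined list. The same discovery can be triggered standalone:

    import os_vif

    # Loads and initializes every plugin registered under the os_vif
    # entry-point namespace (linux_bridge, noop and ovs in the log above).
    os_vif.initialize()
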
Nov 24 09:47:08 compute-1 sudo[230285]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 09:47:08 compute-1 sudo[230285]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:47:08 compute-1 sudo[230285]: pam_unix(sudo:session): session closed for user root
Nov 24 09:47:08 compute-1 sudo[230310]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 check-host
Nov 24 09:47:08 compute-1 sudo[230310]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:47:08 compute-1 nova_compute[230010]: 2025-11-24 09:47:08.728 230014 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 09:47:08 compute-1 nova_compute[230010]: 2025-11-24 09:47:08.754 230014 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 1 in 0.026s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 09:47:08 compute-1 nova_compute[230010]: 2025-11-24 09:47:08.754 230014 DEBUG oslo_concurrency.processutils [-] 'grep -F node.session.scan /sbin/iscsiadm' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473
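
The grep exchange above is a capability probe, not a failure: exit status 1 from grep -F node.session.scan /sbin/iscsiadm simply means the wrapper script installed at that path does not advertise manual-scan support, so the "failed. Not Retrying." DEBUG is expected. A sketch of the probe's logic, mirroring the logged command rather than any particular library's source:

    from oslo_concurrency import processutils

    try:
        processutils.execute('grep', '-F', 'node.session.scan',
                             '/sbin/iscsiadm')
        manual_scan_supported = True
    except processutils.ProcessExecutionError:
        manual_scan_supported = False   # grep exit 1: string absent
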
Nov 24 09:47:08 compute-1 sudo[230310]: pam_unix(sudo:session): session closed for user root
Nov 24 09:47:08 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Nov 24 09:47:09 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Nov 24 09:47:09 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Nov 24 09:47:09 compute-1 sudo[230356]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 09:47:09 compute-1 sudo[230356]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:47:09 compute-1 sudo[230356]: pam_unix(sudo:session): session closed for user root
Nov 24 09:47:09 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Nov 24 09:47:09 compute-1 sudo[230381]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Nov 24 09:47:09 compute-1 sudo[230381]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:47:09 compute-1 sshd-session[200565]: Connection closed by 192.168.122.30 port 44340
Nov 24 09:47:09 compute-1 sshd-session[200562]: pam_unix(sshd:session): session closed for user zuul
Nov 24 09:47:09 compute-1 systemd[1]: session-53.scope: Deactivated successfully.
Nov 24 09:47:09 compute-1 systemd[1]: session-53.scope: Consumed 2min 13.766s CPU time.
Nov 24 09:47:09 compute-1 systemd-logind[823]: Session 53 logged out. Waiting for processes to exit.
Nov 24 09:47:09 compute-1 systemd-logind[823]: Removed session 53.
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.209 230014 INFO nova.virt.driver [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'
Nov 24 09:47:09 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:47:09 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:47:09 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:47:09.251 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:47:09 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.303 230014 INFO nova.compute.provider_config [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.309 230014 DEBUG oslo_concurrency.lockutils [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.309 230014 DEBUG oslo_concurrency.lockutils [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.309 230014 DEBUG oslo_concurrency.lockutils [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
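
Everything from "Full set of CONF:" below through the long run of "name = value" DEBUG lines is oslo.config dumping every effective option at service startup, emitted because debug is enabled for this service. The dump is produced by ConfigOpts.log_opt_values; a minimal trigger looks like:

    import logging
    from oslo_config import cfg

    LOG = logging.getLogger(__name__)
    # Prints the banner, the config-file list, then one line per
    # registered option, exactly the shape of the dump that follows.
    cfg.CONF.log_opt_values(LOG, logging.DEBUG)
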
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.309 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.310 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.310 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.310 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.310 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.310 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.310 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.311 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.311 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.311 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.311 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.311 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.311 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.312 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.312 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.312 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.312 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.312 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.312 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.312 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] console_host                   = compute-1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.313 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.313 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.313 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.313 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.313 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.313 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.314 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.314 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.314 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.314 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.314 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.314 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.315 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.315 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.315 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.315 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.315 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.315 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.316 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] host                           = compute-1.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.316 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.316 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.316 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.316 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.317 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.317 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.317 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.317 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.317 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.317 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.317 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.318 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.318 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.318 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.318 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.318 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.318 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.319 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.319 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.319 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.319 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.319 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.319 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.320 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.320 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.320 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.320 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.320 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.320 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.321 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.321 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.321 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.321 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.321 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.322 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.322 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.322 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.322 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.322 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.323 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.323 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.323 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] my_block_storage_ip            = 192.168.122.101 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.323 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] my_ip                          = 192.168.122.101 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.323 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.323 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.324 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.324 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.324 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.324 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.324 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.325 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.325 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.325 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.325 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.325 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.325 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.326 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.326 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.326 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.326 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.326 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.326 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.326 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.327 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.327 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.327 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.327 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.327 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.327 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.327 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.328 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.328 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.328 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.328 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.328 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.328 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.329 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.329 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.329 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.329 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.329 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.329 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.329 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.330 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.330 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.330 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.330 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.330 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.330 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.330 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.331 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.331 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.331 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.331 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.331 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.331 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.331 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.332 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.332 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.332 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.332 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.332 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.332 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.333 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.333 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.333 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.333 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.333 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.333 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.334 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.334 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.334 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.334 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.334 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.334 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.334 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.335 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.335 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.335 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.335 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.335 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.335 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.336 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.336 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.336 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.336 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.336 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.337 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.337 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.337 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.337 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.337 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.337 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.337 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.338 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.338 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.338 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.338 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.338 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.338 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.338 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.339 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.339 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.339 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.339 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.339 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.339 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.339 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.340 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.340 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.340 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.340 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.340 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.340 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.340 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.341 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.341 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.341 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.341 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.341 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.341 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.342 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.342 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.342 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.342 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.342 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.343 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.343 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.343 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.343 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.343 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.343 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.344 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.344 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.344 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.344 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.344 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.344 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.345 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.345 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.345 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.345 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.345 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.345 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] cinder.os_region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.345 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.345 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.346 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.346 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.346 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.346 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.346 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.346 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.346 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.347 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.347 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.347 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.347 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.347 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.347 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.348 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.348 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.348 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.348 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.348 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.348 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.348 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.349 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.349 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.349 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.349 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.349 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.349 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.349 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.350 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.350 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.350 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.350 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.350 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.350 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.351 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.351 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.351 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.351 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.351 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.351 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.352 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.352 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.352 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.352 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.352 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.352 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.352 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.353 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.353 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.353 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.353 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.353 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.353 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.354 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.354 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.354 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.354 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.354 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.354 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.354 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.355 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.355 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.355 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.355 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.355 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.355 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.355 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.356 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.356 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.356 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.356 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.356 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.356 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.356 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.357 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.357 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.357 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.357 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.357 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.357 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.357 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.358 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.358 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.358 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.358 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.358 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.358 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.358 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.359 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.359 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.359 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.359 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.359 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.359 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.359 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.360 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.360 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.360 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.360 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.360 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.360 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.360 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.361 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.361 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.361 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.361 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.361 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.361 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.361 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.362 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.362 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.362 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.362 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.362 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.362 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.362 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.363 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.363 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.363 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.363 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.363 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.363 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.363 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.364 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.364 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.364 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.364 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.364 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.364 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.364 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.365 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.365 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.365 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.365 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.365 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.365 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.366 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.366 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.366 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.366 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.366 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.366 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.366 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.367 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.367 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.367 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.367 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.367 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.367 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.367 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.368 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.368 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.368 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.368 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.368 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.368 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.368 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.369 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.369 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.369 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.369 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.369 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.369 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.370 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.370 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.370 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.370 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.370 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.370 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.371 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.371 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] barbican.barbican_region_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.371 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.371 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.371 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.371 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.371 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.372 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.372 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.372 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.372 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.372 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.372 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.372 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.373 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.373 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.373 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.373 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.373 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.373 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.373 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.374 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.374 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.374 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.374 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.374 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.374 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.374 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.375 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.375 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.375 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.375 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.375 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.375 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.375 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.375 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.376 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.376 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.376 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.376 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.376 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.376 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.376 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.377 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.377 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.377 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.377 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.377 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.377 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.377 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.378 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.378 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.378 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.378 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.378 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.378 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.378 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.379 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.379 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.379 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] libvirt.cpu_mode               = host-model log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.379 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.379 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] libvirt.cpu_models             = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.379 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.379 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.380 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.380 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.380 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.380 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.380 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.380 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.380 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.381 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.381 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.381 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.381 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.381 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] libvirt.images_rbd_ceph_conf   = /etc/ceph/ceph.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.381 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.381 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.382 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] libvirt.images_rbd_glance_store_name = default_backend log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.382 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] libvirt.images_rbd_pool        = vms log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.382 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] libvirt.images_type            = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.382 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.382 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.382 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.383 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.383 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.383 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.383 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.383 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.383 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.383 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.383 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.384 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.384 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.384 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.384 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.384 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.384 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.385 230014 WARNING oslo_config.cfg [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Nov 24 09:47:09 compute-1 nova_compute[230010]: live_migration_uri is deprecated for removal in favor of two other options that
Nov 24 09:47:09 compute-1 nova_compute[230010]: allow to change live migration scheme and target URI: ``live_migration_scheme``
Nov 24 09:47:09 compute-1 nova_compute[230010]: and ``live_migration_inbound_addr`` respectively.
Nov 24 09:47:09 compute-1 nova_compute[230010]: ).  Its value may be silently ignored in the future.
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.385 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] libvirt.live_migration_uri     = qemu+tls://%s/system log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.385 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] libvirt.live_migration_with_native_tls = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.385 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.385 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.386 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.386 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.386 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.386 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.386 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.386 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.387 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.387 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.387 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.387 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.387 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.387 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.387 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.388 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.388 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] libvirt.rbd_secret_uuid        = 84a084c3-61a7-5de7-8207-1f88efa59a64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.388 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] libvirt.rbd_user               = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.388 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.388 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.388 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.388 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.389 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.389 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.389 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.389 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.389 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.389 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.389 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.390 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.390 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.390 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.390 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.390 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.390 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.391 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.391 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.391 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.391 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.391 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.391 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.391 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.392 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.392 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.392 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.392 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.392 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.392 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.393 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.393 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.393 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.393 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.393 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.393 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.393 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.394 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.394 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.394 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.394 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.394 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.394 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.394 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.395 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.395 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.395 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.395 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.395 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.395 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.395 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.396 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.396 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.396 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.396 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.396 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.396 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.396 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.397 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.397 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.397 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.397 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.397 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.398 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.398 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.398 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.398 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.398 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.399 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.399 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.399 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] placement.auth_url             = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.399 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.399 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.400 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.400 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.400 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.400 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.400 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.400 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.401 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.401 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.401 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.401 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.401 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.401 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.401 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.402 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.402 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.402 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.402 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.402 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.402 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.402 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.402 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.403 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.403 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.403 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.403 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.403 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.403 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.403 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.404 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.404 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.404 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.404 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.404 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.404 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.404 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.405 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.405 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.405 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.405 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.405 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.405 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.405 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.406 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.406 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.406 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.406 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.406 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.406 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.407 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.407 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.407 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.407 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.407 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.407 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.407 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.408 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.408 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.408 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.408 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.408 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.408 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.409 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.409 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.409 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.409 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.409 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.409 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.409 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.410 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.410 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.410 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.410 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.410 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.410 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.410 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.411 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.411 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.411 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.411 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.411 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.411 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.411 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.412 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.412 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.412 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.412 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.412 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.412 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.413 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.413 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.413 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.413 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.413 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.413 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.413 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.414 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.414 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.414 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.414 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.414 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.414 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.414 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.415 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.415 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.415 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.415 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.415 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.416 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.416 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.416 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.416 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.416 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.416 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.416 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.417 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.417 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.417 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.417 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.417 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.417 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.418 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.418 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.418 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.418 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.418 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.418 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.418 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.419 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.419 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.419 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.419 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.419 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.419 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.419 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.420 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.420 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.420 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.420 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.420 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.420 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.420 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.421 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.421 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.421 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.421 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.421 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.421 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.421 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.422 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.422 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.422 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.422 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.422 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.422 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.423 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.423 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] vnc.novncproxy_base_url        = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.423 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.423 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.423 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.423 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] vnc.server_proxyclient_address = 192.168.122.101 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.424 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.424 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.424 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.424 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.424 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.424 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.425 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.425 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.425 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.425 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.425 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.425 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.426 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.426 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.426 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.426 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.426 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.426 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.427 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.427 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.427 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.427 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.427 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.427 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.427 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.428 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.428 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.428 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.428 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.428 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.428 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.428 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.429 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.429 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.429 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.429 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.429 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.429 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.430 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.430 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.430 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.430 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.430 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.430 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.430 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.431 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.431 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.431 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.431 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.431 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.431 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.431 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.432 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.432 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.432 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.432 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.432 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.432 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.433 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.433 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.433 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.433 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.433 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.433 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.433 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.434 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.434 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.434 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.434 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.434 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.434 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.434 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.435 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.435 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.435 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.435 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.435 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.435 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.435 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.436 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.436 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.436 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.436 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.436 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.437 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.437 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.437 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.437 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.438 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.438 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.438 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] oslo_limit.auth_url            = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.438 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.438 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.438 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.438 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.439 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.439 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.439 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.439 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.439 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.439 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.439 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.440 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.440 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.440 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.440 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.440 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.440 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.440 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.440 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.441 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.441 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.441 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.441 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.441 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.441 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.442 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.442 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.442 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.442 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.442 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.442 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.442 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.443 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.443 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.443 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.443 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.443 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.443 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.443 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.444 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.444 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.444 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.444 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.444 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.444 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.444 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.445 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.445 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.445 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.445 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.445 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.445 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.446 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.446 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.446 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.446 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.446 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.446 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.447 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.447 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.447 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.447 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.447 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.447 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.448 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.448 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.448 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.448 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.448 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.448 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.448 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.449 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.449 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.449 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.449 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.449 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.449 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.449 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.450 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.450 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.450 230014 INFO nova.service [-] Starting compute node (version 27.5.2-0.20250829104910.6f8decf.el9)
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.463 230014 INFO nova.virt.node [None req-7781f489-90a3-45f2-98f2-bf7ccb5c1239 - - - - - -] Determined node identity 1b7b0f22-dba8-42a8-9de3-763c9152946e from /var/lib/nova/compute_id
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.464 230014 DEBUG nova.virt.libvirt.host [None req-7781f489-90a3-45f2-98f2-bf7ccb5c1239 - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.465 230014 DEBUG nova.virt.libvirt.host [None req-7781f489-90a3-45f2-98f2-bf7ccb5c1239 - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.465 230014 DEBUG nova.virt.libvirt.host [None req-7781f489-90a3-45f2-98f2-bf7ccb5c1239 - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.465 230014 DEBUG nova.virt.libvirt.host [None req-7781f489-90a3-45f2-98f2-bf7ccb5c1239 - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.477 230014 DEBUG nova.virt.libvirt.host [None req-7781f489-90a3-45f2-98f2-bf7ccb5c1239 - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7f4597019b20> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.479 230014 DEBUG nova.virt.libvirt.host [None req-7781f489-90a3-45f2-98f2-bf7ccb5c1239 - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7f4597019b20> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.480 230014 INFO nova.virt.libvirt.driver [None req-7781f489-90a3-45f2-98f2-bf7ccb5c1239 - - - - - -] Connection event '1' reason 'None'
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.488 230014 INFO nova.virt.libvirt.host [None req-7781f489-90a3-45f2-98f2-bf7ccb5c1239 - - - - - -] Libvirt host capabilities <capabilities>
Nov 24 09:47:09 compute-1 nova_compute[230010]: 
Nov 24 09:47:09 compute-1 nova_compute[230010]:   <host>
Nov 24 09:47:09 compute-1 nova_compute[230010]:     <uuid>719139db-46ba-4050-a77b-5fa732a73807</uuid>
Nov 24 09:47:09 compute-1 nova_compute[230010]:     <cpu>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <arch>x86_64</arch>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model>EPYC-Rome-v4</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <vendor>AMD</vendor>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <microcode version='16777317'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <signature family='23' model='49' stepping='0'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <topology sockets='8' dies='1' clusters='1' cores='1' threads='1'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <maxphysaddr mode='emulate' bits='40'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <feature name='x2apic'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <feature name='tsc-deadline'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <feature name='osxsave'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <feature name='hypervisor'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <feature name='tsc_adjust'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <feature name='spec-ctrl'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <feature name='stibp'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <feature name='arch-capabilities'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <feature name='ssbd'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <feature name='cmp_legacy'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <feature name='topoext'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <feature name='virt-ssbd'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <feature name='lbrv'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <feature name='tsc-scale'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <feature name='vmcb-clean'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <feature name='pause-filter'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <feature name='pfthreshold'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <feature name='svme-addr-chk'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <feature name='rdctl-no'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <feature name='skip-l1dfl-vmentry'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <feature name='mds-no'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <feature name='pschange-mc-no'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <pages unit='KiB' size='4'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <pages unit='KiB' size='2048'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <pages unit='KiB' size='1048576'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:     </cpu>
Nov 24 09:47:09 compute-1 nova_compute[230010]:     <power_management>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <suspend_mem/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:     </power_management>
Nov 24 09:47:09 compute-1 nova_compute[230010]:     <iommu support='no'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:     <migration_features>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <live/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <uri_transports>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <uri_transport>tcp</uri_transport>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <uri_transport>rdma</uri_transport>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </uri_transports>
Nov 24 09:47:09 compute-1 nova_compute[230010]:     </migration_features>
Nov 24 09:47:09 compute-1 nova_compute[230010]:     <topology>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <cells num='1'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <cell id='0'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:           <memory unit='KiB'>7864320</memory>
Nov 24 09:47:09 compute-1 nova_compute[230010]:           <pages unit='KiB' size='4'>1966080</pages>
Nov 24 09:47:09 compute-1 nova_compute[230010]:           <pages unit='KiB' size='2048'>0</pages>
Nov 24 09:47:09 compute-1 nova_compute[230010]:           <pages unit='KiB' size='1048576'>0</pages>
Nov 24 09:47:09 compute-1 nova_compute[230010]:           <distances>
Nov 24 09:47:09 compute-1 nova_compute[230010]:             <sibling id='0' value='10'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:           </distances>
Nov 24 09:47:09 compute-1 nova_compute[230010]:           <cpus num='8'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:             <cpu id='0' socket_id='0' die_id='0' cluster_id='65535' core_id='0' siblings='0'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:             <cpu id='1' socket_id='1' die_id='1' cluster_id='65535' core_id='0' siblings='1'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:             <cpu id='2' socket_id='2' die_id='2' cluster_id='65535' core_id='0' siblings='2'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:             <cpu id='3' socket_id='3' die_id='3' cluster_id='65535' core_id='0' siblings='3'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:             <cpu id='4' socket_id='4' die_id='4' cluster_id='65535' core_id='0' siblings='4'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:             <cpu id='5' socket_id='5' die_id='5' cluster_id='65535' core_id='0' siblings='5'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:             <cpu id='6' socket_id='6' die_id='6' cluster_id='65535' core_id='0' siblings='6'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:             <cpu id='7' socket_id='7' die_id='7' cluster_id='65535' core_id='0' siblings='7'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:           </cpus>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         </cell>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </cells>
Nov 24 09:47:09 compute-1 nova_compute[230010]:     </topology>
Nov 24 09:47:09 compute-1 nova_compute[230010]:     <cache>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <bank id='0' level='2' type='both' size='512' unit='KiB' cpus='0'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <bank id='1' level='2' type='both' size='512' unit='KiB' cpus='1'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <bank id='2' level='2' type='both' size='512' unit='KiB' cpus='2'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <bank id='3' level='2' type='both' size='512' unit='KiB' cpus='3'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <bank id='4' level='2' type='both' size='512' unit='KiB' cpus='4'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <bank id='5' level='2' type='both' size='512' unit='KiB' cpus='5'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <bank id='6' level='2' type='both' size='512' unit='KiB' cpus='6'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <bank id='7' level='2' type='both' size='512' unit='KiB' cpus='7'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <bank id='0' level='3' type='both' size='16' unit='MiB' cpus='0'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <bank id='1' level='3' type='both' size='16' unit='MiB' cpus='1'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <bank id='2' level='3' type='both' size='16' unit='MiB' cpus='2'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <bank id='3' level='3' type='both' size='16' unit='MiB' cpus='3'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <bank id='4' level='3' type='both' size='16' unit='MiB' cpus='4'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <bank id='5' level='3' type='both' size='16' unit='MiB' cpus='5'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <bank id='6' level='3' type='both' size='16' unit='MiB' cpus='6'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <bank id='7' level='3' type='both' size='16' unit='MiB' cpus='7'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:     </cache>
Nov 24 09:47:09 compute-1 nova_compute[230010]:     <secmodel>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model>selinux</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <doi>0</doi>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <baselabel type='kvm'>system_u:system_r:svirt_t:s0</baselabel>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <baselabel type='qemu'>system_u:system_r:svirt_tcg_t:s0</baselabel>
Nov 24 09:47:09 compute-1 nova_compute[230010]:     </secmodel>
Nov 24 09:47:09 compute-1 nova_compute[230010]:     <secmodel>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model>dac</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <doi>0</doi>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <baselabel type='kvm'>+107:+107</baselabel>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <baselabel type='qemu'>+107:+107</baselabel>
Nov 24 09:47:09 compute-1 nova_compute[230010]:     </secmodel>
Nov 24 09:47:09 compute-1 nova_compute[230010]:   </host>
Nov 24 09:47:09 compute-1 nova_compute[230010]: 
Nov 24 09:47:09 compute-1 nova_compute[230010]:   <guest>
Nov 24 09:47:09 compute-1 nova_compute[230010]:     <os_type>hvm</os_type>
Nov 24 09:47:09 compute-1 nova_compute[230010]:     <arch name='i686'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <wordsize>32</wordsize>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <domain type='qemu'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <domain type='kvm'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:     </arch>
Nov 24 09:47:09 compute-1 nova_compute[230010]:     <features>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <pae/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <nonpae/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <acpi default='on' toggle='yes'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <apic default='on' toggle='no'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <cpuselection/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <deviceboot/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <disksnapshot default='on' toggle='no'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <externalSnapshot/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:     </features>
Nov 24 09:47:09 compute-1 nova_compute[230010]:   </guest>
Nov 24 09:47:09 compute-1 nova_compute[230010]: 
Nov 24 09:47:09 compute-1 nova_compute[230010]:   <guest>
Nov 24 09:47:09 compute-1 nova_compute[230010]:     <os_type>hvm</os_type>
Nov 24 09:47:09 compute-1 nova_compute[230010]:     <arch name='x86_64'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <wordsize>64</wordsize>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <domain type='qemu'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <domain type='kvm'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:     </arch>
Nov 24 09:47:09 compute-1 nova_compute[230010]:     <features>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <acpi default='on' toggle='yes'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <apic default='on' toggle='no'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <cpuselection/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <deviceboot/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <disksnapshot default='on' toggle='no'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <externalSnapshot/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:     </features>
Nov 24 09:47:09 compute-1 nova_compute[230010]:   </guest>
Nov 24 09:47:09 compute-1 nova_compute[230010]: 
Nov 24 09:47:09 compute-1 nova_compute[230010]: </capabilities>
Nov 24 09:47:09 compute-1 nova_compute[230010]: 
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.493 230014 DEBUG nova.virt.libvirt.host [None req-7781f489-90a3-45f2-98f2-bf7ccb5c1239 - - - - - -] Getting domain capabilities for i686 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.495 230014 DEBUG nova.virt.libvirt.volume.mount [None req-7781f489-90a3-45f2-98f2-bf7ccb5c1239 - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.498 230014 DEBUG nova.virt.libvirt.host [None req-7781f489-90a3-45f2-98f2-bf7ccb5c1239 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35:
Nov 24 09:47:09 compute-1 nova_compute[230010]: <domainCapabilities>
Nov 24 09:47:09 compute-1 nova_compute[230010]:   <path>/usr/libexec/qemu-kvm</path>
Nov 24 09:47:09 compute-1 nova_compute[230010]:   <domain>kvm</domain>
Nov 24 09:47:09 compute-1 nova_compute[230010]:   <machine>pc-q35-rhel9.8.0</machine>
Nov 24 09:47:09 compute-1 nova_compute[230010]:   <arch>i686</arch>
Nov 24 09:47:09 compute-1 nova_compute[230010]:   <vcpu max='4096'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:   <iothreads supported='yes'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:   <os supported='yes'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:     <enum name='firmware'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:     <loader supported='yes'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <enum name='type'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>rom</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>pflash</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </enum>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <enum name='readonly'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>yes</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>no</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </enum>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <enum name='secure'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>no</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </enum>
Nov 24 09:47:09 compute-1 nova_compute[230010]:     </loader>
Nov 24 09:47:09 compute-1 nova_compute[230010]:   </os>
Nov 24 09:47:09 compute-1 nova_compute[230010]:   <cpu>
Nov 24 09:47:09 compute-1 nova_compute[230010]:     <mode name='host-passthrough' supported='yes'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <enum name='hostPassthroughMigratable'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>on</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>off</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </enum>
Nov 24 09:47:09 compute-1 nova_compute[230010]:     </mode>
Nov 24 09:47:09 compute-1 nova_compute[230010]:     <mode name='maximum' supported='yes'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <enum name='maximumMigratable'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>on</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>off</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </enum>
Nov 24 09:47:09 compute-1 nova_compute[230010]:     </mode>
Nov 24 09:47:09 compute-1 nova_compute[230010]:     <mode name='host-model' supported='yes'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model fallback='forbid'>EPYC-Rome</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <vendor>AMD</vendor>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <maxphysaddr mode='passthrough' limit='40'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <feature policy='require' name='x2apic'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <feature policy='require' name='tsc-deadline'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <feature policy='require' name='hypervisor'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <feature policy='require' name='tsc_adjust'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <feature policy='require' name='spec-ctrl'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <feature policy='require' name='stibp'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <feature policy='require' name='ssbd'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <feature policy='require' name='cmp_legacy'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <feature policy='require' name='overflow-recov'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <feature policy='require' name='succor'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <feature policy='require' name='ibrs'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <feature policy='require' name='amd-ssbd'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <feature policy='require' name='virt-ssbd'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <feature policy='require' name='lbrv'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <feature policy='require' name='tsc-scale'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <feature policy='require' name='vmcb-clean'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <feature policy='require' name='flushbyasid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <feature policy='require' name='pause-filter'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <feature policy='require' name='pfthreshold'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <feature policy='require' name='svme-addr-chk'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <feature policy='require' name='lfence-always-serializing'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <feature policy='disable' name='xsaves'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:     </mode>
Nov 24 09:47:09 compute-1 nova_compute[230010]:     <mode name='custom' supported='yes'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='Broadwell'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='erms'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='hle'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='rtm'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='Broadwell-IBRS'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='erms'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='hle'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='rtm'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='Broadwell-noTSX'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='erms'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='Broadwell-noTSX-IBRS'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='erms'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='Broadwell-v1'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='erms'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='hle'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='rtm'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='Broadwell-v2'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='erms'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='Broadwell-v3'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='erms'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='hle'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='rtm'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='Broadwell-v4'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='erms'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='Cascadelake-Server'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512bw'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512cd'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512dq'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512f'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vl'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vnni'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='erms'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='hle'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pku'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='rtm'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='Cascadelake-Server-noTSX'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512bw'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512cd'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512dq'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512f'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vl'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vnni'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='erms'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='ibrs-all'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pku'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='Cascadelake-Server-v1'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512bw'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512cd'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512dq'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512f'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vl'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vnni'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='erms'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='hle'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pku'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='rtm'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='Cascadelake-Server-v2'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512bw'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512cd'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512dq'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512f'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vl'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vnni'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='erms'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='hle'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='ibrs-all'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pku'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='rtm'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='Cascadelake-Server-v3'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512bw'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512cd'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512dq'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512f'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vl'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vnni'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='erms'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='ibrs-all'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pku'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='Cascadelake-Server-v4'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512bw'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512cd'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512dq'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512f'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vl'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vnni'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='erms'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='ibrs-all'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pku'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='Cascadelake-Server-v5'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512bw'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512cd'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512dq'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512f'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vl'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vnni'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='erms'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='ibrs-all'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pku'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='xsaves'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='Cooperlake'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512-bf16'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512bw'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512cd'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512dq'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512f'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vl'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vnni'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='erms'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='hle'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='ibrs-all'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pku'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='rtm'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='taa-no'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='Cooperlake-v1'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512-bf16'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512bw'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512cd'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512dq'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512f'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vl'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vnni'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='erms'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='hle'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='ibrs-all'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pku'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='rtm'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='taa-no'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='Cooperlake-v2'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512-bf16'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512bw'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512cd'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512dq'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512f'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vl'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vnni'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='erms'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='hle'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='ibrs-all'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pku'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='rtm'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='taa-no'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='xsaves'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='Denverton'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='erms'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='mpx'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='Denverton-v1'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='erms'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='mpx'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='Denverton-v2'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='erms'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='Denverton-v3'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='erms'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='xsaves'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='Dhyana-v2'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='xsaves'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='EPYC-Genoa'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='amd-psfd'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='auto-ibrs'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512-bf16'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512-vpopcntdq'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512bitalg'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512bw'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512cd'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512dq'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512f'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512ifma'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vbmi'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vbmi2'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vl'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vnni'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='erms'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='fsrm'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='gfni'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='la57'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='no-nested-data-bp'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='null-sel-clr-base'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pku'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='stibp-always-on'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='vaes'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='vpclmulqdq'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='xsaves'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='EPYC-Genoa-v1'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='amd-psfd'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='auto-ibrs'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512-bf16'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512-vpopcntdq'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512bitalg'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512bw'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512cd'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512dq'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512f'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512ifma'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vbmi'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vbmi2'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vl'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vnni'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='erms'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='fsrm'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='gfni'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='la57'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='no-nested-data-bp'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='null-sel-clr-base'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pku'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='stibp-always-on'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='vaes'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='vpclmulqdq'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='xsaves'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='EPYC-Milan'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='erms'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='fsrm'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pku'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='xsaves'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='EPYC-Milan-v1'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='erms'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='fsrm'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pku'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='xsaves'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='EPYC-Milan-v2'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='amd-psfd'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='erms'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='fsrm'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='no-nested-data-bp'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='null-sel-clr-base'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pku'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='stibp-always-on'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='vaes'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='vpclmulqdq'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='xsaves'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='EPYC-Rome'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='xsaves'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='EPYC-Rome-v1'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='xsaves'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='EPYC-Rome-v2'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='xsaves'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='EPYC-Rome-v3'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='xsaves'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='EPYC-v3'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='xsaves'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='EPYC-v4'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='xsaves'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='GraniteRapids'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='amx-bf16'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='amx-fp16'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='amx-int8'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='amx-tile'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx-vnni'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512-bf16'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512-fp16'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512-vpopcntdq'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512bitalg'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512bw'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512cd'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512dq'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512f'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512ifma'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vbmi'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vbmi2'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vl'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vnni'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='bus-lock-detect'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='erms'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='fbsdp-no'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='fsrc'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='fsrm'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='fsrs'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='fzrm'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='gfni'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='hle'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='ibrs-all'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='la57'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='mcdt-no'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pbrsb-no'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pku'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='prefetchiti'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='psdp-no'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='rtm'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='sbdr-ssdp-no'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='serialize'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='taa-no'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='tsx-ldtrk'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='vaes'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='vpclmulqdq'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='xfd'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='xsaves'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='GraniteRapids-v1'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='amx-bf16'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='amx-fp16'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='amx-int8'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='amx-tile'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx-vnni'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512-bf16'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512-fp16'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512-vpopcntdq'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512bitalg'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512bw'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512cd'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512dq'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512f'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512ifma'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vbmi'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vbmi2'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vl'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vnni'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='bus-lock-detect'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='erms'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='fbsdp-no'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='fsrc'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='fsrm'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='fsrs'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='fzrm'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='gfni'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='hle'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='ibrs-all'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='la57'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='mcdt-no'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pbrsb-no'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pku'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='prefetchiti'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='psdp-no'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='rtm'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='sbdr-ssdp-no'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='serialize'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='taa-no'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='tsx-ldtrk'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='vaes'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='vpclmulqdq'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='xfd'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='xsaves'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='GraniteRapids-v2'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='amx-bf16'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='amx-fp16'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='amx-int8'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='amx-tile'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx-vnni'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx10'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx10-128'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx10-256'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx10-512'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512-bf16'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512-fp16'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512-vpopcntdq'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512bitalg'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512bw'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512cd'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512dq'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512f'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512ifma'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vbmi'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vbmi2'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vl'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vnni'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='bus-lock-detect'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='cldemote'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='erms'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='fbsdp-no'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='fsrc'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='fsrm'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='fsrs'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='fzrm'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='gfni'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='hle'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='ibrs-all'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='la57'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='mcdt-no'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='movdir64b'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='movdiri'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pbrsb-no'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pku'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='prefetchiti'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='psdp-no'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='rtm'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='sbdr-ssdp-no'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='serialize'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='ss'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='taa-no'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='tsx-ldtrk'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='vaes'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='vpclmulqdq'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='xfd'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='xsaves'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='Haswell'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='erms'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='hle'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='rtm'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='Haswell-IBRS'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='erms'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='hle'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='rtm'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='Haswell-noTSX'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='erms'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='Haswell-noTSX-IBRS'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='erms'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='Haswell-v1'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='erms'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='hle'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='rtm'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='Haswell-v2'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='erms'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='Haswell-v3'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='erms'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='hle'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='rtm'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='Haswell-v4'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='erms'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='Icelake-Server'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512-vpopcntdq'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512bitalg'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512bw'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512cd'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512dq'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512f'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vbmi'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vbmi2'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vl'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vnni'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='erms'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='gfni'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='hle'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='la57'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pku'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='rtm'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='vaes'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='vpclmulqdq'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='Icelake-Server-noTSX'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512-vpopcntdq'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512bitalg'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512bw'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512cd'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512dq'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512f'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vbmi'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vbmi2'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vl'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vnni'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='erms'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='gfni'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='la57'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pku'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='vaes'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='vpclmulqdq'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='Icelake-Server-v1'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512-vpopcntdq'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512bitalg'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512bw'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512cd'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512dq'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512f'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vbmi'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vbmi2'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vl'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vnni'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='erms'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='gfni'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='hle'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='la57'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pku'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='rtm'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='vaes'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='vpclmulqdq'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='Icelake-Server-v2'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512-vpopcntdq'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512bitalg'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512bw'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512cd'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512dq'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512f'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vbmi'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vbmi2'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vl'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vnni'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='erms'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='gfni'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='la57'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pku'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='vaes'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='vpclmulqdq'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='Icelake-Server-v3'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512-vpopcntdq'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512bitalg'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512bw'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512cd'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512dq'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512f'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vbmi'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vbmi2'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vl'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vnni'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='erms'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='gfni'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='ibrs-all'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='la57'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pku'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='taa-no'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='vaes'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='vpclmulqdq'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='Icelake-Server-v4'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512-vpopcntdq'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512bitalg'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512bw'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512cd'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512dq'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512f'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512ifma'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vbmi'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vbmi2'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vl'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vnni'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='erms'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='fsrm'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='gfni'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='ibrs-all'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='la57'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pku'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='taa-no'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='vaes'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='vpclmulqdq'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='Icelake-Server-v5'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512-vpopcntdq'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512bitalg'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512bw'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512cd'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512dq'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512f'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512ifma'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vbmi'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vbmi2'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vl'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vnni'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='erms'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='fsrm'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='gfni'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='ibrs-all'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='la57'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pku'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='taa-no'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='vaes'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='vpclmulqdq'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='xsaves'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='Icelake-Server-v6'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512-vpopcntdq'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512bitalg'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512bw'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512cd'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512dq'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512f'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512ifma'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vbmi'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vbmi2'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vl'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vnni'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='erms'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='fsrm'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='gfni'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='ibrs-all'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='la57'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pku'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='taa-no'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='vaes'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='vpclmulqdq'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='xsaves'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='Icelake-Server-v7'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512-vpopcntdq'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512bitalg'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512bw'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512cd'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512dq'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512f'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512ifma'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vbmi'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vbmi2'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vl'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vnni'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='erms'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='fsrm'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='gfni'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='hle'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='ibrs-all'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='la57'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pku'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='rtm'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='taa-no'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='vaes'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='vpclmulqdq'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='xsaves'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='IvyBridge'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='erms'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='IvyBridge-IBRS'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='erms'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='IvyBridge-v1'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='erms'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='IvyBridge-v2'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='erms'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='KnightsMill'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512-4fmaps'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512-4vnniw'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512-vpopcntdq'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512cd'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512er'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512f'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512pf'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='erms'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='ss'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='KnightsMill-v1'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512-4fmaps'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512-4vnniw'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512-vpopcntdq'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512cd'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512er'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512f'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512pf'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='erms'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='ss'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='Opteron_G4'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='fma4'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='xop'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='Opteron_G4-v1'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='fma4'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='xop'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='Opteron_G5'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='fma4'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='tbm'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='xop'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='Opteron_G5-v1'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='fma4'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='tbm'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='xop'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='SapphireRapids'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='amx-bf16'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='amx-int8'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='amx-tile'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx-vnni'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512-bf16'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512-fp16'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512-vpopcntdq'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512bitalg'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512bw'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512cd'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512dq'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512f'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512ifma'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vbmi'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vbmi2'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vl'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vnni'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='bus-lock-detect'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='erms'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='fsrc'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='fsrm'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='fsrs'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='fzrm'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='gfni'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='hle'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='ibrs-all'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='la57'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pku'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='rtm'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='serialize'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='taa-no'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='tsx-ldtrk'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='vaes'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='vpclmulqdq'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='xfd'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='xsaves'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='SapphireRapids-v1'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='amx-bf16'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='amx-int8'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='amx-tile'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx-vnni'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512-bf16'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512-fp16'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512-vpopcntdq'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512bitalg'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512bw'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512cd'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512dq'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512f'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512ifma'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vbmi'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vbmi2'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vl'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vnni'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='bus-lock-detect'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='erms'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='fsrc'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='fsrm'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='fsrs'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='fzrm'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='gfni'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='hle'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='ibrs-all'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='la57'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pku'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='rtm'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='serialize'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='taa-no'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='tsx-ldtrk'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='vaes'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='vpclmulqdq'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='xfd'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='xsaves'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='SapphireRapids-v2'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='amx-bf16'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='amx-int8'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='amx-tile'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx-vnni'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512-bf16'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512-fp16'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512-vpopcntdq'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512bitalg'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512bw'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512cd'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512dq'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512f'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512ifma'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vbmi'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vbmi2'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vl'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vnni'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='bus-lock-detect'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='erms'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='fbsdp-no'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='fsrc'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='fsrm'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='fsrs'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='fzrm'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='gfni'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='hle'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='ibrs-all'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='la57'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pku'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='psdp-no'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='rtm'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='sbdr-ssdp-no'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='serialize'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='taa-no'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='tsx-ldtrk'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='vaes'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='vpclmulqdq'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='xfd'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='xsaves'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='SapphireRapids-v3'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='amx-bf16'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='amx-int8'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='amx-tile'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx-vnni'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512-bf16'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512-fp16'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512-vpopcntdq'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512bitalg'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512bw'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512cd'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512dq'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512f'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512ifma'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vbmi'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vbmi2'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vl'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vnni'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='bus-lock-detect'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='cldemote'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='erms'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='fbsdp-no'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='fsrc'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='fsrm'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='fsrs'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='fzrm'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='gfni'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='hle'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='ibrs-all'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='la57'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='movdir64b'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='movdiri'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pku'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='psdp-no'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='rtm'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='sbdr-ssdp-no'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='serialize'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='ss'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='taa-no'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='tsx-ldtrk'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='vaes'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='vpclmulqdq'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='xfd'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='xsaves'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='SierraForest'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx-ifma'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx-ne-convert'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx-vnni'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx-vnni-int8'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='bus-lock-detect'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='cmpccxadd'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='erms'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='fbsdp-no'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='fsrm'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='fsrs'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='gfni'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='ibrs-all'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='mcdt-no'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pbrsb-no'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pku'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='psdp-no'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='sbdr-ssdp-no'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='serialize'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='vaes'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='vpclmulqdq'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='xsaves'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='SierraForest-v1'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx-ifma'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx-ne-convert'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx-vnni'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx-vnni-int8'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='bus-lock-detect'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='cmpccxadd'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='erms'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='fbsdp-no'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='fsrm'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='fsrs'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='gfni'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='ibrs-all'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='mcdt-no'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pbrsb-no'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pku'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='psdp-no'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='sbdr-ssdp-no'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='serialize'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='vaes'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='vpclmulqdq'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='xsaves'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='Skylake-Client'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='erms'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='hle'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='rtm'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='Skylake-Client-IBRS'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='erms'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='hle'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='rtm'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='erms'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='Skylake-Client-v1'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='erms'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='hle'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='rtm'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='Skylake-Client-v2'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='erms'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='hle'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='rtm'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='Skylake-Client-v3'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='erms'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='Skylake-Client-v4'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='erms'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='xsaves'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='Skylake-Server'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512bw'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512cd'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512dq'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512f'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vl'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='erms'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='hle'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pku'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='rtm'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='Skylake-Server-IBRS'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512bw'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512cd'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512dq'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512f'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vl'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='erms'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='hle'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pku'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='rtm'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512bw'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512cd'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512dq'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512f'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vl'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='erms'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pku'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='Skylake-Server-v1'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512bw'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512cd'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512dq'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512f'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vl'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='erms'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='hle'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pku'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='rtm'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='Skylake-Server-v2'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512bw'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512cd'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512dq'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512f'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vl'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='erms'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='hle'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pku'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='rtm'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='Skylake-Server-v3'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512bw'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512cd'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512dq'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512f'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vl'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='erms'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pku'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='Skylake-Server-v4'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512bw'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512cd'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512dq'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512f'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vl'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='erms'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pku'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='Skylake-Server-v5'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512bw'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512cd'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512dq'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512f'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vl'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='erms'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pku'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='xsaves'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='Snowridge'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='cldemote'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='core-capability'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='erms'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='gfni'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='movdir64b'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='movdiri'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='mpx'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='split-lock-detect'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='Snowridge-v1'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='cldemote'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='core-capability'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='erms'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='gfni'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='movdir64b'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='movdiri'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='mpx'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='split-lock-detect'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='Snowridge-v2'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='cldemote'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='core-capability'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='erms'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='gfni'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='movdir64b'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='movdiri'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='split-lock-detect'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='Snowridge-v3'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='cldemote'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='core-capability'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='erms'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='gfni'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='movdir64b'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='movdiri'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='split-lock-detect'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='xsaves'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='Snowridge-v4'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='cldemote'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='erms'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='gfni'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='movdir64b'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='movdiri'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='xsaves'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='athlon'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='3dnow'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='3dnowext'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='athlon-v1'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='3dnow'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='3dnowext'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='core2duo'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='ss'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='core2duo-v1'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='ss'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='coreduo'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='ss'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='coreduo-v1'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='ss'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='n270'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='ss'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='n270-v1'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='ss'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='phenom'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='3dnow'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='3dnowext'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='phenom-v1'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='3dnow'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='3dnowext'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:     </mode>
Nov 24 09:47:09 compute-1 nova_compute[230010]:   </cpu>
Nov 24 09:47:09 compute-1 nova_compute[230010]:   <memoryBacking supported='yes'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:     <enum name='sourceType'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <value>file</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <value>anonymous</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <value>memfd</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:     </enum>
Nov 24 09:47:09 compute-1 nova_compute[230010]:   </memoryBacking>
Nov 24 09:47:09 compute-1 nova_compute[230010]:   <devices>
Nov 24 09:47:09 compute-1 nova_compute[230010]:     <disk supported='yes'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <enum name='diskDevice'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>disk</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>cdrom</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>floppy</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>lun</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </enum>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <enum name='bus'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>fdc</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>scsi</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>virtio</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>usb</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>sata</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </enum>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <enum name='model'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>virtio</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>virtio-transitional</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>virtio-non-transitional</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </enum>
Nov 24 09:47:09 compute-1 nova_compute[230010]:     </disk>
Nov 24 09:47:09 compute-1 nova_compute[230010]:     <graphics supported='yes'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <enum name='type'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>vnc</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>egl-headless</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>dbus</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </enum>
Nov 24 09:47:09 compute-1 nova_compute[230010]:     </graphics>
Nov 24 09:47:09 compute-1 nova_compute[230010]:     <video supported='yes'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <enum name='modelType'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>vga</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>cirrus</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>virtio</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>none</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>bochs</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>ramfb</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </enum>
Nov 24 09:47:09 compute-1 nova_compute[230010]:     </video>
Nov 24 09:47:09 compute-1 nova_compute[230010]:     <hostdev supported='yes'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <enum name='mode'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>subsystem</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </enum>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <enum name='startupPolicy'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>default</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>mandatory</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>requisite</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>optional</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </enum>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <enum name='subsysType'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>usb</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>pci</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>scsi</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </enum>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <enum name='capsType'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <enum name='pciBackend'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:     </hostdev>
Nov 24 09:47:09 compute-1 nova_compute[230010]:     <rng supported='yes'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <enum name='model'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>virtio</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>virtio-transitional</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>virtio-non-transitional</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </enum>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <enum name='backendModel'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>random</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>egd</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>builtin</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </enum>
Nov 24 09:47:09 compute-1 nova_compute[230010]:     </rng>
Nov 24 09:47:09 compute-1 nova_compute[230010]:     <filesystem supported='yes'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <enum name='driverType'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>path</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>handle</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>virtiofs</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </enum>
Nov 24 09:47:09 compute-1 nova_compute[230010]:     </filesystem>
Nov 24 09:47:09 compute-1 nova_compute[230010]:     <tpm supported='yes'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <enum name='model'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>tpm-tis</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>tpm-crb</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </enum>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <enum name='backendModel'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>emulator</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>external</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </enum>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <enum name='backendVersion'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>2.0</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </enum>
Nov 24 09:47:09 compute-1 nova_compute[230010]:     </tpm>
Nov 24 09:47:09 compute-1 nova_compute[230010]:     <redirdev supported='yes'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <enum name='bus'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>usb</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </enum>
Nov 24 09:47:09 compute-1 nova_compute[230010]:     </redirdev>
Nov 24 09:47:09 compute-1 nova_compute[230010]:     <channel supported='yes'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <enum name='type'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>pty</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>unix</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </enum>
Nov 24 09:47:09 compute-1 nova_compute[230010]:     </channel>
Nov 24 09:47:09 compute-1 nova_compute[230010]:     <crypto supported='yes'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <enum name='model'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <enum name='type'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>qemu</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </enum>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <enum name='backendModel'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>builtin</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </enum>
Nov 24 09:47:09 compute-1 nova_compute[230010]:     </crypto>
Nov 24 09:47:09 compute-1 nova_compute[230010]:     <interface supported='yes'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <enum name='backendType'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>default</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>passt</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </enum>
Nov 24 09:47:09 compute-1 nova_compute[230010]:     </interface>
Nov 24 09:47:09 compute-1 nova_compute[230010]:     <panic supported='yes'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <enum name='model'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>isa</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>hyperv</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </enum>
Nov 24 09:47:09 compute-1 nova_compute[230010]:     </panic>
Nov 24 09:47:09 compute-1 nova_compute[230010]:     <console supported='yes'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <enum name='type'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>null</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>vc</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>pty</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>dev</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>file</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>pipe</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>stdio</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>udp</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>tcp</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>unix</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>qemu-vdagent</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>dbus</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </enum>
Nov 24 09:47:09 compute-1 nova_compute[230010]:     </console>
Nov 24 09:47:09 compute-1 nova_compute[230010]:   </devices>
Nov 24 09:47:09 compute-1 nova_compute[230010]:   <features>
Nov 24 09:47:09 compute-1 nova_compute[230010]:     <gic supported='no'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:     <vmcoreinfo supported='yes'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:     <genid supported='yes'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:     <backingStoreInput supported='yes'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:     <backup supported='yes'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:     <async-teardown supported='yes'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:     <ps2 supported='yes'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:     <sev supported='no'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:     <sgx supported='no'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:     <hyperv supported='yes'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <enum name='features'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>relaxed</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>vapic</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>spinlocks</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>vpindex</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>runtime</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>synic</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>stimer</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>reset</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>vendor_id</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>frequencies</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>reenlightenment</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>tlbflush</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>ipi</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>avic</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>emsr_bitmap</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>xmm_input</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </enum>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <defaults>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <spinlocks>4095</spinlocks>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <stimer_direct>on</stimer_direct>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <tlbflush_direct>on</tlbflush_direct>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <tlbflush_extended>on</tlbflush_extended>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <vendor_id>Linux KVM Hv</vendor_id>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </defaults>
Nov 24 09:47:09 compute-1 nova_compute[230010]:     </hyperv>
Nov 24 09:47:09 compute-1 nova_compute[230010]:     <launchSecurity supported='yes'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <enum name='sectype'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>tdx</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </enum>
Nov 24 09:47:09 compute-1 nova_compute[230010]:     </launchSecurity>
Nov 24 09:47:09 compute-1 nova_compute[230010]:   </features>
Nov 24 09:47:09 compute-1 nova_compute[230010]: </domainCapabilities>
Nov 24 09:47:09 compute-1 nova_compute[230010]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.504 230014 DEBUG nova.virt.libvirt.host [None req-7781f489-90a3-45f2-98f2-bf7ccb5c1239 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc:
Nov 24 09:47:09 compute-1 nova_compute[230010]: <domainCapabilities>
Nov 24 09:47:09 compute-1 nova_compute[230010]:   <path>/usr/libexec/qemu-kvm</path>
Nov 24 09:47:09 compute-1 nova_compute[230010]:   <domain>kvm</domain>
Nov 24 09:47:09 compute-1 nova_compute[230010]:   <machine>pc-i440fx-rhel7.6.0</machine>
Nov 24 09:47:09 compute-1 nova_compute[230010]:   <arch>i686</arch>
Nov 24 09:47:09 compute-1 nova_compute[230010]:   <vcpu max='240'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:   <iothreads supported='yes'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:   <os supported='yes'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:     <enum name='firmware'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:     <loader supported='yes'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <enum name='type'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>rom</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>pflash</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </enum>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <enum name='readonly'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>yes</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>no</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </enum>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <enum name='secure'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>no</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </enum>
Nov 24 09:47:09 compute-1 nova_compute[230010]:     </loader>
Nov 24 09:47:09 compute-1 nova_compute[230010]:   </os>
Nov 24 09:47:09 compute-1 nova_compute[230010]:   <cpu>
Nov 24 09:47:09 compute-1 nova_compute[230010]:     <mode name='host-passthrough' supported='yes'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <enum name='hostPassthroughMigratable'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>on</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>off</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </enum>
Nov 24 09:47:09 compute-1 nova_compute[230010]:     </mode>
Nov 24 09:47:09 compute-1 nova_compute[230010]:     <mode name='maximum' supported='yes'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <enum name='maximumMigratable'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>on</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>off</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </enum>
Nov 24 09:47:09 compute-1 nova_compute[230010]:     </mode>
Nov 24 09:47:09 compute-1 nova_compute[230010]:     <mode name='host-model' supported='yes'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model fallback='forbid'>EPYC-Rome</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <vendor>AMD</vendor>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <maxphysaddr mode='passthrough' limit='40'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <feature policy='require' name='x2apic'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <feature policy='require' name='tsc-deadline'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <feature policy='require' name='hypervisor'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <feature policy='require' name='tsc_adjust'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <feature policy='require' name='spec-ctrl'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <feature policy='require' name='stibp'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <feature policy='require' name='ssbd'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <feature policy='require' name='cmp_legacy'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <feature policy='require' name='overflow-recov'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <feature policy='require' name='succor'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <feature policy='require' name='ibrs'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <feature policy='require' name='amd-ssbd'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <feature policy='require' name='virt-ssbd'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <feature policy='require' name='lbrv'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <feature policy='require' name='tsc-scale'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <feature policy='require' name='vmcb-clean'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <feature policy='require' name='flushbyasid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <feature policy='require' name='pause-filter'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <feature policy='require' name='pfthreshold'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <feature policy='require' name='svme-addr-chk'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <feature policy='require' name='lfence-always-serializing'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <feature policy='disable' name='xsaves'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:     </mode>
Nov 24 09:47:09 compute-1 nova_compute[230010]:     <mode name='custom' supported='yes'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='Broadwell'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='erms'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='hle'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='rtm'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='Broadwell-IBRS'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='erms'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='hle'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='rtm'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='Broadwell-noTSX'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='erms'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='Broadwell-noTSX-IBRS'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='erms'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='Broadwell-v1'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='erms'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='hle'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='rtm'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='Broadwell-v2'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='erms'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='Broadwell-v3'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='erms'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='hle'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='rtm'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='Broadwell-v4'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='erms'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='Cascadelake-Server'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512bw'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512cd'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512dq'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512f'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vl'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vnni'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='erms'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='hle'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pku'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='rtm'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='Cascadelake-Server-noTSX'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512bw'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512cd'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512dq'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512f'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vl'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vnni'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='erms'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='ibrs-all'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pku'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='Cascadelake-Server-v1'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512bw'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512cd'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512dq'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512f'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vl'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vnni'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='erms'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='hle'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pku'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='rtm'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='Cascadelake-Server-v2'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512bw'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512cd'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512dq'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512f'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vl'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vnni'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='erms'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='hle'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='ibrs-all'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pku'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='rtm'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='Cascadelake-Server-v3'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512bw'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512cd'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512dq'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512f'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vl'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vnni'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='erms'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='ibrs-all'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pku'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='Cascadelake-Server-v4'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512bw'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512cd'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512dq'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512f'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vl'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vnni'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='erms'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='ibrs-all'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pku'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='Cascadelake-Server-v5'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512bw'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512cd'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512dq'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512f'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vl'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vnni'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='erms'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='ibrs-all'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pku'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='xsaves'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='Cooperlake'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512-bf16'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512bw'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512cd'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512dq'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512f'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vl'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vnni'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='erms'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='hle'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='ibrs-all'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pku'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='rtm'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='taa-no'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='Cooperlake-v1'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512-bf16'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512bw'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512cd'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512dq'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512f'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vl'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vnni'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='erms'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='hle'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='ibrs-all'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pku'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='rtm'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='taa-no'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='Cooperlake-v2'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512-bf16'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512bw'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512cd'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512dq'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512f'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vl'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vnni'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='erms'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='hle'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='ibrs-all'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pku'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='rtm'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='taa-no'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='xsaves'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='Denverton'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='erms'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='mpx'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='Denverton-v1'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='erms'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='mpx'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='Denverton-v2'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='erms'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='Denverton-v3'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='erms'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='xsaves'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='Dhyana-v2'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='xsaves'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='EPYC-Genoa'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='amd-psfd'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='auto-ibrs'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512-bf16'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512-vpopcntdq'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512bitalg'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512bw'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512cd'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512dq'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512f'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512ifma'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vbmi'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vbmi2'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vl'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vnni'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='erms'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='fsrm'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='gfni'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='la57'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='no-nested-data-bp'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='null-sel-clr-base'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pku'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='stibp-always-on'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='vaes'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='vpclmulqdq'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='xsaves'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='EPYC-Genoa-v1'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='amd-psfd'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='auto-ibrs'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512-bf16'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512-vpopcntdq'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512bitalg'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512bw'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512cd'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512dq'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512f'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512ifma'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vbmi'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vbmi2'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vl'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vnni'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='erms'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='fsrm'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='gfni'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='la57'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='no-nested-data-bp'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='null-sel-clr-base'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pku'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='stibp-always-on'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='vaes'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='vpclmulqdq'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='xsaves'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='EPYC-Milan'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='erms'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='fsrm'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pku'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='xsaves'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='EPYC-Milan-v1'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='erms'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='fsrm'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pku'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='xsaves'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='EPYC-Milan-v2'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='amd-psfd'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='erms'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='fsrm'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='no-nested-data-bp'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='null-sel-clr-base'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pku'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='stibp-always-on'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='vaes'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='vpclmulqdq'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='xsaves'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='EPYC-Rome'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='xsaves'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='EPYC-Rome-v1'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='xsaves'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='EPYC-Rome-v2'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='xsaves'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='EPYC-Rome-v3'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='xsaves'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='EPYC-v3'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='xsaves'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='EPYC-v4'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='xsaves'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='GraniteRapids'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='amx-bf16'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='amx-fp16'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='amx-int8'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='amx-tile'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx-vnni'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512-bf16'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512-fp16'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512-vpopcntdq'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512bitalg'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512bw'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512cd'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512dq'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512f'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512ifma'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vbmi'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vbmi2'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vl'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vnni'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='bus-lock-detect'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='erms'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='fbsdp-no'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='fsrc'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='fsrm'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='fsrs'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='fzrm'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='gfni'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='hle'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='ibrs-all'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='la57'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='mcdt-no'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pbrsb-no'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pku'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='prefetchiti'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='psdp-no'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='rtm'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='sbdr-ssdp-no'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='serialize'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='taa-no'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='tsx-ldtrk'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='vaes'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='vpclmulqdq'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='xfd'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='xsaves'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='GraniteRapids-v1'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='amx-bf16'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='amx-fp16'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='amx-int8'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='amx-tile'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx-vnni'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512-bf16'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512-fp16'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512-vpopcntdq'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512bitalg'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512bw'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512cd'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512dq'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512f'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512ifma'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vbmi'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vbmi2'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vl'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vnni'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='bus-lock-detect'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='erms'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='fbsdp-no'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='fsrc'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='fsrm'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='fsrs'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='fzrm'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='gfni'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='hle'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='ibrs-all'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='la57'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='mcdt-no'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pbrsb-no'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pku'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='prefetchiti'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='psdp-no'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='rtm'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='sbdr-ssdp-no'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='serialize'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='taa-no'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='tsx-ldtrk'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='vaes'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='vpclmulqdq'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='xfd'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='xsaves'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='GraniteRapids-v2'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='amx-bf16'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='amx-fp16'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='amx-int8'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='amx-tile'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx-vnni'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx10'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx10-128'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx10-256'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx10-512'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512-bf16'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512-fp16'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512-vpopcntdq'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512bitalg'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512bw'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512cd'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512dq'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512f'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512ifma'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vbmi'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vbmi2'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vl'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vnni'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='bus-lock-detect'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='cldemote'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='erms'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='fbsdp-no'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='fsrc'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='fsrm'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='fsrs'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='fzrm'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='gfni'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='hle'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='ibrs-all'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='la57'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='mcdt-no'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='movdir64b'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='movdiri'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pbrsb-no'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pku'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='prefetchiti'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='psdp-no'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='rtm'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='sbdr-ssdp-no'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='serialize'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='ss'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='taa-no'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='tsx-ldtrk'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='vaes'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='vpclmulqdq'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='xfd'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='xsaves'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='Haswell'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='erms'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='hle'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='rtm'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='Haswell-IBRS'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='erms'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='hle'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='rtm'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='Haswell-noTSX'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='erms'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='Haswell-noTSX-IBRS'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='erms'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='Haswell-v1'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='erms'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='hle'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='rtm'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='Haswell-v2'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='erms'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='Haswell-v3'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='erms'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='hle'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='rtm'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='Haswell-v4'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='erms'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='Icelake-Server'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512-vpopcntdq'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512bitalg'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512bw'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512cd'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512dq'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512f'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vbmi'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vbmi2'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vl'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vnni'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='erms'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='gfni'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='hle'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='la57'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pku'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='rtm'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='vaes'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='vpclmulqdq'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='Icelake-Server-noTSX'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512-vpopcntdq'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512bitalg'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512bw'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512cd'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512dq'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512f'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vbmi'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vbmi2'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vl'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vnni'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='erms'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='gfni'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='la57'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pku'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='vaes'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='vpclmulqdq'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='Icelake-Server-v1'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512-vpopcntdq'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512bitalg'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512bw'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512cd'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512dq'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512f'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vbmi'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vbmi2'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vl'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vnni'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='erms'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='gfni'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='hle'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='la57'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pku'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='rtm'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='vaes'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='vpclmulqdq'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='Icelake-Server-v2'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512-vpopcntdq'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512bitalg'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512bw'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512cd'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512dq'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512f'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vbmi'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vbmi2'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vl'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vnni'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='erms'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='gfni'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='la57'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pku'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='vaes'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='vpclmulqdq'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='Icelake-Server-v3'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512-vpopcntdq'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512bitalg'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512bw'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512cd'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512dq'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512f'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vbmi'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vbmi2'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vl'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vnni'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='erms'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='gfni'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='ibrs-all'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='la57'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pku'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='taa-no'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='vaes'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='vpclmulqdq'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='Icelake-Server-v4'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512-vpopcntdq'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512bitalg'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512bw'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512cd'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512dq'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512f'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512ifma'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vbmi'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vbmi2'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vl'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vnni'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='erms'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='fsrm'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='gfni'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='ibrs-all'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='la57'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pku'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='taa-no'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='vaes'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='vpclmulqdq'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='Icelake-Server-v5'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512-vpopcntdq'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512bitalg'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512bw'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512cd'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512dq'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512f'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512ifma'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vbmi'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vbmi2'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vl'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vnni'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='erms'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='fsrm'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='gfni'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='ibrs-all'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='la57'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pku'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='taa-no'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='vaes'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='vpclmulqdq'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='xsaves'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='Icelake-Server-v6'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512-vpopcntdq'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512bitalg'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512bw'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512cd'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512dq'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512f'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512ifma'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vbmi'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vbmi2'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vl'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vnni'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='erms'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='fsrm'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='gfni'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='ibrs-all'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='la57'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pku'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='taa-no'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='vaes'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='vpclmulqdq'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='xsaves'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='Icelake-Server-v7'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512-vpopcntdq'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512bitalg'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512bw'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512cd'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512dq'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512f'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512ifma'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vbmi'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vbmi2'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vl'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vnni'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='erms'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='fsrm'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='gfni'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='hle'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='ibrs-all'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='la57'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pku'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='rtm'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='taa-no'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='vaes'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='vpclmulqdq'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='xsaves'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='IvyBridge'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='erms'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='IvyBridge-IBRS'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='erms'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='IvyBridge-v1'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='erms'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='IvyBridge-v2'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='erms'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='KnightsMill'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512-4fmaps'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512-4vnniw'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512-vpopcntdq'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512cd'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512er'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512f'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512pf'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='erms'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='ss'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='KnightsMill-v1'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512-4fmaps'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512-4vnniw'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512-vpopcntdq'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512cd'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512er'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512f'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512pf'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='erms'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='ss'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='Opteron_G4'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='fma4'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='xop'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='Opteron_G4-v1'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='fma4'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='xop'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='Opteron_G5'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='fma4'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='tbm'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='xop'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='Opteron_G5-v1'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='fma4'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='tbm'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='xop'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='SapphireRapids'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='amx-bf16'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='amx-int8'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='amx-tile'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx-vnni'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512-bf16'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512-fp16'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512-vpopcntdq'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512bitalg'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512bw'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512cd'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512dq'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512f'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512ifma'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vbmi'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vbmi2'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vl'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vnni'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='bus-lock-detect'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='erms'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='fsrc'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='fsrm'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='fsrs'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='fzrm'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='gfni'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='hle'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='ibrs-all'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='la57'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pku'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='rtm'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='serialize'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='taa-no'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='tsx-ldtrk'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='vaes'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='vpclmulqdq'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='xfd'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='xsaves'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='SapphireRapids-v1'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='amx-bf16'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='amx-int8'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='amx-tile'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx-vnni'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512-bf16'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512-fp16'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512-vpopcntdq'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512bitalg'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512bw'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512cd'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512dq'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512f'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512ifma'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vbmi'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vbmi2'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vl'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vnni'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='bus-lock-detect'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='erms'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='fsrc'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='fsrm'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='fsrs'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='fzrm'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='gfni'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='hle'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='ibrs-all'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='la57'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pku'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='rtm'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='serialize'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='taa-no'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='tsx-ldtrk'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='vaes'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='vpclmulqdq'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='xfd'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='xsaves'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='SapphireRapids-v2'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='amx-bf16'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='amx-int8'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='amx-tile'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx-vnni'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512-bf16'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512-fp16'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512-vpopcntdq'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512bitalg'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512bw'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512cd'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512dq'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512f'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512ifma'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vbmi'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vbmi2'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vl'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vnni'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='bus-lock-detect'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='erms'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='fbsdp-no'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='fsrc'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='fsrm'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='fsrs'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='fzrm'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='gfni'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='hle'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='ibrs-all'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='la57'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pku'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='psdp-no'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='rtm'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='sbdr-ssdp-no'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='serialize'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='taa-no'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='tsx-ldtrk'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='vaes'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='vpclmulqdq'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='xfd'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='xsaves'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='SapphireRapids-v3'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='amx-bf16'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='amx-int8'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='amx-tile'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx-vnni'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512-bf16'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512-fp16'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512-vpopcntdq'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512bitalg'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512bw'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512cd'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512dq'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512f'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512ifma'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vbmi'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vbmi2'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vl'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vnni'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='bus-lock-detect'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='cldemote'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='erms'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='fbsdp-no'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='fsrc'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='fsrm'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='fsrs'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='fzrm'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='gfni'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='hle'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='ibrs-all'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='la57'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='movdir64b'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='movdiri'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pku'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='psdp-no'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='rtm'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='sbdr-ssdp-no'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='serialize'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='ss'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='taa-no'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='tsx-ldtrk'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='vaes'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='vpclmulqdq'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='xfd'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='xsaves'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='SierraForest'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx-ifma'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx-ne-convert'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx-vnni'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx-vnni-int8'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='bus-lock-detect'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='cmpccxadd'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='erms'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='fbsdp-no'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='fsrm'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='fsrs'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='gfni'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='ibrs-all'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='mcdt-no'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pbrsb-no'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pku'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='psdp-no'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='sbdr-ssdp-no'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='serialize'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='vaes'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='vpclmulqdq'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='xsaves'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='SierraForest-v1'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx-ifma'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx-ne-convert'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx-vnni'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx-vnni-int8'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='bus-lock-detect'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='cmpccxadd'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='erms'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='fbsdp-no'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='fsrm'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='fsrs'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='gfni'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='ibrs-all'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='mcdt-no'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pbrsb-no'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pku'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='psdp-no'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='sbdr-ssdp-no'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='serialize'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='vaes'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='vpclmulqdq'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='xsaves'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='Skylake-Client'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='erms'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='hle'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='rtm'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='Skylake-Client-IBRS'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='erms'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='hle'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='rtm'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='erms'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='Skylake-Client-v1'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='erms'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='hle'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='rtm'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='Skylake-Client-v2'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='erms'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='hle'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='rtm'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='Skylake-Client-v3'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='erms'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='Skylake-Client-v4'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='erms'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='xsaves'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='Skylake-Server'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512bw'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512cd'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512dq'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512f'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vl'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='erms'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='hle'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pku'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='rtm'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='Skylake-Server-IBRS'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512bw'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512cd'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512dq'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512f'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vl'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='erms'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='hle'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pku'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='rtm'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512bw'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512cd'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512dq'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512f'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vl'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='erms'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pku'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='Skylake-Server-v1'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512bw'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512cd'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512dq'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512f'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vl'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='erms'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='hle'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pku'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='rtm'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='Skylake-Server-v2'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512bw'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512cd'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512dq'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512f'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vl'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='erms'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='hle'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pku'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='rtm'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='Skylake-Server-v3'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512bw'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512cd'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512dq'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512f'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vl'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='erms'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pku'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='Skylake-Server-v4'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512bw'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512cd'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512dq'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512f'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vl'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='erms'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pku'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='Skylake-Server-v5'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512bw'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512cd'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512dq'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512f'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vl'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='erms'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pku'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='xsaves'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='Snowridge'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='cldemote'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='core-capability'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='erms'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='gfni'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='movdir64b'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='movdiri'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='mpx'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='split-lock-detect'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='Snowridge-v1'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='cldemote'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='core-capability'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='erms'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='gfni'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='movdir64b'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='movdiri'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='mpx'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='split-lock-detect'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='Snowridge-v2'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='cldemote'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='core-capability'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='erms'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='gfni'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='movdir64b'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='movdiri'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='split-lock-detect'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='Snowridge-v3'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='cldemote'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='core-capability'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='erms'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='gfni'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='movdir64b'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='movdiri'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='split-lock-detect'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='xsaves'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='Snowridge-v4'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='cldemote'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='erms'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='gfni'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='movdir64b'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='movdiri'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='xsaves'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='athlon'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='3dnow'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='3dnowext'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='athlon-v1'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='3dnow'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='3dnowext'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='core2duo'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='ss'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='core2duo-v1'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='ss'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='coreduo'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='ss'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='coreduo-v1'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='ss'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='n270'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='ss'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='n270-v1'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='ss'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='phenom'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='3dnow'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='3dnowext'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='phenom-v1'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='3dnow'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='3dnowext'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:     </mode>
Nov 24 09:47:09 compute-1 nova_compute[230010]:   </cpu>
Nov 24 09:47:09 compute-1 nova_compute[230010]:   <memoryBacking supported='yes'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:     <enum name='sourceType'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <value>file</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <value>anonymous</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <value>memfd</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:     </enum>
Nov 24 09:47:09 compute-1 nova_compute[230010]:   </memoryBacking>
Nov 24 09:47:09 compute-1 nova_compute[230010]:   <devices>
Nov 24 09:47:09 compute-1 nova_compute[230010]:     <disk supported='yes'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <enum name='diskDevice'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>disk</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>cdrom</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>floppy</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>lun</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </enum>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <enum name='bus'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>ide</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>fdc</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>scsi</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>virtio</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>usb</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>sata</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </enum>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <enum name='model'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>virtio</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>virtio-transitional</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>virtio-non-transitional</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </enum>
Nov 24 09:47:09 compute-1 nova_compute[230010]:     </disk>
Nov 24 09:47:09 compute-1 nova_compute[230010]:     <graphics supported='yes'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <enum name='type'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>vnc</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>egl-headless</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>dbus</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </enum>
Nov 24 09:47:09 compute-1 nova_compute[230010]:     </graphics>
Nov 24 09:47:09 compute-1 nova_compute[230010]:     <video supported='yes'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <enum name='modelType'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>vga</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>cirrus</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>virtio</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>none</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>bochs</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>ramfb</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </enum>
Nov 24 09:47:09 compute-1 nova_compute[230010]:     </video>
Nov 24 09:47:09 compute-1 nova_compute[230010]:     <hostdev supported='yes'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <enum name='mode'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>subsystem</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </enum>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <enum name='startupPolicy'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>default</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>mandatory</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>requisite</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>optional</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </enum>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <enum name='subsysType'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>usb</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>pci</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>scsi</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </enum>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <enum name='capsType'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <enum name='pciBackend'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:     </hostdev>
Nov 24 09:47:09 compute-1 nova_compute[230010]:     <rng supported='yes'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <enum name='model'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>virtio</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>virtio-transitional</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>virtio-non-transitional</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </enum>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <enum name='backendModel'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>random</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>egd</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>builtin</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </enum>
Nov 24 09:47:09 compute-1 nova_compute[230010]:     </rng>
Nov 24 09:47:09 compute-1 nova_compute[230010]:     <filesystem supported='yes'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <enum name='driverType'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>path</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>handle</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>virtiofs</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </enum>
Nov 24 09:47:09 compute-1 nova_compute[230010]:     </filesystem>
Nov 24 09:47:09 compute-1 nova_compute[230010]:     <tpm supported='yes'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <enum name='model'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>tpm-tis</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>tpm-crb</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </enum>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <enum name='backendModel'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>emulator</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>external</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </enum>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <enum name='backendVersion'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>2.0</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </enum>
Nov 24 09:47:09 compute-1 nova_compute[230010]:     </tpm>
Nov 24 09:47:09 compute-1 nova_compute[230010]:     <redirdev supported='yes'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <enum name='bus'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>usb</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </enum>
Nov 24 09:47:09 compute-1 nova_compute[230010]:     </redirdev>
Nov 24 09:47:09 compute-1 nova_compute[230010]:     <channel supported='yes'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <enum name='type'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>pty</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>unix</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </enum>
Nov 24 09:47:09 compute-1 nova_compute[230010]:     </channel>
Nov 24 09:47:09 compute-1 nova_compute[230010]:     <crypto supported='yes'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <enum name='model'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <enum name='type'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>qemu</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </enum>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <enum name='backendModel'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>builtin</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </enum>
Nov 24 09:47:09 compute-1 nova_compute[230010]:     </crypto>
Nov 24 09:47:09 compute-1 nova_compute[230010]:     <interface supported='yes'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <enum name='backendType'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>default</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>passt</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </enum>
Nov 24 09:47:09 compute-1 nova_compute[230010]:     </interface>
Nov 24 09:47:09 compute-1 nova_compute[230010]:     <panic supported='yes'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <enum name='model'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>isa</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>hyperv</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </enum>
Nov 24 09:47:09 compute-1 nova_compute[230010]:     </panic>
Nov 24 09:47:09 compute-1 nova_compute[230010]:     <console supported='yes'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <enum name='type'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>null</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>vc</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>pty</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>dev</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>file</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>pipe</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>stdio</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>udp</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>tcp</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>unix</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>qemu-vdagent</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>dbus</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </enum>
Nov 24 09:47:09 compute-1 nova_compute[230010]:     </console>
Nov 24 09:47:09 compute-1 nova_compute[230010]:   </devices>
Nov 24 09:47:09 compute-1 nova_compute[230010]:   <features>
Nov 24 09:47:09 compute-1 nova_compute[230010]:     <gic supported='no'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:     <vmcoreinfo supported='yes'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:     <genid supported='yes'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:     <backingStoreInput supported='yes'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:     <backup supported='yes'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:     <async-teardown supported='yes'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:     <ps2 supported='yes'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:     <sev supported='no'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:     <sgx supported='no'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:     <hyperv supported='yes'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <enum name='features'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>relaxed</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>vapic</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>spinlocks</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>vpindex</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>runtime</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>synic</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>stimer</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>reset</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>vendor_id</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>frequencies</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>reenlightenment</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>tlbflush</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>ipi</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>avic</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>emsr_bitmap</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>xmm_input</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </enum>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <defaults>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <spinlocks>4095</spinlocks>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <stimer_direct>on</stimer_direct>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <tlbflush_direct>on</tlbflush_direct>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <tlbflush_extended>on</tlbflush_extended>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <vendor_id>Linux KVM Hv</vendor_id>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </defaults>
Nov 24 09:47:09 compute-1 nova_compute[230010]:     </hyperv>
Nov 24 09:47:09 compute-1 nova_compute[230010]:     <launchSecurity supported='yes'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <enum name='sectype'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>tdx</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </enum>
Nov 24 09:47:09 compute-1 nova_compute[230010]:     </launchSecurity>
Nov 24 09:47:09 compute-1 nova_compute[230010]:   </features>
Nov 24 09:47:09 compute-1 nova_compute[230010]: </domainCapabilities>
Nov 24 09:47:09 compute-1 nova_compute[230010]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
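[annotation] The record above ends with a pointer to Nova's _get_domain_capabilities (host.py:1037), which obtains this XML from libvirt's virConnectGetDomainCapabilities() API. A minimal sketch of the same query with the libvirt Python bindings — the emulator path, arch, machine type, and virt type mirror the <path>, <arch>, <machine>, and <domain> elements in the dumps (using the q35 values logged next); the connection URI is an assumption:

    import libvirt

    conn = libvirt.open('qemu:///system')  # assumed URI for the local system libvirtd
    # Arguments correspond to <path>, <arch>, <machine>, <domain> in the dump.
    xml = conn.getDomainCapabilities('/usr/libexec/qemu-kvm', 'x86_64', 'q35', 'kvm', 0)
    print(xml)
    conn.close()

The same document can be fetched from the CLI with: virsh domcapabilities --emulatorbin /usr/libexec/qemu-kvm --arch x86_64 --machine q35 --virttype kvm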
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.529 230014 DEBUG nova.virt.libvirt.host [None req-7781f489-90a3-45f2-98f2-bf7ccb5c1239 - - - - - -] Getting domain capabilities for x86_64 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
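[annotation] Most of each dump is the per-model usability matrix under <mode name='custom'>: a <model usable='yes'> entry can be requested as a custom CPU model on this host, while each usable='no' model carries a <blockers> list naming the features the host lacks. A standard-library sketch that reduces one captured document to that summary (xml_text is assumed to hold a single <domainCapabilities> document, e.g. the q35 dump below):

    import xml.etree.ElementTree as ET

    def summarize_cpu_models(xml_text):
        """Map a domainCapabilities document to (usable models, blocked models)."""
        root = ET.fromstring(xml_text)
        usable, blocked = [], {}
        for mode in root.iter('mode'):
            if mode.get('name') != 'custom':
                continue
            usable = [m.text for m in mode.findall('model')
                      if m.get('usable') == 'yes']
            blocked = {b.get('model'): [f.get('name') for f in b.findall('feature')]
                       for b in mode.findall('blockers')}
        return usable, blocked

For the q35 dump that follows, this would report e.g. 'EPYC-Rome-v4' as usable and blocked['EPYC-Rome'] == ['xsaves'].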
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.533 230014 DEBUG nova.virt.libvirt.host [None req-7781f489-90a3-45f2-98f2-bf7ccb5c1239 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=q35:
Nov 24 09:47:09 compute-1 nova_compute[230010]: <domainCapabilities>
Nov 24 09:47:09 compute-1 nova_compute[230010]:   <path>/usr/libexec/qemu-kvm</path>
Nov 24 09:47:09 compute-1 nova_compute[230010]:   <domain>kvm</domain>
Nov 24 09:47:09 compute-1 nova_compute[230010]:   <machine>pc-q35-rhel9.8.0</machine>
Nov 24 09:47:09 compute-1 nova_compute[230010]:   <arch>x86_64</arch>
Nov 24 09:47:09 compute-1 nova_compute[230010]:   <vcpu max='4096'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:   <iothreads supported='yes'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:   <os supported='yes'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:     <enum name='firmware'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <value>efi</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:     </enum>
Nov 24 09:47:09 compute-1 nova_compute[230010]:     <loader supported='yes'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <value>/usr/share/edk2/ovmf/OVMF_CODE.secboot.fd</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <value>/usr/share/edk2/ovmf/OVMF_CODE.fd</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <value>/usr/share/edk2/ovmf/OVMF.amdsev.fd</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <value>/usr/share/edk2/ovmf/OVMF.inteltdx.secboot.fd</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <enum name='type'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>rom</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>pflash</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </enum>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <enum name='readonly'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>yes</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>no</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </enum>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <enum name='secure'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>yes</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>no</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </enum>
Nov 24 09:47:09 compute-1 nova_compute[230010]:     </loader>
Nov 24 09:47:09 compute-1 nova_compute[230010]:   </os>
Nov 24 09:47:09 compute-1 nova_compute[230010]:   <cpu>
Nov 24 09:47:09 compute-1 nova_compute[230010]:     <mode name='host-passthrough' supported='yes'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <enum name='hostPassthroughMigratable'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>on</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>off</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </enum>
Nov 24 09:47:09 compute-1 nova_compute[230010]:     </mode>
Nov 24 09:47:09 compute-1 nova_compute[230010]:     <mode name='maximum' supported='yes'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <enum name='maximumMigratable'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>on</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>off</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </enum>
Nov 24 09:47:09 compute-1 nova_compute[230010]:     </mode>
Nov 24 09:47:09 compute-1 nova_compute[230010]:     <mode name='host-model' supported='yes'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model fallback='forbid'>EPYC-Rome</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <vendor>AMD</vendor>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <maxphysaddr mode='passthrough' limit='40'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <feature policy='require' name='x2apic'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <feature policy='require' name='tsc-deadline'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <feature policy='require' name='hypervisor'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <feature policy='require' name='tsc_adjust'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <feature policy='require' name='spec-ctrl'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <feature policy='require' name='stibp'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <feature policy='require' name='ssbd'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <feature policy='require' name='cmp_legacy'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <feature policy='require' name='overflow-recov'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <feature policy='require' name='succor'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <feature policy='require' name='ibrs'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <feature policy='require' name='amd-ssbd'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <feature policy='require' name='virt-ssbd'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <feature policy='require' name='lbrv'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <feature policy='require' name='tsc-scale'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <feature policy='require' name='vmcb-clean'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <feature policy='require' name='flushbyasid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <feature policy='require' name='pause-filter'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <feature policy='require' name='pfthreshold'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <feature policy='require' name='svme-addr-chk'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <feature policy='require' name='lfence-always-serializing'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <feature policy='disable' name='xsaves'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:     </mode>
Nov 24 09:47:09 compute-1 nova_compute[230010]:     <mode name='custom' supported='yes'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='Broadwell'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='erms'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='hle'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='rtm'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='Broadwell-IBRS'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='erms'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='hle'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='rtm'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='Broadwell-noTSX'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='erms'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='Broadwell-noTSX-IBRS'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='erms'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='Broadwell-v1'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='erms'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='hle'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='rtm'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='Broadwell-v2'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='erms'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='Broadwell-v3'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='erms'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='hle'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='rtm'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='Broadwell-v4'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='erms'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='Cascadelake-Server'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512bw'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512cd'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512dq'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512f'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vl'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vnni'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='erms'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='hle'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pku'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='rtm'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='Cascadelake-Server-noTSX'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512bw'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512cd'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512dq'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512f'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vl'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vnni'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='erms'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='ibrs-all'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pku'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='Cascadelake-Server-v1'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512bw'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512cd'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512dq'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512f'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vl'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vnni'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='erms'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='hle'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pku'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='rtm'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 sudo[230381]: pam_unix(sudo:session): session closed for user root
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='Cascadelake-Server-v2'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512bw'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512cd'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512dq'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512f'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vl'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vnni'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='erms'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='hle'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='ibrs-all'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pku'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='rtm'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='Cascadelake-Server-v3'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512bw'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512cd'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512dq'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512f'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vl'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vnni'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='erms'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='ibrs-all'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pku'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='Cascadelake-Server-v4'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512bw'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512cd'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512dq'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512f'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vl'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vnni'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='erms'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='ibrs-all'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pku'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='Cascadelake-Server-v5'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512bw'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512cd'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512dq'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512f'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vl'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vnni'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='erms'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='ibrs-all'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pku'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='xsaves'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='Cooperlake'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512-bf16'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512bw'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512cd'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512dq'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512f'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vl'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vnni'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='erms'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='hle'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='ibrs-all'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pku'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='rtm'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='taa-no'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='Cooperlake-v1'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512-bf16'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512bw'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512cd'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512dq'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512f'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vl'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vnni'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='erms'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='hle'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='ibrs-all'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pku'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='rtm'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='taa-no'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='Cooperlake-v2'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512-bf16'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512bw'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512cd'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512dq'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512f'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vl'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vnni'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='erms'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='hle'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='ibrs-all'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pku'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='rtm'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='taa-no'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='xsaves'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='Denverton'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='erms'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='mpx'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='Denverton-v1'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='erms'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='mpx'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='Denverton-v2'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='erms'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='Denverton-v3'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='erms'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='xsaves'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='Dhyana-v2'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='xsaves'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='EPYC-Genoa'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='amd-psfd'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='auto-ibrs'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512-bf16'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512-vpopcntdq'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512bitalg'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512bw'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512cd'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512dq'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512f'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512ifma'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vbmi'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vbmi2'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vl'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vnni'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='erms'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='fsrm'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='gfni'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='la57'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='no-nested-data-bp'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='null-sel-clr-base'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pku'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='stibp-always-on'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='vaes'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='vpclmulqdq'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='xsaves'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='EPYC-Genoa-v1'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='amd-psfd'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='auto-ibrs'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512-bf16'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512-vpopcntdq'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512bitalg'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512bw'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512cd'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512dq'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512f'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512ifma'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vbmi'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vbmi2'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vl'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vnni'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='erms'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='fsrm'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='gfni'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='la57'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='no-nested-data-bp'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='null-sel-clr-base'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pku'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='stibp-always-on'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='vaes'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='vpclmulqdq'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='xsaves'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='EPYC-Milan'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='erms'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='fsrm'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pku'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='xsaves'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='EPYC-Milan-v1'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='erms'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='fsrm'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pku'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='xsaves'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='EPYC-Milan-v2'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='amd-psfd'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='erms'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='fsrm'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='no-nested-data-bp'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='null-sel-clr-base'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pku'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='stibp-always-on'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='vaes'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='vpclmulqdq'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='xsaves'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='EPYC-Rome'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='xsaves'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='EPYC-Rome-v1'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='xsaves'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='EPYC-Rome-v2'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='xsaves'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='EPYC-Rome-v3'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='xsaves'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='EPYC-v3'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='xsaves'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='EPYC-v4'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='xsaves'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='GraniteRapids'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='amx-bf16'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='amx-fp16'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='amx-int8'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='amx-tile'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx-vnni'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512-bf16'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512-fp16'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512-vpopcntdq'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512bitalg'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512bw'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512cd'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512dq'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512f'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512ifma'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vbmi'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vbmi2'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vl'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vnni'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='bus-lock-detect'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='erms'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='fbsdp-no'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='fsrc'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='fsrm'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='fsrs'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='fzrm'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='gfni'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='hle'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='ibrs-all'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='la57'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='mcdt-no'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pbrsb-no'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pku'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='prefetchiti'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='psdp-no'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='rtm'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='sbdr-ssdp-no'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='serialize'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='taa-no'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='tsx-ldtrk'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='vaes'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='vpclmulqdq'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='xfd'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='xsaves'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='GraniteRapids-v1'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='amx-bf16'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='amx-fp16'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='amx-int8'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='amx-tile'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx-vnni'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512-bf16'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512-fp16'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512-vpopcntdq'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512bitalg'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512bw'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512cd'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512dq'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512f'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512ifma'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vbmi'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vbmi2'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vl'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vnni'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='bus-lock-detect'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='erms'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='fbsdp-no'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='fsrc'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='fsrm'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='fsrs'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='fzrm'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='gfni'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='hle'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='ibrs-all'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='la57'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='mcdt-no'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pbrsb-no'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pku'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='prefetchiti'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='psdp-no'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='rtm'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='sbdr-ssdp-no'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='serialize'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='taa-no'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='tsx-ldtrk'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='vaes'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='vpclmulqdq'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='xfd'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='xsaves'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='GraniteRapids-v2'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='amx-bf16'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='amx-fp16'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='amx-int8'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='amx-tile'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx-vnni'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx10'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx10-128'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx10-256'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx10-512'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512-bf16'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512-fp16'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512-vpopcntdq'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512bitalg'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512bw'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512cd'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512dq'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512f'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512ifma'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vbmi'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vbmi2'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vl'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vnni'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='bus-lock-detect'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='cldemote'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='erms'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='fbsdp-no'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='fsrc'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='fsrm'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='fsrs'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='fzrm'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='gfni'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='hle'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='ibrs-all'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='la57'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='mcdt-no'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='movdir64b'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='movdiri'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pbrsb-no'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pku'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='prefetchiti'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='psdp-no'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='rtm'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='sbdr-ssdp-no'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='serialize'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='ss'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='taa-no'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='tsx-ldtrk'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='vaes'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='vpclmulqdq'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='xfd'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='xsaves'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='Haswell'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='erms'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='hle'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='rtm'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='Haswell-IBRS'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='erms'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='hle'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='rtm'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='Haswell-noTSX'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='erms'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='Haswell-noTSX-IBRS'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='erms'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='Haswell-v1'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='erms'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='hle'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='rtm'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='Haswell-v2'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='erms'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='Haswell-v3'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='erms'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='hle'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='rtm'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='Haswell-v4'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='erms'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='Icelake-Server'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512-vpopcntdq'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512bitalg'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512bw'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512cd'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512dq'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512f'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vbmi'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vbmi2'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vl'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vnni'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='erms'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='gfni'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='hle'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='la57'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pku'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='rtm'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='vaes'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='vpclmulqdq'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='Icelake-Server-noTSX'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512-vpopcntdq'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512bitalg'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512bw'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512cd'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512dq'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512f'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vbmi'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vbmi2'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vl'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vnni'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='erms'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='gfni'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='la57'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pku'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='vaes'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='vpclmulqdq'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='Icelake-Server-v1'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512-vpopcntdq'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512bitalg'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512bw'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512cd'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512dq'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512f'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vbmi'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vbmi2'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vl'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vnni'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='erms'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='gfni'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='hle'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='la57'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pku'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='rtm'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='vaes'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='vpclmulqdq'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='Icelake-Server-v2'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512-vpopcntdq'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512bitalg'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512bw'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512cd'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512dq'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512f'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vbmi'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vbmi2'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vl'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vnni'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='erms'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='gfni'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='la57'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pku'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='vaes'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='vpclmulqdq'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='Icelake-Server-v3'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512-vpopcntdq'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512bitalg'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512bw'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512cd'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512dq'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512f'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vbmi'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vbmi2'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vl'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vnni'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='erms'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='gfni'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='ibrs-all'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='la57'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pku'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='taa-no'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='vaes'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='vpclmulqdq'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='Icelake-Server-v4'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512-vpopcntdq'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512bitalg'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512bw'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512cd'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512dq'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512f'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512ifma'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vbmi'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vbmi2'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vl'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vnni'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='erms'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='fsrm'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='gfni'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='ibrs-all'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='la57'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pku'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='taa-no'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='vaes'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='vpclmulqdq'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='Icelake-Server-v5'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512-vpopcntdq'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512bitalg'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512bw'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512cd'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512dq'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512f'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512ifma'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vbmi'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vbmi2'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vl'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vnni'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='erms'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='fsrm'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='gfni'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='ibrs-all'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='la57'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pku'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='taa-no'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='vaes'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='vpclmulqdq'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='xsaves'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='Icelake-Server-v6'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512-vpopcntdq'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512bitalg'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512bw'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512cd'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512dq'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512f'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512ifma'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vbmi'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vbmi2'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vl'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vnni'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='erms'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='fsrm'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='gfni'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='ibrs-all'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='la57'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pku'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='taa-no'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='vaes'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='vpclmulqdq'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='xsaves'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='Icelake-Server-v7'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512-vpopcntdq'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512bitalg'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512bw'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512cd'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512dq'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512f'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512ifma'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vbmi'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vbmi2'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vl'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vnni'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='erms'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='fsrm'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='gfni'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='hle'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='ibrs-all'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='la57'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pku'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='rtm'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='taa-no'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='vaes'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='vpclmulqdq'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='xsaves'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='IvyBridge'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='erms'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='IvyBridge-IBRS'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='erms'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='IvyBridge-v1'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='erms'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='IvyBridge-v2'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='erms'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='KnightsMill'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512-4fmaps'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512-4vnniw'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512-vpopcntdq'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512cd'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512er'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512f'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512pf'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='erms'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='ss'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='KnightsMill-v1'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512-4fmaps'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512-4vnniw'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512-vpopcntdq'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512cd'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512er'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512f'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512pf'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='erms'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='ss'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='Opteron_G4'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='fma4'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='xop'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='Opteron_G4-v1'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='fma4'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='xop'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='Opteron_G5'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='fma4'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='tbm'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='xop'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='Opteron_G5-v1'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='fma4'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='tbm'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='xop'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='SapphireRapids'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='amx-bf16'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='amx-int8'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='amx-tile'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx-vnni'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512-bf16'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512-fp16'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512-vpopcntdq'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512bitalg'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512bw'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512cd'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512dq'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512f'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512ifma'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vbmi'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vbmi2'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vl'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vnni'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='bus-lock-detect'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='erms'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='fsrc'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='fsrm'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='fsrs'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='fzrm'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='gfni'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='hle'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='ibrs-all'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='la57'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pku'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='rtm'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='serialize'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='taa-no'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='tsx-ldtrk'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='vaes'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='vpclmulqdq'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='xfd'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='xsaves'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='SapphireRapids-v1'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='amx-bf16'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='amx-int8'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='amx-tile'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx-vnni'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512-bf16'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512-fp16'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512-vpopcntdq'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512bitalg'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512bw'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512cd'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512dq'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512f'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512ifma'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vbmi'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vbmi2'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vl'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vnni'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='bus-lock-detect'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='erms'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='fsrc'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='fsrm'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='fsrs'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='fzrm'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='gfni'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='hle'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='ibrs-all'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='la57'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pku'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='rtm'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='serialize'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='taa-no'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='tsx-ldtrk'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='vaes'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='vpclmulqdq'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='xfd'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='xsaves'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='SapphireRapids-v2'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='amx-bf16'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='amx-int8'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='amx-tile'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx-vnni'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512-bf16'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512-fp16'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512-vpopcntdq'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512bitalg'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512bw'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512cd'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512dq'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512f'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512ifma'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vbmi'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vbmi2'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vl'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vnni'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='bus-lock-detect'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='erms'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='fbsdp-no'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='fsrc'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='fsrm'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='fsrs'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='fzrm'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='gfni'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='hle'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='ibrs-all'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='la57'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pku'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='psdp-no'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='rtm'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='sbdr-ssdp-no'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='serialize'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='taa-no'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='tsx-ldtrk'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='vaes'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='vpclmulqdq'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='xfd'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='xsaves'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='SapphireRapids-v3'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='amx-bf16'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='amx-int8'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='amx-tile'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx-vnni'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512-bf16'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512-fp16'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512-vpopcntdq'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512bitalg'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512bw'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512cd'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512dq'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512f'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512ifma'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vbmi'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vbmi2'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vl'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vnni'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='bus-lock-detect'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='cldemote'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='erms'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='fbsdp-no'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='fsrc'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='fsrm'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='fsrs'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='fzrm'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='gfni'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='hle'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='ibrs-all'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='la57'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='movdir64b'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='movdiri'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pku'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='psdp-no'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='rtm'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='sbdr-ssdp-no'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='serialize'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='ss'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='taa-no'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='tsx-ldtrk'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='vaes'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='vpclmulqdq'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='xfd'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='xsaves'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='SierraForest'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx-ifma'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx-ne-convert'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx-vnni'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx-vnni-int8'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='bus-lock-detect'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='cmpccxadd'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='erms'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='fbsdp-no'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='fsrm'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='fsrs'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='gfni'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='ibrs-all'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='mcdt-no'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pbrsb-no'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pku'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='psdp-no'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='sbdr-ssdp-no'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='serialize'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='vaes'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='vpclmulqdq'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='xsaves'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='SierraForest-v1'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx-ifma'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx-ne-convert'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx-vnni'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx-vnni-int8'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='bus-lock-detect'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='cmpccxadd'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='erms'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='fbsdp-no'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='fsrm'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='fsrs'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='gfni'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='ibrs-all'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='mcdt-no'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pbrsb-no'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pku'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='psdp-no'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='sbdr-ssdp-no'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='serialize'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='vaes'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='vpclmulqdq'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='xsaves'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='Skylake-Client'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='erms'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='hle'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='rtm'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='Skylake-Client-IBRS'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='erms'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='hle'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='rtm'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='erms'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='Skylake-Client-v1'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='erms'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='hle'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='rtm'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='Skylake-Client-v2'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='erms'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='hle'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='rtm'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='Skylake-Client-v3'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='erms'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='Skylake-Client-v4'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='erms'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='xsaves'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='Skylake-Server'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512bw'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512cd'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512dq'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512f'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vl'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='erms'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='hle'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pku'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='rtm'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='Skylake-Server-IBRS'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512bw'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512cd'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512dq'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512f'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vl'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='erms'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='hle'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pku'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='rtm'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512bw'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512cd'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512dq'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512f'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vl'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='erms'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pku'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='Skylake-Server-v1'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512bw'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512cd'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512dq'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512f'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vl'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='erms'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='hle'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pku'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='rtm'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='Skylake-Server-v2'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512bw'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512cd'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512dq'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512f'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vl'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='erms'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='hle'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pku'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='rtm'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='Skylake-Server-v3'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512bw'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512cd'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512dq'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512f'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vl'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='erms'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pku'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='Skylake-Server-v4'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512bw'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512cd'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512dq'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512f'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vl'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='erms'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pku'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='Skylake-Server-v5'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512bw'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512cd'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512dq'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512f'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vl'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='erms'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pku'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='xsaves'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='Snowridge'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='cldemote'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='core-capability'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='erms'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='gfni'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='movdir64b'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='movdiri'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='mpx'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='split-lock-detect'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='Snowridge-v1'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='cldemote'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='core-capability'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='erms'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='gfni'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='movdir64b'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='movdiri'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='mpx'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='split-lock-detect'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='Snowridge-v2'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='cldemote'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='core-capability'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='erms'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='gfni'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='movdir64b'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='movdiri'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='split-lock-detect'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='Snowridge-v3'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='cldemote'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='core-capability'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='erms'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='gfni'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='movdir64b'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='movdiri'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='split-lock-detect'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='xsaves'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='Snowridge-v4'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='cldemote'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='erms'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='gfni'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='movdir64b'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='movdiri'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='xsaves'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='athlon'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='3dnow'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='3dnowext'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='athlon-v1'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='3dnow'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='3dnowext'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='core2duo'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='ss'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='core2duo-v1'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='ss'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='coreduo'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='ss'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='coreduo-v1'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='ss'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='n270'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='ss'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='n270-v1'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='ss'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='phenom'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='3dnow'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='3dnowext'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='phenom-v1'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='3dnow'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='3dnowext'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:     </mode>
Nov 24 09:47:09 compute-1 nova_compute[230010]:   </cpu>
Nov 24 09:47:09 compute-1 nova_compute[230010]:   <memoryBacking supported='yes'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:     <enum name='sourceType'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <value>file</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <value>anonymous</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <value>memfd</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:     </enum>
Nov 24 09:47:09 compute-1 nova_compute[230010]:   </memoryBacking>
Nov 24 09:47:09 compute-1 nova_compute[230010]:   <devices>
Nov 24 09:47:09 compute-1 nova_compute[230010]:     <disk supported='yes'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <enum name='diskDevice'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>disk</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>cdrom</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>floppy</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>lun</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </enum>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <enum name='bus'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>fdc</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>scsi</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>virtio</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>usb</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>sata</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </enum>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <enum name='model'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>virtio</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>virtio-transitional</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>virtio-non-transitional</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </enum>
Nov 24 09:47:09 compute-1 nova_compute[230010]:     </disk>
Nov 24 09:47:09 compute-1 nova_compute[230010]:     <graphics supported='yes'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <enum name='type'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>vnc</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>egl-headless</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>dbus</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </enum>
Nov 24 09:47:09 compute-1 nova_compute[230010]:     </graphics>
Nov 24 09:47:09 compute-1 nova_compute[230010]:     <video supported='yes'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <enum name='modelType'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>vga</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>cirrus</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>virtio</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>none</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>bochs</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>ramfb</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </enum>
Nov 24 09:47:09 compute-1 nova_compute[230010]:     </video>
Nov 24 09:47:09 compute-1 nova_compute[230010]:     <hostdev supported='yes'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <enum name='mode'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>subsystem</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </enum>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <enum name='startupPolicy'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>default</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>mandatory</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>requisite</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>optional</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </enum>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <enum name='subsysType'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>usb</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>pci</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>scsi</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </enum>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <enum name='capsType'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <enum name='pciBackend'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:     </hostdev>
Nov 24 09:47:09 compute-1 nova_compute[230010]:     <rng supported='yes'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <enum name='model'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>virtio</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>virtio-transitional</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>virtio-non-transitional</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </enum>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <enum name='backendModel'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>random</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>egd</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>builtin</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </enum>
Nov 24 09:47:09 compute-1 nova_compute[230010]:     </rng>
Nov 24 09:47:09 compute-1 nova_compute[230010]:     <filesystem supported='yes'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <enum name='driverType'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>path</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>handle</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>virtiofs</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </enum>
Nov 24 09:47:09 compute-1 nova_compute[230010]:     </filesystem>
Nov 24 09:47:09 compute-1 nova_compute[230010]:     <tpm supported='yes'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <enum name='model'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>tpm-tis</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>tpm-crb</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </enum>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <enum name='backendModel'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>emulator</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>external</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </enum>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <enum name='backendVersion'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>2.0</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </enum>
Nov 24 09:47:09 compute-1 nova_compute[230010]:     </tpm>
Nov 24 09:47:09 compute-1 nova_compute[230010]:     <redirdev supported='yes'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <enum name='bus'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>usb</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </enum>
Nov 24 09:47:09 compute-1 nova_compute[230010]:     </redirdev>
Nov 24 09:47:09 compute-1 nova_compute[230010]:     <channel supported='yes'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <enum name='type'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>pty</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>unix</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </enum>
Nov 24 09:47:09 compute-1 nova_compute[230010]:     </channel>
Nov 24 09:47:09 compute-1 nova_compute[230010]:     <crypto supported='yes'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <enum name='model'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <enum name='type'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>qemu</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </enum>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <enum name='backendModel'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>builtin</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </enum>
Nov 24 09:47:09 compute-1 nova_compute[230010]:     </crypto>
Nov 24 09:47:09 compute-1 nova_compute[230010]:     <interface supported='yes'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <enum name='backendType'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>default</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>passt</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </enum>
Nov 24 09:47:09 compute-1 nova_compute[230010]:     </interface>
Nov 24 09:47:09 compute-1 nova_compute[230010]:     <panic supported='yes'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <enum name='model'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>isa</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>hyperv</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </enum>
Nov 24 09:47:09 compute-1 nova_compute[230010]:     </panic>
Nov 24 09:47:09 compute-1 nova_compute[230010]:     <console supported='yes'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <enum name='type'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>null</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>vc</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>pty</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>dev</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>file</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>pipe</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>stdio</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>udp</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>tcp</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>unix</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>qemu-vdagent</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>dbus</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </enum>
Nov 24 09:47:09 compute-1 nova_compute[230010]:     </console>
Nov 24 09:47:09 compute-1 nova_compute[230010]:   </devices>
Nov 24 09:47:09 compute-1 nova_compute[230010]:   <features>
Nov 24 09:47:09 compute-1 nova_compute[230010]:     <gic supported='no'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:     <vmcoreinfo supported='yes'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:     <genid supported='yes'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:     <backingStoreInput supported='yes'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:     <backup supported='yes'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:     <async-teardown supported='yes'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:     <ps2 supported='yes'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:     <sev supported='no'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:     <sgx supported='no'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:     <hyperv supported='yes'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <enum name='features'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>relaxed</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>vapic</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>spinlocks</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>vpindex</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>runtime</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>synic</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>stimer</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>reset</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>vendor_id</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>frequencies</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>reenlightenment</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>tlbflush</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>ipi</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>avic</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>emsr_bitmap</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>xmm_input</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </enum>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <defaults>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <spinlocks>4095</spinlocks>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <stimer_direct>on</stimer_direct>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <tlbflush_direct>on</tlbflush_direct>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <tlbflush_extended>on</tlbflush_extended>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <vendor_id>Linux KVM Hv</vendor_id>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </defaults>
Nov 24 09:47:09 compute-1 nova_compute[230010]:     </hyperv>
Nov 24 09:47:09 compute-1 nova_compute[230010]:     <launchSecurity supported='yes'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <enum name='sectype'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>tdx</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </enum>
Nov 24 09:47:09 compute-1 nova_compute[230010]:     </launchSecurity>
Nov 24 09:47:09 compute-1 nova_compute[230010]:   </features>
Nov 24 09:47:09 compute-1 nova_compute[230010]: </domainCapabilities>
Nov 24 09:47:09 compute-1 nova_compute[230010]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.588 230014 DEBUG nova.virt.libvirt.host [None req-7781f489-90a3-45f2-98f2-bf7ccb5c1239 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=pc:
Nov 24 09:47:09 compute-1 nova_compute[230010]: <domainCapabilities>
Nov 24 09:47:09 compute-1 nova_compute[230010]:   <path>/usr/libexec/qemu-kvm</path>
Nov 24 09:47:09 compute-1 nova_compute[230010]:   <domain>kvm</domain>
Nov 24 09:47:09 compute-1 nova_compute[230010]:   <machine>pc-i440fx-rhel7.6.0</machine>
Nov 24 09:47:09 compute-1 nova_compute[230010]:   <arch>x86_64</arch>
Nov 24 09:47:09 compute-1 nova_compute[230010]:   <vcpu max='240'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:   <iothreads supported='yes'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:   <os supported='yes'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:     <enum name='firmware'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:     <loader supported='yes'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <enum name='type'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>rom</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>pflash</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </enum>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <enum name='readonly'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>yes</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>no</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </enum>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <enum name='secure'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>no</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </enum>
Nov 24 09:47:09 compute-1 nova_compute[230010]:     </loader>
Nov 24 09:47:09 compute-1 nova_compute[230010]:   </os>
Nov 24 09:47:09 compute-1 nova_compute[230010]:   <cpu>
Nov 24 09:47:09 compute-1 nova_compute[230010]:     <mode name='host-passthrough' supported='yes'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <enum name='hostPassthroughMigratable'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>on</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>off</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </enum>
Nov 24 09:47:09 compute-1 nova_compute[230010]:     </mode>
Nov 24 09:47:09 compute-1 nova_compute[230010]:     <mode name='maximum' supported='yes'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <enum name='maximumMigratable'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>on</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>off</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </enum>
Nov 24 09:47:09 compute-1 nova_compute[230010]:     </mode>
Nov 24 09:47:09 compute-1 nova_compute[230010]:     <mode name='host-model' supported='yes'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model fallback='forbid'>EPYC-Rome</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <vendor>AMD</vendor>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <maxphysaddr mode='passthrough' limit='40'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <feature policy='require' name='x2apic'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <feature policy='require' name='tsc-deadline'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <feature policy='require' name='hypervisor'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <feature policy='require' name='tsc_adjust'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <feature policy='require' name='spec-ctrl'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <feature policy='require' name='stibp'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <feature policy='require' name='ssbd'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <feature policy='require' name='cmp_legacy'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <feature policy='require' name='overflow-recov'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <feature policy='require' name='succor'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <feature policy='require' name='ibrs'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <feature policy='require' name='amd-ssbd'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <feature policy='require' name='virt-ssbd'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <feature policy='require' name='lbrv'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <feature policy='require' name='tsc-scale'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <feature policy='require' name='vmcb-clean'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <feature policy='require' name='flushbyasid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <feature policy='require' name='pause-filter'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <feature policy='require' name='pfthreshold'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <feature policy='require' name='svme-addr-chk'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <feature policy='require' name='lfence-always-serializing'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <feature policy='disable' name='xsaves'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:     </mode>
Nov 24 09:47:09 compute-1 nova_compute[230010]:     <mode name='custom' supported='yes'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='Broadwell'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='erms'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='hle'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='rtm'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='Broadwell-IBRS'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='erms'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='hle'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='rtm'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='Broadwell-noTSX'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='erms'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='Broadwell-noTSX-IBRS'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='erms'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='Broadwell-v1'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='erms'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='hle'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='rtm'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='Broadwell-v2'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='erms'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='Broadwell-v3'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='erms'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='hle'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='rtm'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='Broadwell-v4'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='erms'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='Cascadelake-Server'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512bw'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512cd'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512dq'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512f'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vl'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vnni'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='erms'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='hle'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pku'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='rtm'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='Cascadelake-Server-noTSX'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512bw'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512cd'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512dq'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512f'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vl'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vnni'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='erms'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='ibrs-all'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pku'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='Cascadelake-Server-v1'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512bw'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512cd'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512dq'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512f'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vl'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vnni'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='erms'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='hle'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pku'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='rtm'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='Cascadelake-Server-v2'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512bw'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512cd'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512dq'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512f'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vl'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vnni'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='erms'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='hle'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='ibrs-all'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pku'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='rtm'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='Cascadelake-Server-v3'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512bw'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512cd'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512dq'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512f'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vl'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vnni'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='erms'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='ibrs-all'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pku'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='Cascadelake-Server-v4'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512bw'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512cd'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512dq'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512f'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vl'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vnni'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='erms'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='ibrs-all'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pku'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='Cascadelake-Server-v5'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512bw'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512cd'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512dq'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512f'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vl'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vnni'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='erms'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='ibrs-all'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pku'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='xsaves'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='Cooperlake'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512-bf16'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512bw'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512cd'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512dq'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512f'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vl'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vnni'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='erms'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='hle'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='ibrs-all'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pku'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='rtm'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='taa-no'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='Cooperlake-v1'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512-bf16'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512bw'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512cd'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512dq'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512f'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vl'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vnni'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='erms'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='hle'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='ibrs-all'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pku'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='rtm'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='taa-no'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='Cooperlake-v2'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512-bf16'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512bw'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512cd'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512dq'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512f'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vl'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vnni'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='erms'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='hle'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='ibrs-all'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pku'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='rtm'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='taa-no'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='xsaves'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='Denverton'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='erms'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='mpx'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='Denverton-v1'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='erms'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='mpx'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='Denverton-v2'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='erms'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='Denverton-v3'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='erms'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='xsaves'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='Dhyana-v2'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='xsaves'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='EPYC-Genoa'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='amd-psfd'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='auto-ibrs'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512-bf16'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512-vpopcntdq'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512bitalg'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512bw'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512cd'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512dq'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512f'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512ifma'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vbmi'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vbmi2'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vl'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vnni'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='erms'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='fsrm'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='gfni'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='la57'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='no-nested-data-bp'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='null-sel-clr-base'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pku'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='stibp-always-on'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='vaes'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='vpclmulqdq'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='xsaves'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='EPYC-Genoa-v1'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='amd-psfd'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='auto-ibrs'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512-bf16'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512-vpopcntdq'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512bitalg'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512bw'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512cd'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512dq'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512f'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512ifma'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vbmi'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vbmi2'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vl'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vnni'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='erms'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='fsrm'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='gfni'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='la57'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='no-nested-data-bp'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='null-sel-clr-base'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pku'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='stibp-always-on'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='vaes'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='vpclmulqdq'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='xsaves'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='EPYC-Milan'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='erms'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='fsrm'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pku'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='xsaves'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='EPYC-Milan-v1'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='erms'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='fsrm'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pku'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='xsaves'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='EPYC-Milan-v2'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='amd-psfd'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='erms'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='fsrm'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='no-nested-data-bp'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='null-sel-clr-base'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pku'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='stibp-always-on'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='vaes'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='vpclmulqdq'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='xsaves'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='EPYC-Rome'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='xsaves'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='EPYC-Rome-v1'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='xsaves'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='EPYC-Rome-v2'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='xsaves'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='EPYC-Rome-v3'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='xsaves'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='EPYC-v3'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='xsaves'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='EPYC-v4'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='xsaves'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='GraniteRapids'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='amx-bf16'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='amx-fp16'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='amx-int8'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='amx-tile'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx-vnni'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512-bf16'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512-fp16'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512-vpopcntdq'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512bitalg'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512bw'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512cd'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512dq'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512f'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512ifma'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vbmi'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vbmi2'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vl'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vnni'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='bus-lock-detect'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='erms'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='fbsdp-no'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='fsrc'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='fsrm'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='fsrs'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='fzrm'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='gfni'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='hle'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='ibrs-all'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='la57'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='mcdt-no'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pbrsb-no'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pku'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='prefetchiti'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='psdp-no'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='rtm'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='sbdr-ssdp-no'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='serialize'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='taa-no'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='tsx-ldtrk'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='vaes'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='vpclmulqdq'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='xfd'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='xsaves'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='GraniteRapids-v1'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='amx-bf16'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='amx-fp16'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='amx-int8'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='amx-tile'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx-vnni'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512-bf16'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512-fp16'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512-vpopcntdq'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512bitalg'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512bw'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512cd'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512dq'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512f'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512ifma'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vbmi'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vbmi2'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vl'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vnni'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='bus-lock-detect'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='erms'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='fbsdp-no'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='fsrc'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='fsrm'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='fsrs'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='fzrm'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='gfni'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='hle'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='ibrs-all'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='la57'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='mcdt-no'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pbrsb-no'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pku'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='prefetchiti'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='psdp-no'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='rtm'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='sbdr-ssdp-no'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='serialize'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='taa-no'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='tsx-ldtrk'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='vaes'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='vpclmulqdq'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='xfd'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='xsaves'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='GraniteRapids-v2'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='amx-bf16'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='amx-fp16'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='amx-int8'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='amx-tile'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx-vnni'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx10'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx10-128'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx10-256'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx10-512'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512-bf16'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512-fp16'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512-vpopcntdq'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512bitalg'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512bw'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512cd'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512dq'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512f'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512ifma'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vbmi'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vbmi2'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vl'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vnni'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='bus-lock-detect'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='cldemote'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='erms'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='fbsdp-no'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='fsrc'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='fsrm'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='fsrs'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='fzrm'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='gfni'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='hle'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='ibrs-all'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='la57'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='mcdt-no'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='movdir64b'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='movdiri'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pbrsb-no'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pku'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='prefetchiti'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='psdp-no'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='rtm'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='sbdr-ssdp-no'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='serialize'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='ss'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='taa-no'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='tsx-ldtrk'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='vaes'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='vpclmulqdq'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='xfd'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='xsaves'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='Haswell'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='erms'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='hle'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='rtm'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='Haswell-IBRS'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='erms'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='hle'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='rtm'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='Haswell-noTSX'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='erms'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='Haswell-noTSX-IBRS'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='erms'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='Haswell-v1'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='erms'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='hle'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='rtm'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='Haswell-v2'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='erms'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='Haswell-v3'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='erms'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='hle'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='rtm'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='Haswell-v4'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='erms'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='Icelake-Server'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512-vpopcntdq'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512bitalg'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512bw'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512cd'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512dq'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512f'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vbmi'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vbmi2'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vl'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vnni'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='erms'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='gfni'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='hle'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='la57'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pku'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='rtm'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='vaes'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='vpclmulqdq'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='Icelake-Server-noTSX'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512-vpopcntdq'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512bitalg'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512bw'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512cd'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512dq'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512f'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vbmi'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vbmi2'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vl'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vnni'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='erms'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='gfni'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='la57'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pku'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='vaes'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='vpclmulqdq'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='Icelake-Server-v1'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512-vpopcntdq'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512bitalg'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512bw'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512cd'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512dq'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512f'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vbmi'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vbmi2'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vl'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vnni'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='erms'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='gfni'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='hle'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='la57'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pku'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='rtm'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='vaes'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='vpclmulqdq'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='Icelake-Server-v2'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512-vpopcntdq'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512bitalg'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512bw'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512cd'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512dq'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512f'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vbmi'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vbmi2'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vl'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vnni'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='erms'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='gfni'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='la57'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pku'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='vaes'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='vpclmulqdq'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='Icelake-Server-v3'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512-vpopcntdq'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512bitalg'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512bw'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512cd'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512dq'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512f'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vbmi'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vbmi2'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vl'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vnni'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='erms'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='gfni'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='ibrs-all'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='la57'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pku'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='taa-no'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='vaes'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='vpclmulqdq'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='Icelake-Server-v4'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512-vpopcntdq'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512bitalg'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512bw'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512cd'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512dq'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512f'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512ifma'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vbmi'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vbmi2'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vl'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vnni'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='erms'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='fsrm'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='gfni'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='ibrs-all'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='la57'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pku'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='taa-no'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='vaes'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='vpclmulqdq'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='Icelake-Server-v5'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512-vpopcntdq'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512bitalg'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512bw'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512cd'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512dq'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512f'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512ifma'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vbmi'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vbmi2'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vl'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vnni'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='erms'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='fsrm'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='gfni'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='ibrs-all'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='la57'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pku'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='taa-no'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='vaes'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='vpclmulqdq'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='xsaves'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='Icelake-Server-v6'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512-vpopcntdq'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512bitalg'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512bw'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512cd'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512dq'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512f'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512ifma'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vbmi'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vbmi2'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vl'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vnni'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='erms'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='fsrm'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='gfni'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='ibrs-all'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='la57'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pku'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='taa-no'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='vaes'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='vpclmulqdq'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='xsaves'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='Icelake-Server-v7'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512-vpopcntdq'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512bitalg'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512bw'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512cd'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512dq'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512f'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512ifma'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vbmi'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vbmi2'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vl'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vnni'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='erms'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='fsrm'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='gfni'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='hle'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='ibrs-all'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='la57'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pku'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='rtm'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='taa-no'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='vaes'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='vpclmulqdq'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='xsaves'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='IvyBridge'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='erms'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='IvyBridge-IBRS'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='erms'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='IvyBridge-v1'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='erms'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='IvyBridge-v2'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='erms'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='KnightsMill'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512-4fmaps'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512-4vnniw'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512-vpopcntdq'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512cd'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512er'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512f'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512pf'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='erms'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='ss'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='KnightsMill-v1'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512-4fmaps'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512-4vnniw'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512-vpopcntdq'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512cd'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512er'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512f'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512pf'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='erms'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='ss'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='Opteron_G4'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='fma4'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='xop'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='Opteron_G4-v1'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='fma4'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='xop'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='Opteron_G5'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='fma4'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='tbm'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='xop'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='Opteron_G5-v1'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='fma4'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='tbm'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='xop'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='SapphireRapids'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='amx-bf16'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='amx-int8'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='amx-tile'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx-vnni'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512-bf16'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512-fp16'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512-vpopcntdq'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512bitalg'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512bw'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512cd'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512dq'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512f'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512ifma'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vbmi'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vbmi2'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vl'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vnni'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='bus-lock-detect'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='erms'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='fsrc'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='fsrm'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='fsrs'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='fzrm'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='gfni'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='hle'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='ibrs-all'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='la57'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pku'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='rtm'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='serialize'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='taa-no'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='tsx-ldtrk'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='vaes'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='vpclmulqdq'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='xfd'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='xsaves'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='SapphireRapids-v1'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='amx-bf16'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='amx-int8'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='amx-tile'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx-vnni'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512-bf16'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512-fp16'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512-vpopcntdq'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512bitalg'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512bw'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512cd'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512dq'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512f'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512ifma'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vbmi'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vbmi2'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vl'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vnni'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='bus-lock-detect'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='erms'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='fsrc'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='fsrm'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='fsrs'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='fzrm'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='gfni'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='hle'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='ibrs-all'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='la57'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pku'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='rtm'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='serialize'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='taa-no'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='tsx-ldtrk'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='vaes'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='vpclmulqdq'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='xfd'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='xsaves'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='SapphireRapids-v2'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='amx-bf16'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='amx-int8'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='amx-tile'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx-vnni'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512-bf16'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512-fp16'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512-vpopcntdq'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512bitalg'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512bw'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512cd'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512dq'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512f'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512ifma'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vbmi'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vbmi2'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vl'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vnni'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='bus-lock-detect'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='erms'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='fbsdp-no'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='fsrc'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='fsrm'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='fsrs'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='fzrm'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='gfni'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='hle'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='ibrs-all'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='la57'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pku'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='psdp-no'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='rtm'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='sbdr-ssdp-no'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='serialize'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='taa-no'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='tsx-ldtrk'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='vaes'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='vpclmulqdq'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='xfd'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='xsaves'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='SapphireRapids-v3'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='amx-bf16'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='amx-int8'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='amx-tile'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx-vnni'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512-bf16'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512-fp16'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512-vpopcntdq'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512bitalg'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512bw'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512cd'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512dq'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512f'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512ifma'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vbmi'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vbmi2'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vl'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vnni'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='bus-lock-detect'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='cldemote'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='erms'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='fbsdp-no'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='fsrc'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='fsrm'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='fsrs'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='fzrm'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='gfni'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='hle'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='ibrs-all'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='la57'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='movdir64b'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='movdiri'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pku'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='psdp-no'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='rtm'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='sbdr-ssdp-no'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='serialize'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='ss'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='taa-no'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='tsx-ldtrk'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='vaes'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='vpclmulqdq'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='xfd'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='xsaves'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='SierraForest'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx-ifma'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx-ne-convert'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx-vnni'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx-vnni-int8'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='bus-lock-detect'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='cmpccxadd'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='erms'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='fbsdp-no'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='fsrm'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='fsrs'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='gfni'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='ibrs-all'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='mcdt-no'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pbrsb-no'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pku'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='psdp-no'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='sbdr-ssdp-no'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='serialize'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='vaes'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='vpclmulqdq'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='xsaves'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='SierraForest-v1'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx-ifma'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx-ne-convert'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx-vnni'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx-vnni-int8'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='bus-lock-detect'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='cmpccxadd'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='erms'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='fbsdp-no'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='fsrm'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='fsrs'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='gfni'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='ibrs-all'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='mcdt-no'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pbrsb-no'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pku'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='psdp-no'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='sbdr-ssdp-no'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='serialize'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='vaes'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='vpclmulqdq'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='xsaves'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='Skylake-Client'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='erms'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='hle'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='rtm'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='Skylake-Client-IBRS'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='erms'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='hle'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='rtm'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='erms'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='Skylake-Client-v1'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='erms'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='hle'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='rtm'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='Skylake-Client-v2'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='erms'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='hle'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='rtm'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='Skylake-Client-v3'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='erms'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='Skylake-Client-v4'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='erms'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='xsaves'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='Skylake-Server'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512bw'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512cd'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512dq'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512f'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vl'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='erms'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='hle'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pku'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='rtm'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='Skylake-Server-IBRS'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512bw'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512cd'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512dq'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512f'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vl'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='erms'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='hle'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pku'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='rtm'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512bw'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512cd'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512dq'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512f'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vl'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='erms'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pku'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='Skylake-Server-v1'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512bw'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512cd'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512dq'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512f'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vl'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='erms'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='hle'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pku'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='rtm'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='Skylake-Server-v2'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512bw'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512cd'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512dq'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512f'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vl'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='erms'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='hle'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pku'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='rtm'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='Skylake-Server-v3'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512bw'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512cd'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512dq'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512f'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vl'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='erms'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pku'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='Skylake-Server-v4'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512bw'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512cd'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512dq'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512f'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vl'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='erms'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pku'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='Skylake-Server-v5'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512bw'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512cd'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512dq'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512f'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='avx512vl'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='erms'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='pku'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='xsaves'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='Snowridge'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='cldemote'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='core-capability'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='erms'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='gfni'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='movdir64b'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='movdiri'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='mpx'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='split-lock-detect'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='Snowridge-v1'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='cldemote'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='core-capability'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='erms'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='gfni'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='movdir64b'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='movdiri'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='mpx'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='split-lock-detect'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='Snowridge-v2'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='cldemote'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='core-capability'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='erms'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='gfni'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='movdir64b'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='movdiri'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='split-lock-detect'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='Snowridge-v3'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='cldemote'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='core-capability'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='erms'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='gfni'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='movdir64b'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='movdiri'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='split-lock-detect'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='xsaves'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='Snowridge-v4'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='cldemote'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='erms'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='gfni'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='movdir64b'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='movdiri'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='xsaves'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='athlon'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='3dnow'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='3dnowext'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='athlon-v1'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='3dnow'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='3dnowext'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='core2duo'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='ss'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='core2duo-v1'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='ss'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='coreduo'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='ss'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='coreduo-v1'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='ss'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='n270'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='ss'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='n270-v1'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='ss'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='phenom'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='3dnow'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='3dnowext'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <blockers model='phenom-v1'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='3dnow'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <feature name='3dnowext'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </blockers>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Nov 24 09:47:09 compute-1 nova_compute[230010]:     </mode>
Nov 24 09:47:09 compute-1 nova_compute[230010]:   </cpu>
Nov 24 09:47:09 compute-1 nova_compute[230010]:   <memoryBacking supported='yes'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:     <enum name='sourceType'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <value>file</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <value>anonymous</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <value>memfd</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:     </enum>
Nov 24 09:47:09 compute-1 nova_compute[230010]:   </memoryBacking>
Nov 24 09:47:09 compute-1 nova_compute[230010]:   <devices>
Nov 24 09:47:09 compute-1 nova_compute[230010]:     <disk supported='yes'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <enum name='diskDevice'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>disk</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>cdrom</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>floppy</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>lun</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </enum>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <enum name='bus'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>ide</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>fdc</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>scsi</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>virtio</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>usb</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>sata</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </enum>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <enum name='model'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>virtio</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>virtio-transitional</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>virtio-non-transitional</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </enum>
Nov 24 09:47:09 compute-1 nova_compute[230010]:     </disk>
Nov 24 09:47:09 compute-1 nova_compute[230010]:     <graphics supported='yes'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <enum name='type'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>vnc</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>egl-headless</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>dbus</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </enum>
Nov 24 09:47:09 compute-1 nova_compute[230010]:     </graphics>
Nov 24 09:47:09 compute-1 nova_compute[230010]:     <video supported='yes'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <enum name='modelType'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>vga</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>cirrus</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>virtio</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>none</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>bochs</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>ramfb</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </enum>
Nov 24 09:47:09 compute-1 nova_compute[230010]:     </video>
Nov 24 09:47:09 compute-1 nova_compute[230010]:     <hostdev supported='yes'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <enum name='mode'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>subsystem</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </enum>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <enum name='startupPolicy'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>default</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>mandatory</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>requisite</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>optional</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </enum>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <enum name='subsysType'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>usb</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>pci</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>scsi</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </enum>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <enum name='capsType'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <enum name='pciBackend'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:     </hostdev>
Nov 24 09:47:09 compute-1 nova_compute[230010]:     <rng supported='yes'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <enum name='model'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>virtio</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>virtio-transitional</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>virtio-non-transitional</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </enum>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <enum name='backendModel'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>random</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>egd</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>builtin</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </enum>
Nov 24 09:47:09 compute-1 nova_compute[230010]:     </rng>
Nov 24 09:47:09 compute-1 nova_compute[230010]:     <filesystem supported='yes'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <enum name='driverType'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>path</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>handle</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>virtiofs</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </enum>
Nov 24 09:47:09 compute-1 nova_compute[230010]:     </filesystem>
Nov 24 09:47:09 compute-1 nova_compute[230010]:     <tpm supported='yes'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <enum name='model'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>tpm-tis</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>tpm-crb</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </enum>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <enum name='backendModel'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>emulator</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>external</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </enum>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <enum name='backendVersion'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>2.0</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </enum>
Nov 24 09:47:09 compute-1 nova_compute[230010]:     </tpm>
Nov 24 09:47:09 compute-1 nova_compute[230010]:     <redirdev supported='yes'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <enum name='bus'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>usb</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </enum>
Nov 24 09:47:09 compute-1 nova_compute[230010]:     </redirdev>
Nov 24 09:47:09 compute-1 nova_compute[230010]:     <channel supported='yes'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <enum name='type'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>pty</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>unix</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </enum>
Nov 24 09:47:09 compute-1 nova_compute[230010]:     </channel>
Nov 24 09:47:09 compute-1 nova_compute[230010]:     <crypto supported='yes'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <enum name='model'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <enum name='type'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>qemu</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </enum>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <enum name='backendModel'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>builtin</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </enum>
Nov 24 09:47:09 compute-1 nova_compute[230010]:     </crypto>
Nov 24 09:47:09 compute-1 nova_compute[230010]:     <interface supported='yes'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <enum name='backendType'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>default</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>passt</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </enum>
Nov 24 09:47:09 compute-1 nova_compute[230010]:     </interface>
Nov 24 09:47:09 compute-1 nova_compute[230010]:     <panic supported='yes'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <enum name='model'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>isa</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>hyperv</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </enum>
Nov 24 09:47:09 compute-1 nova_compute[230010]:     </panic>
Nov 24 09:47:09 compute-1 nova_compute[230010]:     <console supported='yes'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <enum name='type'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>null</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>vc</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>pty</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>dev</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>file</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>pipe</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>stdio</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>udp</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>tcp</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>unix</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>qemu-vdagent</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>dbus</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </enum>
Nov 24 09:47:09 compute-1 nova_compute[230010]:     </console>
Nov 24 09:47:09 compute-1 nova_compute[230010]:   </devices>
Nov 24 09:47:09 compute-1 nova_compute[230010]:   <features>
Nov 24 09:47:09 compute-1 nova_compute[230010]:     <gic supported='no'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:     <vmcoreinfo supported='yes'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:     <genid supported='yes'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:     <backingStoreInput supported='yes'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:     <backup supported='yes'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:     <async-teardown supported='yes'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:     <ps2 supported='yes'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:     <sev supported='no'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:     <sgx supported='no'/>
Nov 24 09:47:09 compute-1 nova_compute[230010]:     <hyperv supported='yes'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <enum name='features'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>relaxed</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>vapic</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>spinlocks</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>vpindex</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>runtime</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>synic</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>stimer</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>reset</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>vendor_id</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>frequencies</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>reenlightenment</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>tlbflush</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>ipi</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>avic</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>emsr_bitmap</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>xmm_input</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </enum>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <defaults>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <spinlocks>4095</spinlocks>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <stimer_direct>on</stimer_direct>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <tlbflush_direct>on</tlbflush_direct>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <tlbflush_extended>on</tlbflush_extended>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <vendor_id>Linux KVM Hv</vendor_id>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </defaults>
Nov 24 09:47:09 compute-1 nova_compute[230010]:     </hyperv>
Nov 24 09:47:09 compute-1 nova_compute[230010]:     <launchSecurity supported='yes'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       <enum name='sectype'>
Nov 24 09:47:09 compute-1 nova_compute[230010]:         <value>tdx</value>
Nov 24 09:47:09 compute-1 nova_compute[230010]:       </enum>
Nov 24 09:47:09 compute-1 nova_compute[230010]:     </launchSecurity>
Nov 24 09:47:09 compute-1 nova_compute[230010]:   </features>
Nov 24 09:47:09 compute-1 nova_compute[230010]: </domainCapabilities>
Nov 24 09:47:09 compute-1 nova_compute[230010]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
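The XML dump that ends here is the full libvirt <domainCapabilities> document nova fetches once per emulator/arch/machine type at startup. A minimal sketch of the same query, assuming the libvirt-python bindings are installed and a qemu:///system socket is reachable (arch and virt type taken from this host):

```python
# Hedged sketch: reproduce the domainCapabilities query that
# _get_domain_capabilities logged above. Assumes libvirt-python and a
# local libvirtd/virtqemud answering on qemu:///system.
import libvirt

conn = libvirt.open("qemu:///system")
# Emulator binary and machine type are left as None so libvirt picks its
# defaults; "x86_64" and "kvm" match the host in this log.
caps_xml = conn.getDomainCapabilities(None, "x86_64", None, "kvm", 0)
print(caps_xml)  # same <domainCapabilities>...</domainCapabilities> text as above
conn.close()
```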
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.660 230014 DEBUG nova.virt.libvirt.host [None req-7781f489-90a3-45f2-98f2-bf7ccb5c1239 - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.661 230014 INFO nova.virt.libvirt.host [None req-7781f489-90a3-45f2-98f2-bf7ccb5c1239 - - - - - -] Secure Boot support detected
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.663 230014 INFO nova.virt.libvirt.driver [None req-7781f489-90a3-45f2-98f2-bf7ccb5c1239 - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.672 230014 DEBUG nova.virt.libvirt.driver [None req-7781f489-90a3-45f2-98f2-bf7ccb5c1239 - - - - - -] Enabling emulated TPM support _check_vtpm_support /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:1097
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.687 230014 INFO nova.virt.node [None req-7781f489-90a3-45f2-98f2-bf7ccb5c1239 - - - - - -] Determined node identity 1b7b0f22-dba8-42a8-9de3-763c9152946e from /var/lib/nova/compute_id
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.699 230014 WARNING nova.compute.manager [None req-7781f489-90a3-45f2-98f2-bf7ccb5c1239 - - - - - -] Compute nodes ['1b7b0f22-dba8-42a8-9de3-763c9152946e'] for host compute-1.ctlplane.example.com were not found in the database. If this is the first time this service is starting on this host, then you can ignore this warning.
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.722 230014 INFO nova.compute.manager [None req-7781f489-90a3-45f2-98f2-bf7ccb5c1239 - - - - - -] Looking for unclaimed instances stuck in BUILDING status for nodes managed by this host
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.736 230014 WARNING nova.compute.manager [None req-7781f489-90a3-45f2-98f2-bf7ccb5c1239 - - - - - -] No compute node record found for host compute-1.ctlplane.example.com. If this is the first time this service is starting on this host, then you can ignore this warning.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-1.ctlplane.example.com could not be found.
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.736 230014 DEBUG oslo_concurrency.lockutils [None req-7781f489-90a3-45f2-98f2-bf7ccb5c1239 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.736 230014 DEBUG oslo_concurrency.lockutils [None req-7781f489-90a3-45f2-98f2-bf7ccb5c1239 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.736 230014 DEBUG oslo_concurrency.lockutils [None req-7781f489-90a3-45f2-98f2-bf7ccb5c1239 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.737 230014 DEBUG nova.compute.resource_tracker [None req-7781f489-90a3-45f2-98f2-bf7ccb5c1239 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 24 09:47:09 compute-1 nova_compute[230010]: 2025-11-24 09:47:09.737 230014 DEBUG oslo_concurrency.processutils [None req-7781f489-90a3-45f2-98f2-bf7ccb5c1239 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 09:47:09 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 24 09:47:09 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 09:47:09 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Nov 24 09:47:09 compute-1 ceph-mon[80009]: log_channel(audit) log [INF] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 09:47:09 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Nov 24 09:47:09 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:47:09 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:47:09 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:47:09.885 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
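These radosgw "beast" access-log lines recur every two seconds as anonymous HEAD / probes answered with 200. A hedged sketch of pulling the client IP, request line, status, and latency out of one of them (the field layout is inferred from this log, not from radosgw documentation):

```python
# Hedged sketch: parse one radosgw beast access-log line copied from above.
import re

line = ('beast: 0x7fa9789055d0: 192.168.122.102 - anonymous '
        '[24/Nov/2025:09:47:09.885 +0000] "HEAD / HTTP/1.0" 200 0 '
        '- - - latency=0.000000000s')
pattern = (r'beast: \S+: (\S+) - (\S+) \[([^\]]+)\] '
           r'"([^"]+)" (\d+) (\d+).*latency=(\S+)')
m = re.search(pattern, line)
if m:
    ip, user, ts, request, status, size, latency = m.groups()
    # -> 192.168.122.102 HEAD / HTTP/1.0 200 0.000000000s
    print(ip, request, status, latency)
```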
Nov 24 09:47:09 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Nov 24 09:47:09 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Nov 24 09:47:09 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 09:47:09 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Nov 24 09:47:09 compute-1 ceph-mon[80009]: log_channel(audit) log [INF] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 09:47:09 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 24 09:47:09 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 09:47:10 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:47:10 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:47:10 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:47:10 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:47:10 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 09:47:10 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 09:47:10 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:47:10 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:47:10 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 09:47:10 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 09:47:10 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 09:47:10 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 24 09:47:10 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1422678948' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 09:47:10 compute-1 nova_compute[230010]: 2025-11-24 09:47:10.195 230014 DEBUG oslo_concurrency.processutils [None req-7781f489-90a3-45f2-98f2-bf7ccb5c1239 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
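The resource tracker shells out to `ceph df --format=json` here to size its storage pool. A hedged sketch of the same invocation and the cluster-wide totals in its JSON output (key names as emitted by recent Ceph releases; the `openstack` client ID and conf path are taken from the command logged above):

```python
# Hedged sketch: run the `ceph df` command logged above and read the
# cluster-wide capacity figures from its JSON "stats" section.
import json
import subprocess

out = subprocess.check_output(
    ["ceph", "df", "--format=json",
     "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"])
stats = json.loads(out)["stats"]
total_gib = stats["total_bytes"] / 1024 ** 3
avail_gib = stats["total_avail_bytes"] / 1024 ** 3
print(f"{avail_gib:.1f} GiB free of {total_gib:.1f} GiB")  # ~60/60 per the pgmap lines
```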
Nov 24 09:47:10 compute-1 rsyslogd[1005]: imjournal from <np0005533252:nova_compute>: begin to drop messages due to rate-limiting
Nov 24 09:47:10 compute-1 nova_compute[230010]: 2025-11-24 09:47:10.385 230014 WARNING nova.virt.libvirt.driver [None req-7781f489-90a3-45f2-98f2-bf7ccb5c1239 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 24 09:47:10 compute-1 nova_compute[230010]: 2025-11-24 09:47:10.386 230014 DEBUG nova.compute.resource_tracker [None req-7781f489-90a3-45f2-98f2-bf7ccb5c1239 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=5245MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
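The pci_devices payload embedded in that resource view is plain JSON. A hedged sketch that parses two entries copied from it and tallies devices per vendor (1af4 is the virtio vendor ID, 8086 Intel); the full list parses the same way:

```python
# Hedged sketch: the pci_devices list above is ordinary JSON.
import json
from collections import Counter

pci_json = """[
  {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0",
   "product_id": "1000", "vendor_id": "1af4", "numa_node": null,
   "label": "label_1af4_1000", "dev_type": "type-PCI"},
  {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0",
   "product_id": "1237", "vendor_id": "8086", "numa_node": null,
   "label": "label_8086_1237", "dev_type": "type-PCI"}
]"""
by_vendor = Counter(dev["vendor_id"] for dev in json.loads(pci_json))
print(by_vendor)  # Counter({'1af4': 1, '8086': 1})
```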
Nov 24 09:47:10 compute-1 nova_compute[230010]: 2025-11-24 09:47:10.387 230014 DEBUG oslo_concurrency.lockutils [None req-7781f489-90a3-45f2-98f2-bf7ccb5c1239 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 09:47:10 compute-1 nova_compute[230010]: 2025-11-24 09:47:10.387 230014 DEBUG oslo_concurrency.lockutils [None req-7781f489-90a3-45f2-98f2-bf7ccb5c1239 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 09:47:10 compute-1 nova_compute[230010]: 2025-11-24 09:47:10.398 230014 WARNING nova.compute.resource_tracker [None req-7781f489-90a3-45f2-98f2-bf7ccb5c1239 - - - - - -] No compute node record for compute-1.ctlplane.example.com:1b7b0f22-dba8-42a8-9de3-763c9152946e: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host 1b7b0f22-dba8-42a8-9de3-763c9152946e could not be found.
Nov 24 09:47:10 compute-1 nova_compute[230010]: 2025-11-24 09:47:10.415 230014 INFO nova.compute.resource_tracker [None req-7781f489-90a3-45f2-98f2-bf7ccb5c1239 - - - - - -] Compute node record created for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com with uuid: 1b7b0f22-dba8-42a8-9de3-763c9152946e
Nov 24 09:47:10 compute-1 nova_compute[230010]: 2025-11-24 09:47:10.468 230014 DEBUG nova.compute.resource_tracker [None req-7781f489-90a3-45f2-98f2-bf7ccb5c1239 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 24 09:47:10 compute-1 nova_compute[230010]: 2025-11-24 09:47:10.468 230014 DEBUG nova.compute.resource_tracker [None req-7781f489-90a3-45f2-98f2-bf7ccb5c1239 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 24 09:47:11 compute-1 ceph-mon[80009]: pgmap v598: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 597 B/s wr, 1 op/s
Nov 24 09:47:11 compute-1 ceph-mon[80009]: from='client.? 192.168.122.101:0/1422678948' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 09:47:11 compute-1 ceph-mon[80009]: from='client.? 192.168.122.100:0/4043361646' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 09:47:11 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:47:11 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:47:11 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:47:11.254 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:47:11 compute-1 nova_compute[230010]: 2025-11-24 09:47:11.383 230014 INFO nova.scheduler.client.report [None req-7781f489-90a3-45f2-98f2-bf7ccb5c1239 - - - - - -] [req-b0dea11a-245d-4e12-98a6-188afc959e3d] Created resource provider record via placement API for resource provider with UUID 1b7b0f22-dba8-42a8-9de3-763c9152946e and name compute-1.ctlplane.example.com.
Nov 24 09:47:11 compute-1 nova_compute[230010]: 2025-11-24 09:47:11.430 230014 DEBUG oslo_concurrency.processutils [None req-7781f489-90a3-45f2-98f2-bf7ccb5c1239 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 09:47:11 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 24 09:47:11 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1945923946' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 09:47:11 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:47:11 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:47:11 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:47:11.887 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:47:11 compute-1 nova_compute[230010]: 2025-11-24 09:47:11.891 230014 DEBUG oslo_concurrency.processutils [None req-7781f489-90a3-45f2-98f2-bf7ccb5c1239 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.462s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 09:47:11 compute-1 nova_compute[230010]: 2025-11-24 09:47:11.896 230014 DEBUG nova.virt.libvirt.host [None req-7781f489-90a3-45f2-98f2-bf7ccb5c1239 - - - - - -] /sys/module/kvm_amd/parameters/sev contains [N
Nov 24 09:47:11 compute-1 nova_compute[230010]: ] _kernel_supports_amd_sev /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1803
Nov 24 09:47:11 compute-1 nova_compute[230010]: 2025-11-24 09:47:11.897 230014 INFO nova.virt.libvirt.host [None req-7781f489-90a3-45f2-98f2-bf7ccb5c1239 - - - - - -] kernel doesn't support AMD SEV
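The SEV probe logged just above amounts to reading one kvm_amd module parameter. A hedged sketch of an equivalent check (the accepted values "1"/"Y" are an assumption consistent with the debug line showing the file holds "N" and support being reported as absent):

```python
# Hedged sketch of the AMD SEV probe logged above.
from pathlib import Path

param = Path("/sys/module/kvm_amd/parameters/sev")
sev_supported = param.exists() and param.read_text().strip() in ("1", "Y")
print("AMD SEV supported:", sev_supported)  # False on this host: the file holds "N"
```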
Nov 24 09:47:11 compute-1 nova_compute[230010]: 2025-11-24 09:47:11.897 230014 DEBUG nova.compute.provider_tree [None req-7781f489-90a3-45f2-98f2-bf7ccb5c1239 - - - - - -] Updating inventory in ProviderTree for provider 1b7b0f22-dba8-42a8-9de3-763c9152946e with inventory: {'MEMORY_MB': {'total': 7680, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Nov 24 09:47:11 compute-1 nova_compute[230010]: 2025-11-24 09:47:11.898 230014 DEBUG nova.virt.libvirt.driver [None req-7781f489-90a3-45f2-98f2-bf7ccb5c1239 - - - - - -] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 24 09:47:11 compute-1 nova_compute[230010]: 2025-11-24 09:47:11.933 230014 DEBUG nova.scheduler.client.report [None req-7781f489-90a3-45f2-98f2-bf7ccb5c1239 - - - - - -] Updated inventory for provider 1b7b0f22-dba8-42a8-9de3-763c9152946e with generation 0 in Placement from set_inventory_for_provider using data: {'MEMORY_MB': {'total': 7680, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:957
Nov 24 09:47:11 compute-1 nova_compute[230010]: 2025-11-24 09:47:11.933 230014 DEBUG nova.compute.provider_tree [None req-7781f489-90a3-45f2-98f2-bf7ccb5c1239 - - - - - -] Updating resource provider 1b7b0f22-dba8-42a8-9de3-763c9152946e generation from 0 to 1 during operation: update_inventory _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164
Nov 24 09:47:11 compute-1 nova_compute[230010]: 2025-11-24 09:47:11.933 230014 DEBUG nova.compute.provider_tree [None req-7781f489-90a3-45f2-98f2-bf7ccb5c1239 - - - - - -] Updating inventory in ProviderTree for provider 1b7b0f22-dba8-42a8-9de3-763c9152946e with inventory: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
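Placement computes usable capacity per resource class as (total - reserved) * allocation_ratio; a quick arithmetic check against the inventory payload logged above:

```python
# Hedged arithmetic check of the inventory pushed to Placement above.
inventory = {
    "MEMORY_MB": {"total": 7680, "reserved": 512, "allocation_ratio": 1.0},
    "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
    "DISK_GB":   {"total": 59,   "reserved": 0,   "allocation_ratio": 0.9},
}
for rc, inv in inventory.items():
    capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
    print(rc, capacity)  # MEMORY_MB 7168.0, VCPU 32.0, DISK_GB 53.1
```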
Nov 24 09:47:12 compute-1 ceph-mon[80009]: from='client.? 192.168.122.102:0/2145502785' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 09:47:12 compute-1 ceph-mon[80009]: from='client.? 192.168.122.100:0/1599294403' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 09:47:12 compute-1 ceph-mon[80009]: from='client.? 192.168.122.101:0/1945923946' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 09:47:12 compute-1 nova_compute[230010]: 2025-11-24 09:47:12.091 230014 DEBUG nova.compute.provider_tree [None req-7781f489-90a3-45f2-98f2-bf7ccb5c1239 - - - - - -] Updating resource provider 1b7b0f22-dba8-42a8-9de3-763c9152946e generation from 1 to 2 during operation: update_traits _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164
Nov 24 09:47:12 compute-1 nova_compute[230010]: 2025-11-24 09:47:12.112 230014 DEBUG nova.compute.resource_tracker [None req-7781f489-90a3-45f2-98f2-bf7ccb5c1239 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 24 09:47:12 compute-1 nova_compute[230010]: 2025-11-24 09:47:12.112 230014 DEBUG oslo_concurrency.lockutils [None req-7781f489-90a3-45f2-98f2-bf7ccb5c1239 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.725s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 09:47:12 compute-1 nova_compute[230010]: 2025-11-24 09:47:12.112 230014 DEBUG nova.service [None req-7781f489-90a3-45f2-98f2-bf7ccb5c1239 - - - - - -] Creating RPC server for service compute start /usr/lib/python3.9/site-packages/nova/service.py:182
Nov 24 09:47:12 compute-1 nova_compute[230010]: 2025-11-24 09:47:12.158 230014 DEBUG nova.service [None req-7781f489-90a3-45f2-98f2-bf7ccb5c1239 - - - - - -] Join ServiceGroup membership for this service compute start /usr/lib/python3.9/site-packages/nova/service.py:199
Nov 24 09:47:12 compute-1 nova_compute[230010]: 2025-11-24 09:47:12.158 230014 DEBUG nova.servicegroup.drivers.db [None req-7781f489-90a3-45f2-98f2-bf7ccb5c1239 - - - - - -] DB_Driver: join new ServiceGroup member compute-1.ctlplane.example.com to the compute group, service = <Service: host=compute-1.ctlplane.example.com, binary=nova-compute, manager_class_name=nova.compute.manager.ComputeManager> join /usr/lib/python3.9/site-packages/nova/servicegroup/drivers/db.py:44
Nov 24 09:47:13 compute-1 ceph-mon[80009]: from='client.? 192.168.122.102:0/2021533179' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 09:47:13 compute-1 ceph-mon[80009]: pgmap v599: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 1023 B/s wr, 3 op/s
Nov 24 09:47:13 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:47:13 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 24 09:47:13 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:47:13.256 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 24 09:47:13 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[229012]: 24/11/2025 09:47:13 : epoch 69242995 : compute-1 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Nov 24 09:47:13 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[229012]: 24/11/2025 09:47:13 : epoch 69242995 : compute-1 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Nov 24 09:47:13 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[229012]: 24/11/2025 09:47:13 : epoch 69242995 : compute-1 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Nov 24 09:47:13 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[229012]: 24/11/2025 09:47:13 : epoch 69242995 : compute-1 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Nov 24 09:47:13 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[229012]: 24/11/2025 09:47:13 : epoch 69242995 : compute-1 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Nov 24 09:47:13 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[229012]: 24/11/2025 09:47:13 : epoch 69242995 : compute-1 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Nov 24 09:47:13 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[229012]: 24/11/2025 09:47:13 : epoch 69242995 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Nov 24 09:47:13 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[229012]: 24/11/2025 09:47:13 : epoch 69242995 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 24 09:47:13 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[229012]: 24/11/2025 09:47:13 : epoch 69242995 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 24 09:47:13 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[229012]: 24/11/2025 09:47:13 : epoch 69242995 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 24 09:47:13 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[229012]: 24/11/2025 09:47:13 : epoch 69242995 : compute-1 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Nov 24 09:47:13 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[229012]: 24/11/2025 09:47:13 : epoch 69242995 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 24 09:47:13 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[229012]: 24/11/2025 09:47:13 : epoch 69242995 : compute-1 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Nov 24 09:47:13 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[229012]: 24/11/2025 09:47:13 : epoch 69242995 : compute-1 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Nov 24 09:47:13 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[229012]: 24/11/2025 09:47:13 : epoch 69242995 : compute-1 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Nov 24 09:47:13 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[229012]: 24/11/2025 09:47:13 : epoch 69242995 : compute-1 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Nov 24 09:47:13 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[229012]: 24/11/2025 09:47:13 : epoch 69242995 : compute-1 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Nov 24 09:47:13 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[229012]: 24/11/2025 09:47:13 : epoch 69242995 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Nov 24 09:47:13 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[229012]: 24/11/2025 09:47:13 : epoch 69242995 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Nov 24 09:47:13 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[229012]: 24/11/2025 09:47:13 : epoch 69242995 : compute-1 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Nov 24 09:47:13 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[229012]: 24/11/2025 09:47:13 : epoch 69242995 : compute-1 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Nov 24 09:47:13 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[229012]: 24/11/2025 09:47:13 : epoch 69242995 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Nov 24 09:47:13 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[229012]: 24/11/2025 09:47:13 : epoch 69242995 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Nov 24 09:47:13 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[229012]: 24/11/2025 09:47:13 : epoch 69242995 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Nov 24 09:47:13 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[229012]: 24/11/2025 09:47:13 : epoch 69242995 : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Nov 24 09:47:13 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[229012]: 24/11/2025 09:47:13 : epoch 69242995 : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Nov 24 09:47:13 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[229012]: 24/11/2025 09:47:13 : epoch 69242995 : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Nov 24 09:47:13 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:47:13 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:47:13 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:47:13.890 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:47:13 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[229012]: 24/11/2025 09:47:13 : epoch 69242995 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0ef8000df0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:47:13 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[229012]: 24/11/2025 09:47:13 : epoch 69242995 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0eec001c00 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:47:14 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 24 09:47:14 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 24 09:47:14 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:47:14 compute-1 sudo[230520]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 24 09:47:14 compute-1 sudo[230520]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:47:14 compute-1 sudo[230520]: pam_unix(sudo:session): session closed for user root
Nov 24 09:47:14 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[229012]: 24/11/2025 09:47:14 : epoch 69242995 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0ed8000fa0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:47:15 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:47:15 compute-1 ceph-mon[80009]: pgmap v600: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 938 B/s wr, 2 op/s
Nov 24 09:47:15 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:47:15 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:47:15 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 24 09:47:15 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:47:15.258 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 24 09:47:15 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 09:47:15 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:47:15 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:47:15 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:47:15 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:47:15.894 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:47:15 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[229012]: 24/11/2025 09:47:15 : epoch 69242995 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0ed4000b60 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:47:15 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-haproxy-nfs-cephfs-compute-1-rsdpvy[85975]: [WARNING] 327/094715 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Nov 24 09:47:15 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[229012]: 24/11/2025 09:47:15 : epoch 69242995 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0ef0001ac0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:47:16 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:47:16 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[229012]: 24/11/2025 09:47:16 : epoch 69242995 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0eec002520 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:47:17 compute-1 ceph-mon[80009]: pgmap v601: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 938 B/s wr, 2 op/s
Nov 24 09:47:17 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:47:17 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:47:17 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:47:17.260 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:47:17 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:47:17 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:47:17 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:47:17.897 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:47:17 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[229012]: 24/11/2025 09:47:17 : epoch 69242995 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0ef0001ac0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:47:17 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[229012]: 24/11/2025 09:47:17 : epoch 69242995 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0ed8001ac0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:47:18 compute-1 podman[230547]: 2025-11-24 09:47:18.333239534 +0000 UTC m=+0.074291199 container health_status 16798e42088c21440960ac0ba4f86339f9edab5c078277a1dcf4fb01c220329c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=multipathd, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 24 09:47:18 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[229012]: 24/11/2025 09:47:18 : epoch 69242995 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0ed40016a0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:47:19 compute-1 ceph-mon[80009]: pgmap v602: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 938 B/s wr, 3 op/s
Nov 24 09:47:19 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:47:19 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:47:19 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:47:19.262 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:47:19 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:47:19 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[229012]: 24/11/2025 09:47:19 : epoch 69242995 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0eec002520 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:47:19 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:47:19 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:47:19 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:47:19.900 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:47:19 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[229012]: 24/11/2025 09:47:19 : epoch 69242995 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0ef00027d0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:47:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:47:20.047 142336 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 09:47:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:47:20.048 142336 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 09:47:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:47:20.048 142336 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 09:47:20 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[229012]: 24/11/2025 09:47:20 : epoch 69242995 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0ed8001ac0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:47:21 compute-1 ceph-mon[80009]: pgmap v603: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 426 B/s wr, 1 op/s
Nov 24 09:47:21 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:47:21 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:47:21 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:47:21.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:47:21 compute-1 podman[230568]: 2025-11-24 09:47:21.343380283 +0000 UTC m=+0.082099775 container health_status c4a7fece2a8abbd773f184380ebf0443df5298e1c3e7a580495db5c46749acd2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 24 09:47:21 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[229012]: 24/11/2025 09:47:21 : epoch 69242995 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0ed4001fc0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:47:21 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:47:21 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:47:21 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:47:21.903 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:47:21 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[229012]: 24/11/2025 09:47:21 : epoch 69242995 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0eec002520 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:47:22 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[229012]: 24/11/2025 09:47:22 : epoch 69242995 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0ef00027d0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:47:23 compute-1 ceph-mon[80009]: pgmap v604: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 426 B/s wr, 2 op/s
Nov 24 09:47:23 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:47:23 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:47:23 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:47:23.266 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:47:23 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[229012]: 24/11/2025 09:47:23 : epoch 69242995 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0ed8001ac0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:47:23 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:47:23 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:47:23 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:47:23.905 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:47:23 compute-1 sudo[230597]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 09:47:23 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[229012]: 24/11/2025 09:47:23 : epoch 69242995 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0ed4001fc0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:47:23 compute-1 sudo[230597]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:47:23 compute-1 sudo[230597]: pam_unix(sudo:session): session closed for user root
Nov 24 09:47:24 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:47:24 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[229012]: 24/11/2025 09:47:24 : epoch 69242995 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0eec002520 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:47:25 compute-1 ceph-mon[80009]: pgmap v605: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Nov 24 09:47:25 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:47:25 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:47:25 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:47:25.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:47:25 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[229012]: 24/11/2025 09:47:25 : epoch 69242995 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0ef00034e0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:47:25 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:47:25 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:47:25 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:47:25.908 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:47:25 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[229012]: 24/11/2025 09:47:25 : epoch 69242995 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0ed8002f50 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:47:26 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[229012]: 24/11/2025 09:47:26 : epoch 69242995 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0ed4001fc0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:47:27 compute-1 ceph-mon[80009]: pgmap v606: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Nov 24 09:47:27 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:47:27 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 24 09:47:27 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:47:27.270 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 24 09:47:27 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[229012]: 24/11/2025 09:47:27 : epoch 69242995 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0eec002520 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:47:27 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:47:27 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:47:27 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:47:27.911 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:47:27 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[229012]: 24/11/2025 09:47:27 : epoch 69242995 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0ef00034e0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:47:28 compute-1 podman[230625]: 2025-11-24 09:47:28.300717685 +0000 UTC m=+0.045781957 container health_status 6794d9d2f16f865643977633ea2bfec0506af6c4f15262c278b71a4c0754ea0b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2)
Nov 24 09:47:28 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[229012]: 24/11/2025 09:47:28 : epoch 69242995 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0ed8002f50 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:47:29 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:47:29 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:47:29 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:47:29.272 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:47:29 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:47:29 compute-1 ceph-mon[80009]: pgmap v607: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Nov 24 09:47:29 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[229012]: 24/11/2025 09:47:29 : epoch 69242995 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0ed40032f0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:47:29 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:47:29 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 24 09:47:29 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:47:29.914 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 24 09:47:29 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[229012]: 24/11/2025 09:47:29 : epoch 69242995 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0eec002520 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:47:30 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 09:47:30 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:47:30 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[229012]: 24/11/2025 09:47:30 : epoch 69242995 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0ef00041f0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:47:31 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:47:31 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:47:31 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:47:31.274 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:47:31 compute-1 ceph-mon[80009]: pgmap v608: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:47:31 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:47:31 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[229012]: 24/11/2025 09:47:31 : epoch 69242995 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0ed8002f50 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:47:31 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[229012]: 24/11/2025 09:47:31 : epoch 69242995 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0ed40032f0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:47:31 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:47:31 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:47:31 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:47:31.917 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:47:32 compute-1 ceph-mon[80009]: pgmap v609: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Nov 24 09:47:32 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[229012]: 24/11/2025 09:47:32 : epoch 69242995 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0eec002520 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:47:33 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:47:33 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:47:33 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:47:33.276 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:47:33 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[229012]: 24/11/2025 09:47:33 : epoch 69242995 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0ef00041f0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:47:33 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[229012]: 24/11/2025 09:47:33 : epoch 69242995 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0ed8004050 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:47:33 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:47:33 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:47:33 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:47:33.920 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:47:34 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:47:34 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[229012]: 24/11/2025 09:47:34 : epoch 69242995 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0ed4004000 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:47:35 compute-1 ceph-mon[80009]: pgmap v610: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:47:35 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:47:35 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:47:35 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:47:35.278 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:47:35 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[229012]: 24/11/2025 09:47:35 : epoch 69242995 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0ed4004000 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:47:35 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[229012]: 24/11/2025 09:47:35 : epoch 69242995 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0ef00041f0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:47:35 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:47:35 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 24 09:47:35 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:47:35.923 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 24 09:47:36 compute-1 ceph-osd[77497]: bluestore.MempoolThread fragmentation_score=0.000026 took=0.000038s
Nov 24 09:47:36 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[229012]: 24/11/2025 09:47:36 : epoch 69242995 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0ed8004050 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:47:37 compute-1 ceph-mon[80009]: pgmap v611: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:47:37 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:47:37 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:47:37 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:47:37.280 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:47:37 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[229012]: 24/11/2025 09:47:37 : epoch 69242995 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0eec002520 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:47:37 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[229012]: 24/11/2025 09:47:37 : epoch 69242995 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0ed4004000 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:47:37 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:47:37 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:47:37 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:47:37.926 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:47:38 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[229012]: 24/11/2025 09:47:38 : epoch 69242995 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0ef00041f0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:47:39 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:47:39 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:47:39 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:47:39 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:47:39.282 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:47:39 compute-1 ceph-mon[80009]: pgmap v612: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Nov 24 09:47:39 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[229012]: 24/11/2025 09:47:39 : epoch 69242995 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0ed8004050 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:47:39 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[229012]: 24/11/2025 09:47:39 : epoch 69242995 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0eec002520 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:47:39 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:47:39 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:47:39 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:47:39.930 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:47:40 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[229012]: 24/11/2025 09:47:40 : epoch 69242995 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0ef00041f0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:47:41 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:47:41 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:47:41 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:47:41.285 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:47:41 compute-1 ceph-mon[80009]: pgmap v613: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:47:41 compute-1 ceph-mon[80009]: from='client.? 192.168.122.10:0/4101053094' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 24 09:47:41 compute-1 ceph-mon[80009]: from='client.? 192.168.122.10:0/4101053094' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 24 09:47:41 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Nov 24 09:47:41 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1397509015' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 24 09:47:41 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Nov 24 09:47:41 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1397509015' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 24 09:47:41 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Nov 24 09:47:41 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4177524823' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 24 09:47:41 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Nov 24 09:47:41 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4177524823' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 24 09:47:41 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[229012]: 24/11/2025 09:47:41 : epoch 69242995 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0ed4004000 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:47:41 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[229012]: 24/11/2025 09:47:41 : epoch 69242995 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0ed8004050 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:47:41 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:47:41 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:47:41 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:47:41.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:47:42 compute-1 ceph-mon[80009]: from='client.? 192.168.122.10:0/1397509015' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 24 09:47:42 compute-1 ceph-mon[80009]: from='client.? 192.168.122.10:0/1397509015' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 24 09:47:42 compute-1 ceph-mon[80009]: from='client.? 192.168.122.10:0/4177524823' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 24 09:47:42 compute-1 ceph-mon[80009]: from='client.? 192.168.122.10:0/4177524823' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 24 09:47:42 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[229012]: 24/11/2025 09:47:42 : epoch 69242995 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0eec002520 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:47:43 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:47:43 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:47:43 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:47:43.288 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:47:43 compute-1 ceph-mon[80009]: pgmap v614: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Nov 24 09:47:43 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[229012]: 24/11/2025 09:47:43 : epoch 69242995 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0ef00041f0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:47:43 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[229012]: 24/11/2025 09:47:43 : epoch 69242995 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0ed4004000 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:47:43 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:47:43 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:47:43 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:47:43.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:47:44 compute-1 sudo[230652]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 09:47:44 compute-1 sudo[230652]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:47:44 compute-1 sudo[230652]: pam_unix(sudo:session): session closed for user root
Nov 24 09:47:44 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:47:44 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[229012]: 24/11/2025 09:47:44 : epoch 69242995 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0ed8004050 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:47:45 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:47:45 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:47:45 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:47:45.290 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:47:45 compute-1 ceph-mon[80009]: pgmap v615: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:47:45 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 09:47:45 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:47:45 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[229012]: 24/11/2025 09:47:45 : epoch 69242995 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0eec002520 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:47:45 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[229012]: 24/11/2025 09:47:45 : epoch 69242995 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0ef00041f0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:47:45 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:47:45 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:47:45 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:47:45.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:47:46 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:47:46 compute-1 ceph-mon[80009]: pgmap v616: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:47:46 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[229012]: 24/11/2025 09:47:46 : epoch 69242995 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0ebc000b60 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:47:47 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:47:47 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:47:47 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:47:47.292 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:47:47 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[229012]: 24/11/2025 09:47:47 : epoch 69242995 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0ed8004050 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:47:47 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[229012]: 24/11/2025 09:47:47 : epoch 69242995 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0eec002520 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:47:47 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:47:47 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:47:47 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:47:47.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:47:48 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[229012]: 24/11/2025 09:47:48 : epoch 69242995 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0ef00041f0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:47:49 compute-1 ceph-mon[80009]: pgmap v617: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Nov 24 09:47:49 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:47:49 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:47:49 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 24 09:47:49 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:47:49.294 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 24 09:47:49 compute-1 podman[230681]: 2025-11-24 09:47:49.313063573 +0000 UTC m=+0.055365636 container health_status 16798e42088c21440960ac0ba4f86339f9edab5c078277a1dcf4fb01c220329c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 24 09:47:49 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[229012]: 24/11/2025 09:47:49 : epoch 69242995 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0ef00041f0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:47:49 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[229012]: 24/11/2025 09:47:49 : epoch 69242995 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0ed8004050 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:47:49 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:47:49 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 24 09:47:49 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:47:49.947 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 24 09:47:50 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[229012]: 24/11/2025 09:47:50 : epoch 69242995 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0eec002520 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:47:51 compute-1 ceph-mon[80009]: pgmap v618: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:47:51 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:47:51 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 24 09:47:51 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:47:51.296 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 24 09:47:51 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[229012]: 24/11/2025 09:47:51 : epoch 69242995 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0ef00041f0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:47:51 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[229012]: 24/11/2025 09:47:51 : epoch 69242995 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0ebc0016a0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:47:51 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:47:51 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:47:51 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:47:51.951 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:47:52 compute-1 podman[230704]: 2025-11-24 09:47:52.335168033 +0000 UTC m=+0.079662364 container health_status c4a7fece2a8abbd773f184380ebf0443df5298e1c3e7a580495db5c46749acd2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 09:47:52 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[229012]: 24/11/2025 09:47:52 : epoch 69242995 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0ed8004050 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:47:53 compute-1 ceph-mon[80009]: pgmap v619: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Nov 24 09:47:53 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:47:53 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:47:53 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:47:53.298 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:47:53 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[229012]: 24/11/2025 09:47:53 : epoch 69242995 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0eec002520 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:47:53 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[229012]: 24/11/2025 09:47:53 : epoch 69242995 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0eec002520 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:47:53 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:47:53 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:47:53 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:47:53.954 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:47:54 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:47:54 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[229012]: 24/11/2025 09:47:54 : epoch 69242995 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0ebc001fc0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:47:55 compute-1 ceph-mon[80009]: pgmap v620: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:47:55 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:47:55 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:47:55 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:47:55.300 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:47:55 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[229012]: 24/11/2025 09:47:55 : epoch 69242995 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0ed8004050 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:47:55 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[229012]: 24/11/2025 09:47:55 : epoch 69242995 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0ef00041f0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:47:55 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:47:55 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:47:55 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:47:55.957 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:47:56 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[229012]: 24/11/2025 09:47:56 : epoch 69242995 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0eec002520 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:47:57 compute-1 ceph-mon[80009]: pgmap v621: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:47:57 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:47:57 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:47:57 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:47:57.302 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:47:57 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[229012]: 24/11/2025 09:47:57 : epoch 69242995 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0ebc001fc0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:47:57 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[229012]: 24/11/2025 09:47:57 : epoch 69242995 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0ed8004050 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:47:57 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:47:57 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:47:57 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:47:57.960 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:47:58 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[229012]: 24/11/2025 09:47:58 : epoch 69242995 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0ef00041f0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:47:59 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:47:59 compute-1 podman[230733]: 2025-11-24 09:47:59.299065151 +0000 UTC m=+0.043490268 container health_status 6794d9d2f16f865643977633ea2bfec0506af6c4f15262c278b71a4c0754ea0b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Nov 24 09:47:59 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:47:59 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 24 09:47:59 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:47:59.303 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 24 09:47:59 compute-1 ceph-mon[80009]: pgmap v622: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 106 KiB/s rd, 0 B/s wr, 176 op/s
Nov 24 09:47:59 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[229012]: 24/11/2025 09:47:59 : epoch 69242995 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0eec002520 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:47:59 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[229012]: 24/11/2025 09:47:59 : epoch 69242995 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0ebc001fc0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:47:59 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:47:59 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:47:59 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:47:59.963 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:48:00 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 09:48:00 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:48:00 compute-1 ceph-mon[80009]: pgmap v623: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 106 KiB/s rd, 0 B/s wr, 176 op/s
Nov 24 09:48:00 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[229012]: 24/11/2025 09:48:00 : epoch 69242995 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0ed8004050 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:48:01 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:48:01 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:48:01 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:48:01.305 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:48:01 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:48:01 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[229012]: 24/11/2025 09:48:01 : epoch 69242995 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0ef00041f0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:48:01 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[229012]: 24/11/2025 09:48:01 : epoch 69242995 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0eec002520 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:48:01 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:48:01 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:48:01 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:48:01.966 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:48:02 compute-1 ceph-mon[80009]: pgmap v624: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 106 KiB/s rd, 0 B/s wr, 177 op/s
Nov 24 09:48:02 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[229012]: 24/11/2025 09:48:02 : epoch 69242995 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0ebc0032f0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:48:03 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:48:03 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:48:03 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:48:03.307 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:48:03 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[229012]: 24/11/2025 09:48:03 : epoch 69242995 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0ed8004050 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:48:03 compute-1 kernel: ganesha.nfsd[230510]: segfault at 50 ip 00007f0fa05df32e sp 00007f0f617f9210 error 4 in libntirpc.so.5.8[7f0fa05c4000+2c000] likely on CPU 2 (core 0, socket 2)
Nov 24 09:48:03 compute-1 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Nov 24 09:48:03 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[229012]: 24/11/2025 09:48:03 : epoch 69242995 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0ed8004050 fd 37 proxy ignored for local
Nov 24 09:48:03 compute-1 systemd[1]: Started Process Core Dump (PID 230755/UID 0).
Nov 24 09:48:03 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:48:03 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:48:03 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:48:03.969 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:48:04 compute-1 sudo[230757]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 09:48:04 compute-1 sudo[230757]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:48:04 compute-1 sudo[230757]: pam_unix(sudo:session): session closed for user root
Nov 24 09:48:04 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:48:05 compute-1 systemd-coredump[230756]: Process 229016 (ganesha.nfsd) of user 0 dumped core.
                                                    
                                                    Stack trace of thread 46:
                                                    #0  0x00007f0fa05df32e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)
                                                    ELF object binary architecture: AMD x86-64
Nov 24 09:48:05 compute-1 systemd[1]: systemd-coredump@10-230755-0.service: Deactivated successfully.
Nov 24 09:48:05 compute-1 systemd[1]: systemd-coredump@10-230755-0.service: Consumed 1.180s CPU time.
Nov 24 09:48:05 compute-1 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 24 09:48:05 compute-1 podman[230787]: 2025-11-24 09:48:05.246260127 +0000 UTC m=+0.025018236 container died 43fd0b496718b6eeeae7d88ea5f91542ae91f5585f1476ce3ea629e7cd469e22 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 09:48:05 compute-1 ceph-mon[80009]: pgmap v625: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 106 KiB/s rd, 0 B/s wr, 176 op/s
Nov 24 09:48:05 compute-1 systemd[1]: var-lib-containers-storage-overlay-81a4b1d9e246d85aca9deb7a685b356722e725e07097b58faac36c6d269f9e1d-merged.mount: Deactivated successfully.
Nov 24 09:48:05 compute-1 podman[230787]: 2025-11-24 09:48:05.285185233 +0000 UTC m=+0.063943342 container remove 43fd0b496718b6eeeae7d88ea5f91542ae91f5585f1476ce3ea629e7cd469e22 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 09:48:05 compute-1 systemd[1]: ceph-84a084c3-61a7-5de7-8207-1f88efa59a64@nfs.cephfs.0.0.compute-1.vvoanr.service: Main process exited, code=exited, status=139/n/a
Nov 24 09:48:05 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:48:05 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:48:05 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:48:05.309 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:48:05 compute-1 systemd[1]: ceph-84a084c3-61a7-5de7-8207-1f88efa59a64@nfs.cephfs.0.0.compute-1.vvoanr.service: Failed with result 'exit-code'.
Nov 24 09:48:05 compute-1 systemd[1]: ceph-84a084c3-61a7-5de7-8207-1f88efa59a64@nfs.cephfs.0.0.compute-1.vvoanr.service: Consumed 1.422s CPU time.
Nov 24 09:48:05 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:48:05 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:48:05 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:48:05.973 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:48:06 compute-1 nova_compute[230010]: 2025-11-24 09:48:06.161 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 09:48:06 compute-1 nova_compute[230010]: 2025-11-24 09:48:06.184 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 09:48:07 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:48:07 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:48:07 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:48:07.311 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:48:07 compute-1 ceph-mon[80009]: pgmap v626: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 106 KiB/s rd, 0 B/s wr, 176 op/s
Nov 24 09:48:07 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:48:07 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:48:07 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:48:07.977 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:48:08 compute-1 nova_compute[230010]: 2025-11-24 09:48:08.766 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 09:48:08 compute-1 nova_compute[230010]: 2025-11-24 09:48:08.767 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 09:48:08 compute-1 nova_compute[230010]: 2025-11-24 09:48:08.767 230014 DEBUG nova.compute.manager [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 24 09:48:08 compute-1 nova_compute[230010]: 2025-11-24 09:48:08.767 230014 DEBUG nova.compute.manager [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 24 09:48:08 compute-1 nova_compute[230010]: 2025-11-24 09:48:08.789 230014 DEBUG nova.compute.manager [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 24 09:48:08 compute-1 nova_compute[230010]: 2025-11-24 09:48:08.789 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 09:48:08 compute-1 nova_compute[230010]: 2025-11-24 09:48:08.789 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 09:48:08 compute-1 nova_compute[230010]: 2025-11-24 09:48:08.790 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 09:48:08 compute-1 nova_compute[230010]: 2025-11-24 09:48:08.790 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 09:48:08 compute-1 nova_compute[230010]: 2025-11-24 09:48:08.790 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 09:48:08 compute-1 nova_compute[230010]: 2025-11-24 09:48:08.790 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 09:48:08 compute-1 nova_compute[230010]: 2025-11-24 09:48:08.791 230014 DEBUG nova.compute.manager [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 24 09:48:08 compute-1 nova_compute[230010]: 2025-11-24 09:48:08.791 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 09:48:08 compute-1 nova_compute[230010]: 2025-11-24 09:48:08.830 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 09:48:08 compute-1 nova_compute[230010]: 2025-11-24 09:48:08.830 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 09:48:08 compute-1 nova_compute[230010]: 2025-11-24 09:48:08.830 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 09:48:08 compute-1 nova_compute[230010]: 2025-11-24 09:48:08.830 230014 DEBUG nova.compute.resource_tracker [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 24 09:48:08 compute-1 nova_compute[230010]: 2025-11-24 09:48:08.831 230014 DEBUG oslo_concurrency.processutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 09:48:09 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 24 09:48:09 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3564109661' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 09:48:09 compute-1 nova_compute[230010]: 2025-11-24 09:48:09.283 230014 DEBUG oslo_concurrency.processutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.452s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 09:48:09 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:48:09 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:48:09 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:48:09 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:48:09.313 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:48:09 compute-1 ceph-mon[80009]: pgmap v627: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 106 KiB/s rd, 0 B/s wr, 176 op/s
Nov 24 09:48:09 compute-1 ceph-mon[80009]: from='client.? 192.168.122.101:0/3564109661' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 09:48:09 compute-1 nova_compute[230010]: 2025-11-24 09:48:09.431 230014 WARNING nova.virt.libvirt.driver [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 24 09:48:09 compute-1 nova_compute[230010]: 2025-11-24 09:48:09.433 230014 DEBUG nova.compute.resource_tracker [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=5285MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 24 09:48:09 compute-1 nova_compute[230010]: 2025-11-24 09:48:09.433 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 09:48:09 compute-1 nova_compute[230010]: 2025-11-24 09:48:09.433 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 09:48:09 compute-1 nova_compute[230010]: 2025-11-24 09:48:09.798 230014 DEBUG nova.compute.resource_tracker [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 24 09:48:09 compute-1 nova_compute[230010]: 2025-11-24 09:48:09.799 230014 DEBUG nova.compute.resource_tracker [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 24 09:48:09 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-haproxy-nfs-cephfs-compute-1-rsdpvy[85975]: [WARNING] 327/094809 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Nov 24 09:48:09 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:48:09 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:48:09 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:48:09.979 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:48:10 compute-1 nova_compute[230010]: 2025-11-24 09:48:10.191 230014 DEBUG oslo_concurrency.processutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 09:48:10 compute-1 ceph-mon[80009]: from='client.? 192.168.122.100:0/2959557878' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 09:48:10 compute-1 ceph-mon[80009]: from='client.? 192.168.122.102:0/308233780' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 09:48:10 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 24 09:48:10 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4255591343' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 09:48:10 compute-1 nova_compute[230010]: 2025-11-24 09:48:10.640 230014 DEBUG oslo_concurrency.processutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.449s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 09:48:10 compute-1 nova_compute[230010]: 2025-11-24 09:48:10.646 230014 DEBUG nova.compute.provider_tree [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Inventory has not changed in ProviderTree for provider: 1b7b0f22-dba8-42a8-9de3-763c9152946e update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 24 09:48:10 compute-1 nova_compute[230010]: 2025-11-24 09:48:10.666 230014 DEBUG nova.scheduler.client.report [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Inventory has not changed for provider 1b7b0f22-dba8-42a8-9de3-763c9152946e based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 24 09:48:10 compute-1 nova_compute[230010]: 2025-11-24 09:48:10.668 230014 DEBUG nova.compute.resource_tracker [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 24 09:48:10 compute-1 nova_compute[230010]: 2025-11-24 09:48:10.668 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.234s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 09:48:11 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:48:11 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:48:11 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:48:11.316 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:48:11 compute-1 ceph-mon[80009]: pgmap v628: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:48:11 compute-1 ceph-mon[80009]: from='client.? 192.168.122.101:0/4255591343' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 09:48:11 compute-1 ceph-mon[80009]: from='client.? 192.168.122.100:0/179360064' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 09:48:11 compute-1 ceph-mon[80009]: from='client.? 192.168.122.102:0/579533964' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 09:48:11 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:48:11 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:48:11 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:48:11.983 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:48:12 compute-1 ceph-mon[80009]: pgmap v629: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Nov 24 09:48:13 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:48:13 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:48:13 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:48:13.317 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:48:13 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:48:13 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:48:13 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:48:13.986 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:48:14 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:48:14 compute-1 sudo[230880]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 09:48:14 compute-1 sudo[230880]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:48:14 compute-1 sudo[230880]: pam_unix(sudo:session): session closed for user root
Nov 24 09:48:14 compute-1 sudo[230905]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Nov 24 09:48:14 compute-1 sudo[230905]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:48:15 compute-1 sudo[230905]: pam_unix(sudo:session): session closed for user root
Nov 24 09:48:15 compute-1 ceph-mon[80009]: pgmap v630: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:48:15 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:48:15 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:48:15 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:48:15.320 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:48:15 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 24 09:48:15 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 09:48:15 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Nov 24 09:48:15 compute-1 ceph-mon[80009]: log_channel(audit) log [INF] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 09:48:15 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Nov 24 09:48:15 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Nov 24 09:48:15 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Nov 24 09:48:15 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 09:48:15 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Nov 24 09:48:15 compute-1 ceph-mon[80009]: log_channel(audit) log [INF] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 09:48:15 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 24 09:48:15 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 09:48:15 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 09:48:15 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:48:15 compute-1 systemd[1]: ceph-84a084c3-61a7-5de7-8207-1f88efa59a64@nfs.cephfs.0.0.compute-1.vvoanr.service: Scheduled restart job, restart counter is at 11.
Nov 24 09:48:15 compute-1 systemd[1]: Stopped Ceph nfs.cephfs.0.0.compute-1.vvoanr for 84a084c3-61a7-5de7-8207-1f88efa59a64.
Nov 24 09:48:15 compute-1 systemd[1]: ceph-84a084c3-61a7-5de7-8207-1f88efa59a64@nfs.cephfs.0.0.compute-1.vvoanr.service: Consumed 1.422s CPU time.
Nov 24 09:48:15 compute-1 systemd[1]: Starting Ceph nfs.cephfs.0.0.compute-1.vvoanr for 84a084c3-61a7-5de7-8207-1f88efa59a64...
Nov 24 09:48:15 compute-1 podman[231005]: 2025-11-24 09:48:15.702427134 +0000 UTC m=+0.038043007 container create 05c20cd827e532ef8524f3d7dc6b70bb0e75e399c723c3f02ef2e2e54e9f8309 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 24 09:48:15 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3fddea2db1f8d25f3598b15284341926374a5845a0f9d9674c2a54b8006c3ae3/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Nov 24 09:48:15 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3fddea2db1f8d25f3598b15284341926374a5845a0f9d9674c2a54b8006c3ae3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 09:48:15 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3fddea2db1f8d25f3598b15284341926374a5845a0f9d9674c2a54b8006c3ae3/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 09:48:15 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3fddea2db1f8d25f3598b15284341926374a5845a0f9d9674c2a54b8006c3ae3/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.0.0.compute-1.vvoanr-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 09:48:15 compute-1 podman[231005]: 2025-11-24 09:48:15.767069893 +0000 UTC m=+0.102685786 container init 05c20cd827e532ef8524f3d7dc6b70bb0e75e399c723c3f02ef2e2e54e9f8309 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 24 09:48:15 compute-1 podman[231005]: 2025-11-24 09:48:15.7734962 +0000 UTC m=+0.109112073 container start 05c20cd827e532ef8524f3d7dc6b70bb0e75e399c723c3f02ef2e2e54e9f8309 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 24 09:48:15 compute-1 bash[231005]: 05c20cd827e532ef8524f3d7dc6b70bb0e75e399c723c3f02ef2e2e54e9f8309
Nov 24 09:48:15 compute-1 podman[231005]: 2025-11-24 09:48:15.685716763 +0000 UTC m=+0.021332646 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 09:48:15 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[231020]: 24/11/2025 09:48:15 : epoch 692429df : compute-1 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Nov 24 09:48:15 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[231020]: 24/11/2025 09:48:15 : epoch 692429df : compute-1 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Nov 24 09:48:15 compute-1 systemd[1]: Started Ceph nfs.cephfs.0.0.compute-1.vvoanr for 84a084c3-61a7-5de7-8207-1f88efa59a64.
Nov 24 09:48:15 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[231020]: 24/11/2025 09:48:15 : epoch 692429df : compute-1 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Nov 24 09:48:15 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[231020]: 24/11/2025 09:48:15 : epoch 692429df : compute-1 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Nov 24 09:48:15 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[231020]: 24/11/2025 09:48:15 : epoch 692429df : compute-1 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Nov 24 09:48:15 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[231020]: 24/11/2025 09:48:15 : epoch 692429df : compute-1 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Nov 24 09:48:15 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[231020]: 24/11/2025 09:48:15 : epoch 692429df : compute-1 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Nov 24 09:48:15 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[231020]: 24/11/2025 09:48:15 : epoch 692429df : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 09:48:15 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:48:15 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:48:15 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:48:15.989 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:48:16 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 09:48:16 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 09:48:16 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:48:16 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:48:16 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 09:48:16 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 09:48:16 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 09:48:16 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:48:17 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:48:17 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:48:17 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:48:17.322 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:48:17 compute-1 ceph-mon[80009]: pgmap v631: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:48:17 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:48:17 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:48:17 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:48:17.991 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:48:19 compute-1 ceph-mon[80009]: pgmap v632: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 85 B/s wr, 0 op/s
Nov 24 09:48:19 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:48:19 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:48:19 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:48:19 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:48:19.324 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:48:19 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:48:19 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:48:19 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:48:19.994 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:48:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:48:20.048 142336 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 09:48:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:48:20.049 142336 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 09:48:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:48:20.049 142336 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 09:48:20 compute-1 podman[231065]: 2025-11-24 09:48:20.314873576 +0000 UTC m=+0.057600897 container health_status 16798e42088c21440960ac0ba4f86339f9edab5c078277a1dcf4fb01c220329c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible)
Nov 24 09:48:20 compute-1 ceph-mon[80009]: pgmap v633: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Nov 24 09:48:21 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:48:21 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:48:21 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:48:21.327 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:48:21 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[231020]: 24/11/2025 09:48:21 : epoch 692429df : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 09:48:21 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[231020]: 24/11/2025 09:48:21 : epoch 692429df : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 09:48:21 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:48:21 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:48:22 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:48:21.998 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:48:23 compute-1 ceph-mon[80009]: pgmap v634: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Nov 24 09:48:23 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:48:23 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:48:23 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:48:23.329 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:48:23 compute-1 podman[231089]: 2025-11-24 09:48:23.333120333 +0000 UTC m=+0.076602105 container health_status c4a7fece2a8abbd773f184380ebf0443df5298e1c3e7a580495db5c46749acd2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Nov 24 09:48:23 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 24 09:48:23 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 24 09:48:24 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:48:24 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:48:24 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:48:24.003 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:48:24 compute-1 sudo[231117]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 24 09:48:24 compute-1 sudo[231117]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:48:24 compute-1 sudo[231117]: pam_unix(sudo:session): session closed for user root
Nov 24 09:48:24 compute-1 sudo[231142]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 09:48:24 compute-1 sudo[231142]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:48:24 compute-1 sudo[231142]: pam_unix(sudo:session): session closed for user root
Nov 24 09:48:24 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:48:24 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:48:24 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:48:24 compute-1 ceph-mon[80009]: pgmap v635: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 597 B/s wr, 1 op/s
Nov 24 09:48:25 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:48:25 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:48:25 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:48:25.331 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:48:26 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:48:26 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:48:26 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:48:26.006 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:48:27 compute-1 ceph-mon[80009]: pgmap v636: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 597 B/s wr, 1 op/s
Nov 24 09:48:27 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:48:27 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:48:27 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:48:27.333 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:48:27 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[231020]: 24/11/2025 09:48:27 : epoch 692429df : compute-1 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Nov 24 09:48:27 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[231020]: 24/11/2025 09:48:27 : epoch 692429df : compute-1 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Nov 24 09:48:27 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[231020]: 24/11/2025 09:48:27 : epoch 692429df : compute-1 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Nov 24 09:48:27 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[231020]: 24/11/2025 09:48:27 : epoch 692429df : compute-1 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Nov 24 09:48:27 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[231020]: 24/11/2025 09:48:27 : epoch 692429df : compute-1 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Nov 24 09:48:27 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[231020]: 24/11/2025 09:48:27 : epoch 692429df : compute-1 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Nov 24 09:48:27 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[231020]: 24/11/2025 09:48:27 : epoch 692429df : compute-1 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Nov 24 09:48:27 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[231020]: 24/11/2025 09:48:27 : epoch 692429df : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 24 09:48:27 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[231020]: 24/11/2025 09:48:27 : epoch 692429df : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 24 09:48:27 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[231020]: 24/11/2025 09:48:27 : epoch 692429df : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 24 09:48:27 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[231020]: 24/11/2025 09:48:27 : epoch 692429df : compute-1 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Nov 24 09:48:27 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[231020]: 24/11/2025 09:48:27 : epoch 692429df : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 24 09:48:27 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[231020]: 24/11/2025 09:48:27 : epoch 692429df : compute-1 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Nov 24 09:48:27 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[231020]: 24/11/2025 09:48:27 : epoch 692429df : compute-1 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Nov 24 09:48:27 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[231020]: 24/11/2025 09:48:27 : epoch 692429df : compute-1 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Nov 24 09:48:27 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[231020]: 24/11/2025 09:48:27 : epoch 692429df : compute-1 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Nov 24 09:48:27 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[231020]: 24/11/2025 09:48:27 : epoch 692429df : compute-1 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Nov 24 09:48:27 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[231020]: 24/11/2025 09:48:27 : epoch 692429df : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Nov 24 09:48:27 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[231020]: 24/11/2025 09:48:27 : epoch 692429df : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Nov 24 09:48:27 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[231020]: 24/11/2025 09:48:27 : epoch 692429df : compute-1 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Nov 24 09:48:27 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[231020]: 24/11/2025 09:48:27 : epoch 692429df : compute-1 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Nov 24 09:48:27 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[231020]: 24/11/2025 09:48:27 : epoch 692429df : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Nov 24 09:48:27 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[231020]: 24/11/2025 09:48:27 : epoch 692429df : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Nov 24 09:48:27 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[231020]: 24/11/2025 09:48:27 : epoch 692429df : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Nov 24 09:48:27 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[231020]: 24/11/2025 09:48:27 : epoch 692429df : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Nov 24 09:48:27 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[231020]: 24/11/2025 09:48:27 : epoch 692429df : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Nov 24 09:48:27 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[231020]: 24/11/2025 09:48:27 : epoch 692429df : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Nov 24 09:48:27 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[231020]: 24/11/2025 09:48:27 : epoch 692429df : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0934000df0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:48:27 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[231020]: 24/11/2025 09:48:27 : epoch 692429df : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f09200016c0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:48:28 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:48:28 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:48:28 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:48:28.010 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:48:28 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[231020]: 24/11/2025 09:48:28 : epoch 692429df : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0910000b60 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:48:29 compute-1 ceph-mon[80009]: pgmap v637: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 1023 B/s wr, 3 op/s
Nov 24 09:48:29 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:48:29 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:48:29 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:48:29 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:48:29.335 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:48:29 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-haproxy-nfs-cephfs-compute-1-rsdpvy[85975]: [WARNING] 327/094829 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Nov 24 09:48:29 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[231020]: 24/11/2025 09:48:29 : epoch 692429df : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0908000b60 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:48:29 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[231020]: 24/11/2025 09:48:29 : epoch 692429df : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0914000fa0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:48:30 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:48:30 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:48:30 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:48:30.013 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:48:30 compute-1 podman[231185]: 2025-11-24 09:48:30.310739836 +0000 UTC m=+0.050284437 container health_status 6794d9d2f16f865643977633ea2bfec0506af6c4f15262c278b71a4c0754ea0b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent)
Nov 24 09:48:30 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 09:48:30 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:48:30 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[231020]: 24/11/2025 09:48:30 : epoch 692429df : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0920001fe0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:48:31 compute-1 ceph-mon[80009]: pgmap v638: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 938 B/s wr, 2 op/s
Nov 24 09:48:31 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:48:31 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:48:31 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:48:31 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:48:31.338 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:48:31 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[231020]: 24/11/2025 09:48:31 : epoch 692429df : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0908000b60 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:48:31 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[231020]: 24/11/2025 09:48:31 : epoch 692429df : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0908000b60 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:48:32 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:48:32 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:48:32 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:48:32.017 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:48:32 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[231020]: 24/11/2025 09:48:32 : epoch 692429df : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0914001ac0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:48:33 compute-1 ceph-mon[80009]: pgmap v639: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 938 B/s wr, 2 op/s
Nov 24 09:48:33 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:48:33 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:48:33 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:48:33.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:48:33 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[231020]: 24/11/2025 09:48:33 : epoch 692429df : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0920001fe0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:48:33 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[231020]: 24/11/2025 09:48:33 : epoch 692429df : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f09100016a0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:48:34 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:48:34 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:48:34 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:48:34.020 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:48:34 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:48:34 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[231020]: 24/11/2025 09:48:34 : epoch 692429df : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0908001fc0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:48:35 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:48:35 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:48:35 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:48:35.344 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:48:35 compute-1 ceph-mon[80009]: pgmap v640: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Nov 24 09:48:35 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[231020]: 24/11/2025 09:48:35 : epoch 692429df : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0914001ac0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:48:35 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[231020]: 24/11/2025 09:48:35 : epoch 692429df : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0920001fe0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:48:36 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:48:36 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 09:48:36 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:48:36.022 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 09:48:36 compute-1 ceph-mon[80009]: pgmap v641: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Nov 24 09:48:36 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[231020]: 24/11/2025 09:48:36 : epoch 692429df : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0910002050 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:48:37 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:48:37 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:48:37 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:48:37.346 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:48:37 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[231020]: 24/11/2025 09:48:37 : epoch 692429df : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0914001ac0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:48:37 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[231020]: 24/11/2025 09:48:37 : epoch 692429df : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0908001fc0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:48:38 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:48:38 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:48:38 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:48:38.025 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:48:38 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[231020]: 24/11/2025 09:48:38 : epoch 692429df : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0920001fe0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:48:39 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:48:39 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:48:39 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:48:39 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:48:39.348 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:48:39 compute-1 ceph-mon[80009]: pgmap v642: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 426 B/s wr, 1 op/s
Nov 24 09:48:39 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[231020]: 24/11/2025 09:48:39 : epoch 692429df : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0910002050 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:48:39 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[231020]: 24/11/2025 09:48:39 : epoch 692429df : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0914002f50 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:48:40 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:48:40 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:48:40 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:48:40.029 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:48:40 compute-1 ceph-mon[80009]: pgmap v643: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Nov 24 09:48:40 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[231020]: 24/11/2025 09:48:40 : epoch 692429df : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0908001fc0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:48:41 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:48:41 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:48:41 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:48:41.349 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:48:41 compute-1 kernel: ganesha.nfsd[231174]: segfault at 50 ip 00007f09e400932e sp 00007f09ad7f9210 error 4 in libntirpc.so.5.8[7f09e3fee000+2c000] likely on CPU 5 (core 0, socket 5)
Nov 24 09:48:41 compute-1 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Nov 24 09:48:41 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[231020]: 24/11/2025 09:48:41 : epoch 692429df : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0920001fe0 fd 37 proxy ignored for local
Nov 24 09:48:41 compute-1 systemd[1]: Started Process Core Dump (PID 231210/UID 0).
Nov 24 09:48:42 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:48:42 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:48:42 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:48:42.031 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:48:42 compute-1 systemd-coredump[231211]: Process 231024 (ganesha.nfsd) of user 0 dumped core.
                                                    
                                                    Stack trace of thread 45:
                                                    #0  0x00007f09e400932e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)
                                                    ELF object binary architecture: AMD x86-64
Nov 24 09:48:43 compute-1 systemd[1]: systemd-coredump@11-231210-0.service: Deactivated successfully.
Nov 24 09:48:43 compute-1 podman[231216]: 2025-11-24 09:48:43.067435141 +0000 UTC m=+0.022985916 container died 05c20cd827e532ef8524f3d7dc6b70bb0e75e399c723c3f02ef2e2e54e9f8309 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 09:48:43 compute-1 systemd[1]: var-lib-containers-storage-overlay-3fddea2db1f8d25f3598b15284341926374a5845a0f9d9674c2a54b8006c3ae3-merged.mount: Deactivated successfully.
Nov 24 09:48:43 compute-1 podman[231216]: 2025-11-24 09:48:43.100259558 +0000 UTC m=+0.055810323 container remove 05c20cd827e532ef8524f3d7dc6b70bb0e75e399c723c3f02ef2e2e54e9f8309 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 09:48:43 compute-1 systemd[1]: ceph-84a084c3-61a7-5de7-8207-1f88efa59a64@nfs.cephfs.0.0.compute-1.vvoanr.service: Main process exited, code=exited, status=139/n/a
Nov 24 09:48:43 compute-1 systemd[1]: ceph-84a084c3-61a7-5de7-8207-1f88efa59a64@nfs.cephfs.0.0.compute-1.vvoanr.service: Failed with result 'exit-code'.
Nov 24 09:48:43 compute-1 systemd[1]: ceph-84a084c3-61a7-5de7-8207-1f88efa59a64@nfs.cephfs.0.0.compute-1.vvoanr.service: Consumed 1.175s CPU time.
Nov 24 09:48:43 compute-1 ceph-mon[80009]: pgmap v644: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Nov 24 09:48:43 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:48:43 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:48:43 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:48:43.351 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:48:43 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-haproxy-nfs-cephfs-compute-1-rsdpvy[85975]: [WARNING] 327/094843 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Nov 24 09:48:44 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:48:44 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:48:44 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:48:44.035 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:48:44 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:48:44 compute-1 sudo[231260]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 09:48:44 compute-1 sudo[231260]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:48:44 compute-1 sudo[231260]: pam_unix(sudo:session): session closed for user root
Nov 24 09:48:45 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:48:45 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:48:45 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:48:45.353 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:48:45 compute-1 ceph-mon[80009]: pgmap v645: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:48:45 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 09:48:45 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:48:46 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:48:46 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:48:46 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:48:46.039 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:48:46 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:48:46 compute-1 ceph-mon[80009]: pgmap v646: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:48:47 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:48:47 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:48:47 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:48:47.355 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:48:47 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-haproxy-nfs-cephfs-compute-1-rsdpvy[85975]: [WARNING] 327/094847 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Nov 24 09:48:48 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:48:48 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:48:48 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:48:48.042 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:48:49 compute-1 ceph-mon[80009]: pgmap v647: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Nov 24 09:48:49 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:48:49 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:48:49 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:48:49 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:48:49.358 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:48:50 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:48:50 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:48:50 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:48:50.044 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:48:51 compute-1 ceph-mon[80009]: pgmap v648: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Nov 24 09:48:51 compute-1 podman[231289]: 2025-11-24 09:48:51.354539791 +0000 UTC m=+0.052858951 container health_status 16798e42088c21440960ac0ba4f86339f9edab5c078277a1dcf4fb01c220329c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 24 09:48:51 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:48:51 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:48:51 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:48:51.360 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:48:52 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:48:52 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:48:52 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:48:52.047 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:48:53 compute-1 systemd[1]: ceph-84a084c3-61a7-5de7-8207-1f88efa59a64@nfs.cephfs.0.0.compute-1.vvoanr.service: Scheduled restart job, restart counter is at 12.
Nov 24 09:48:53 compute-1 systemd[1]: Stopped Ceph nfs.cephfs.0.0.compute-1.vvoanr for 84a084c3-61a7-5de7-8207-1f88efa59a64.
Nov 24 09:48:53 compute-1 systemd[1]: ceph-84a084c3-61a7-5de7-8207-1f88efa59a64@nfs.cephfs.0.0.compute-1.vvoanr.service: Consumed 1.175s CPU time.
Nov 24 09:48:53 compute-1 systemd[1]: Starting Ceph nfs.cephfs.0.0.compute-1.vvoanr for 84a084c3-61a7-5de7-8207-1f88efa59a64...
Nov 24 09:48:53 compute-1 ceph-mon[80009]: pgmap v649: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Nov 24 09:48:53 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:48:53 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:48:53 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:48:53.362 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:48:53 compute-1 podman[231360]: 2025-11-24 09:48:53.435691878 +0000 UTC m=+0.041422218 container create 207f0671cdd1019d343023bf2aea63c935b49d50bb7f8dda80f4c66fa247beae (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 09:48:53 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/54dcb572b6e09d91400a89483f6b7fa20b52682d724f0b868503f64177b1bf6b/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Nov 24 09:48:53 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/54dcb572b6e09d91400a89483f6b7fa20b52682d724f0b868503f64177b1bf6b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 09:48:53 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/54dcb572b6e09d91400a89483f6b7fa20b52682d724f0b868503f64177b1bf6b/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 09:48:53 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/54dcb572b6e09d91400a89483f6b7fa20b52682d724f0b868503f64177b1bf6b/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.0.0.compute-1.vvoanr-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 09:48:53 compute-1 podman[231360]: 2025-11-24 09:48:53.481053674 +0000 UTC m=+0.086784024 container init 207f0671cdd1019d343023bf2aea63c935b49d50bb7f8dda80f4c66fa247beae (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 09:48:53 compute-1 podman[231360]: 2025-11-24 09:48:53.487271107 +0000 UTC m=+0.093001427 container start 207f0671cdd1019d343023bf2aea63c935b49d50bb7f8dda80f4c66fa247beae (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 09:48:53 compute-1 bash[231360]: 207f0671cdd1019d343023bf2aea63c935b49d50bb7f8dda80f4c66fa247beae
Nov 24 09:48:53 compute-1 podman[231360]: 2025-11-24 09:48:53.415345099 +0000 UTC m=+0.021075429 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 09:48:53 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[231376]: 24/11/2025 09:48:53 : epoch 69242a05 : compute-1 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Nov 24 09:48:53 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[231376]: 24/11/2025 09:48:53 : epoch 69242a05 : compute-1 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Nov 24 09:48:53 compute-1 systemd[1]: Started Ceph nfs.cephfs.0.0.compute-1.vvoanr for 84a084c3-61a7-5de7-8207-1f88efa59a64.
Nov 24 09:48:53 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[231376]: 24/11/2025 09:48:53 : epoch 69242a05 : compute-1 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Nov 24 09:48:53 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[231376]: 24/11/2025 09:48:53 : epoch 69242a05 : compute-1 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Nov 24 09:48:53 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[231376]: 24/11/2025 09:48:53 : epoch 69242a05 : compute-1 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Nov 24 09:48:53 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[231376]: 24/11/2025 09:48:53 : epoch 69242a05 : compute-1 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Nov 24 09:48:53 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[231376]: 24/11/2025 09:48:53 : epoch 69242a05 : compute-1 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Nov 24 09:48:53 compute-1 podman[231373]: 2025-11-24 09:48:53.554169302 +0000 UTC m=+0.088349853 container health_status c4a7fece2a8abbd773f184380ebf0443df5298e1c3e7a580495db5c46749acd2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 24 09:48:53 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[231376]: 24/11/2025 09:48:53 : epoch 69242a05 : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 09:48:54 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:48:54 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:48:54 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:48:54.051 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:48:54 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:48:55 compute-1 ceph-mon[80009]: pgmap v650: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 85 B/s wr, 0 op/s
Nov 24 09:48:55 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:48:55 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:48:55 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:48:55.364 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:48:56 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:48:56 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:48:56 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:48:56.054 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:48:57 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:48:57 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:48:57 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:48:57.367 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:48:57 compute-1 ceph-mon[80009]: pgmap v651: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 85 B/s wr, 0 op/s
Nov 24 09:48:58 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:48:58 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:48:58 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:48:58.056 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:48:58 compute-1 ceph-mon[80009]: pgmap v652: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 767 B/s wr, 2 op/s
Nov 24 09:48:59 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:48:59 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:48:59 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:48:59 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:48:59.369 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:48:59 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[231376]: 24/11/2025 09:48:59 : epoch 69242a05 : compute-1 : ganesha.nfsd-2[main] rados_kv_traverse :CLIENT ID :EVENT :Failed to lst kv ret=-2
Nov 24 09:48:59 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[231376]: 24/11/2025 09:48:59 : epoch 69242a05 : compute-1 : ganesha.nfsd-2[main] rados_cluster_read_clids :CLIENT ID :EVENT :Failed to traverse recovery db: -2
Nov 24 09:48:59 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[231376]: 24/11/2025 09:48:59 : epoch 69242a05 : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 09:48:59 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[231376]: 24/11/2025 09:48:59 : epoch 69242a05 : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 09:48:59 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[231376]: 24/11/2025 09:48:59 : epoch 69242a05 : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 24 09:49:00 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:49:00 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:49:00 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:49:00.059 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:49:00 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 09:49:00 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:49:01 compute-1 ceph-mon[80009]: pgmap v653: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.6 KiB/s rd, 767 B/s wr, 2 op/s
Nov 24 09:49:01 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:49:01 compute-1 ceph-mon[80009]: from='client.? 192.168.122.10:0/3169303797' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 24 09:49:01 compute-1 ceph-mon[80009]: from='client.? 192.168.122.10:0/3169303797' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 24 09:49:01 compute-1 podman[231445]: 2025-11-24 09:49:01.306470012 +0000 UTC m=+0.049181391 container health_status 6794d9d2f16f865643977633ea2bfec0506af6c4f15262c278b71a4c0754ea0b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Nov 24 09:49:01 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:49:01 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:49:01 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:49:01.371 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:49:01 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[231376]: 24/11/2025 09:49:01 : epoch 69242a05 : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 09:49:01 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[231376]: 24/11/2025 09:49:01 : epoch 69242a05 : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 09:49:01 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[231376]: 24/11/2025 09:49:01 : epoch 69242a05 : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 09:49:02 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:49:02 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:49:02 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:49:02.062 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:49:03 compute-1 ceph-mon[80009]: pgmap v654: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 1.2 KiB/s wr, 3 op/s
Nov 24 09:49:03 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:49:03 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:49:03 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:49:03.374 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:49:04 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:49:04 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 09:49:04 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:49:04.066 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 09:49:04 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:49:04 compute-1 sudo[231467]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 09:49:04 compute-1 sudo[231467]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:49:04 compute-1 sudo[231467]: pam_unix(sudo:session): session closed for user root
Nov 24 09:49:05 compute-1 ceph-mon[80009]: pgmap v655: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 1.1 KiB/s wr, 3 op/s
Nov 24 09:49:05 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:49:05 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:49:05 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:49:05.376 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:49:05 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-haproxy-nfs-cephfs-compute-1-rsdpvy[85975]: [WARNING] 327/094905 (4) : Server backend/nfs.cephfs.2 is UP, reason: Layer4 check passed, check duration: 0ms. 2 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Nov 24 09:49:06 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:49:06 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:49:06 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:49:06.070 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:49:07 compute-1 ceph-mon[80009]: pgmap v656: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 1.1 KiB/s wr, 3 op/s
Nov 24 09:49:07 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:49:07 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:49:07 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:49:07.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:49:07 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[231376]: 24/11/2025 09:49:07 : epoch 69242a05 : compute-1 : ganesha.nfsd-2[main] rados_cluster_end_grace :CLIENT ID :EVENT :Failed to remove rec-000000000000001f:nfs.cephfs.0: -2
Nov 24 09:49:07 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[231376]: 24/11/2025 09:49:07 : epoch 69242a05 : compute-1 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Nov 24 09:49:07 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[231376]: 24/11/2025 09:49:07 : epoch 69242a05 : compute-1 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Nov 24 09:49:07 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[231376]: 24/11/2025 09:49:07 : epoch 69242a05 : compute-1 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Nov 24 09:49:07 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[231376]: 24/11/2025 09:49:07 : epoch 69242a05 : compute-1 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Nov 24 09:49:07 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[231376]: 24/11/2025 09:49:07 : epoch 69242a05 : compute-1 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Nov 24 09:49:07 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[231376]: 24/11/2025 09:49:07 : epoch 69242a05 : compute-1 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Nov 24 09:49:07 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[231376]: 24/11/2025 09:49:07 : epoch 69242a05 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Nov 24 09:49:07 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[231376]: 24/11/2025 09:49:07 : epoch 69242a05 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 24 09:49:07 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[231376]: 24/11/2025 09:49:07 : epoch 69242a05 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 24 09:49:07 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[231376]: 24/11/2025 09:49:07 : epoch 69242a05 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 24 09:49:07 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[231376]: 24/11/2025 09:49:07 : epoch 69242a05 : compute-1 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Nov 24 09:49:07 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[231376]: 24/11/2025 09:49:07 : epoch 69242a05 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 24 09:49:07 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[231376]: 24/11/2025 09:49:07 : epoch 69242a05 : compute-1 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Nov 24 09:49:07 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[231376]: 24/11/2025 09:49:07 : epoch 69242a05 : compute-1 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Nov 24 09:49:07 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[231376]: 24/11/2025 09:49:07 : epoch 69242a05 : compute-1 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Nov 24 09:49:07 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[231376]: 24/11/2025 09:49:07 : epoch 69242a05 : compute-1 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Nov 24 09:49:07 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[231376]: 24/11/2025 09:49:07 : epoch 69242a05 : compute-1 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Nov 24 09:49:07 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[231376]: 24/11/2025 09:49:07 : epoch 69242a05 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Nov 24 09:49:07 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[231376]: 24/11/2025 09:49:07 : epoch 69242a05 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Nov 24 09:49:07 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[231376]: 24/11/2025 09:49:07 : epoch 69242a05 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Nov 24 09:49:07 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[231376]: 24/11/2025 09:49:07 : epoch 69242a05 : compute-1 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Nov 24 09:49:07 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[231376]: 24/11/2025 09:49:07 : epoch 69242a05 : compute-1 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Nov 24 09:49:07 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[231376]: 24/11/2025 09:49:07 : epoch 69242a05 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Nov 24 09:49:07 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[231376]: 24/11/2025 09:49:07 : epoch 69242a05 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Nov 24 09:49:07 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[231376]: 24/11/2025 09:49:07 : epoch 69242a05 : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Nov 24 09:49:07 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[231376]: 24/11/2025 09:49:07 : epoch 69242a05 : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Nov 24 09:49:07 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[231376]: 24/11/2025 09:49:07 : epoch 69242a05 : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Nov 24 09:49:07 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[231376]: 24/11/2025 09:49:07 : epoch 69242a05 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f06fc000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:49:07 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[231376]: 24/11/2025 09:49:07 : epoch 69242a05 : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f06f0001c00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:49:08 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:49:08 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:49:08 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:49:08.073 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:49:08 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[231376]: 24/11/2025 09:49:08 : epoch 69242a05 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f06d8000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:49:09 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:49:09 compute-1 ceph-mon[80009]: pgmap v657: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 3.7 KiB/s rd, 1.7 KiB/s wr, 5 op/s
Nov 24 09:49:09 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:49:09 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:49:09 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:49:09.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:49:09 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-haproxy-nfs-cephfs-compute-1-rsdpvy[85975]: [WARNING] 327/094909 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Nov 24 09:49:09 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[231376]: 24/11/2025 09:49:09 : epoch 69242a05 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f06d0000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:49:09 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[231376]: 24/11/2025 09:49:09 : epoch 69242a05 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f06dc000fa0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:49:10 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:49:10 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:49:10 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:49:10.075 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:49:10 compute-1 nova_compute[230010]: 2025-11-24 09:49:10.660 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 09:49:10 compute-1 nova_compute[230010]: 2025-11-24 09:49:10.673 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 09:49:10 compute-1 nova_compute[230010]: 2025-11-24 09:49:10.673 230014 DEBUG nova.compute.manager [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 24 09:49:10 compute-1 nova_compute[230010]: 2025-11-24 09:49:10.673 230014 DEBUG nova.compute.manager [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 24 09:49:10 compute-1 nova_compute[230010]: 2025-11-24 09:49:10.688 230014 DEBUG nova.compute.manager [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 24 09:49:10 compute-1 nova_compute[230010]: 2025-11-24 09:49:10.688 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 09:49:10 compute-1 nova_compute[230010]: 2025-11-24 09:49:10.689 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 09:49:10 compute-1 nova_compute[230010]: 2025-11-24 09:49:10.689 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 09:49:10 compute-1 nova_compute[230010]: 2025-11-24 09:49:10.689 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 09:49:10 compute-1 nova_compute[230010]: 2025-11-24 09:49:10.689 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 09:49:10 compute-1 nova_compute[230010]: 2025-11-24 09:49:10.689 230014 DEBUG nova.compute.manager [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 24 09:49:10 compute-1 nova_compute[230010]: 2025-11-24 09:49:10.690 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 09:49:10 compute-1 nova_compute[230010]: 2025-11-24 09:49:10.708 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 09:49:10 compute-1 nova_compute[230010]: 2025-11-24 09:49:10.708 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 09:49:10 compute-1 nova_compute[230010]: 2025-11-24 09:49:10.708 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 09:49:10 compute-1 nova_compute[230010]: 2025-11-24 09:49:10.709 230014 DEBUG nova.compute.resource_tracker [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 24 09:49:10 compute-1 nova_compute[230010]: 2025-11-24 09:49:10.709 230014 DEBUG oslo_concurrency.processutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 09:49:10 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[231376]: 24/11/2025 09:49:10 : epoch 69242a05 : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f06f0001c00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:49:11 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 24 09:49:11 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1645154969' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 09:49:11 compute-1 nova_compute[230010]: 2025-11-24 09:49:11.138 230014 DEBUG oslo_concurrency.processutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.429s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 09:49:11 compute-1 nova_compute[230010]: 2025-11-24 09:49:11.344 230014 WARNING nova.virt.libvirt.driver [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 24 09:49:11 compute-1 nova_compute[230010]: 2025-11-24 09:49:11.345 230014 DEBUG nova.compute.resource_tracker [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=5229MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 24 09:49:11 compute-1 nova_compute[230010]: 2025-11-24 09:49:11.346 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 09:49:11 compute-1 nova_compute[230010]: 2025-11-24 09:49:11.346 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 09:49:11 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:49:11 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:49:11 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:49:11.383 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:49:11 compute-1 ceph-mon[80009]: pgmap v658: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 1023 B/s wr, 3 op/s
Nov 24 09:49:11 compute-1 ceph-mon[80009]: from='client.? 192.168.122.101:0/1645154969' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 09:49:11 compute-1 ceph-mon[80009]: from='client.? 192.168.122.100:0/4210752601' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 09:49:11 compute-1 nova_compute[230010]: 2025-11-24 09:49:11.429 230014 DEBUG nova.compute.resource_tracker [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 24 09:49:11 compute-1 nova_compute[230010]: 2025-11-24 09:49:11.430 230014 DEBUG nova.compute.resource_tracker [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 24 09:49:11 compute-1 nova_compute[230010]: 2025-11-24 09:49:11.443 230014 DEBUG oslo_concurrency.processutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 09:49:11 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 24 09:49:11 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1597135010' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 09:49:11 compute-1 nova_compute[230010]: 2025-11-24 09:49:11.935 230014 DEBUG oslo_concurrency.processutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.492s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 09:49:11 compute-1 nova_compute[230010]: 2025-11-24 09:49:11.941 230014 DEBUG nova.compute.provider_tree [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Inventory has not changed in ProviderTree for provider: 1b7b0f22-dba8-42a8-9de3-763c9152946e update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 24 09:49:11 compute-1 nova_compute[230010]: 2025-11-24 09:49:11.956 230014 DEBUG nova.scheduler.client.report [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Inventory has not changed for provider 1b7b0f22-dba8-42a8-9de3-763c9152946e based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 24 09:49:11 compute-1 nova_compute[230010]: 2025-11-24 09:49:11.957 230014 DEBUG nova.compute.resource_tracker [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 24 09:49:11 compute-1 nova_compute[230010]: 2025-11-24 09:49:11.957 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.612s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 09:49:11 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[231376]: 24/11/2025 09:49:11 : epoch 69242a05 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f06d80016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:49:11 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[231376]: 24/11/2025 09:49:11 : epoch 69242a05 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f06d00016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:49:12 compute-1 nova_compute[230010]: 2025-11-24 09:49:12.033 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 09:49:12 compute-1 nova_compute[230010]: 2025-11-24 09:49:12.033 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 09:49:12 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:49:12 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:49:12 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:49:12.079 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:49:12 compute-1 ceph-mon[80009]: from='client.? 192.168.122.102:0/2451919561' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 09:49:12 compute-1 ceph-mon[80009]: from='client.? 192.168.122.101:0/1597135010' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 09:49:12 compute-1 ceph-mon[80009]: from='client.? 192.168.122.100:0/3250163179' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 09:49:12 compute-1 ceph-mon[80009]: pgmap v659: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.6 KiB/s rd, 1023 B/s wr, 3 op/s
Nov 24 09:49:12 compute-1 ceph-mon[80009]: from='client.? 192.168.122.102:0/3464303203' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 09:49:12 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[231376]: 24/11/2025 09:49:12 : epoch 69242a05 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f06dc001ac0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:49:13 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:49:13 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:49:13 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:49:13.385 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:49:13 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[231376]: 24/11/2025 09:49:13 : epoch 69242a05 : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f06f0001c00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:49:13 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[231376]: 24/11/2025 09:49:13 : epoch 69242a05 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f06d80016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:49:14 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:49:14 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:49:14 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:49:14.083 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:49:14 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:49:14 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[231376]: 24/11/2025 09:49:14 : epoch 69242a05 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f06d00016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:49:15 compute-1 ceph-mon[80009]: pgmap v660: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 597 B/s wr, 2 op/s
Nov 24 09:49:15 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:49:15 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:49:15 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:49:15.387 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:49:15 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 09:49:15 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:49:15 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[231376]: 24/11/2025 09:49:15 : epoch 69242a05 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f06dc001ac0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:49:15 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[231376]: 24/11/2025 09:49:15 : epoch 69242a05 : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f06f0001c00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:49:16 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:49:16 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:49:16 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:49:16.087 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:49:16 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:49:16 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[231376]: 24/11/2025 09:49:16 : epoch 69242a05 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f06d80016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:49:17 compute-1 ceph-mon[80009]: pgmap v661: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 597 B/s wr, 2 op/s
Nov 24 09:49:17 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:49:17 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:49:17 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:49:17.390 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:49:17 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[231376]: 24/11/2025 09:49:17 : epoch 69242a05 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f06d00016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:49:17 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[231376]: 24/11/2025 09:49:17 : epoch 69242a05 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f06dc001ac0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:49:18 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:49:18 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:49:18 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:49:18.090 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:49:18 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[231376]: 24/11/2025 09:49:18 : epoch 69242a05 : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f06f0001c00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:49:19 compute-1 ceph-mon[80009]: pgmap v662: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 597 B/s wr, 2 op/s
Nov 24 09:49:19 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:49:19 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:49:19 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:49:19 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:49:19.392 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:49:19 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[231376]: 24/11/2025 09:49:19 : epoch 69242a05 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f06d8002b10 fd 38 proxy ignored for local
Nov 24 09:49:19 compute-1 kernel: ganesha.nfsd[231509]: segfault at 50 ip 00007f07a71fd32e sp 00007f075affc210 error 4 in libntirpc.so.5.8[7f07a71e2000+2c000] likely on CPU 0 (core 0, socket 0)
Nov 24 09:49:19 compute-1 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Nov 24 09:49:20 compute-1 systemd[1]: Started Process Core Dump (PID 231560/UID 0).
Nov 24 09:49:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:49:20.049 142336 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 09:49:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:49:20.051 142336 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 09:49:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:49:20.051 142336 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 09:49:20 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:49:20 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:49:20 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:49:20.092 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:49:20 compute-1 systemd-coredump[231561]: Process 231390 (ganesha.nfsd) of user 0 dumped core.
                                                    
                                                    Stack trace of thread 55:
                                                    #0  0x00007f07a71fd32e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)
                                                    ELF object binary architecture: AMD x86-64
Nov 24 09:49:21 compute-1 systemd[1]: systemd-coredump@12-231560-0.service: Deactivated successfully.
Nov 24 09:49:21 compute-1 systemd[1]: systemd-coredump@12-231560-0.service: Consumed 1.015s CPU time.
Nov 24 09:49:21 compute-1 podman[231566]: 2025-11-24 09:49:21.105560516 +0000 UTC m=+0.024718038 container died 207f0671cdd1019d343023bf2aea63c935b49d50bb7f8dda80f4c66fa247beae (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 09:49:21 compute-1 systemd[1]: var-lib-containers-storage-overlay-54dcb572b6e09d91400a89483f6b7fa20b52682d724f0b868503f64177b1bf6b-merged.mount: Deactivated successfully.
Nov 24 09:49:21 compute-1 podman[231566]: 2025-11-24 09:49:21.136019935 +0000 UTC m=+0.055177437 container remove 207f0671cdd1019d343023bf2aea63c935b49d50bb7f8dda80f4c66fa247beae (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.build-date=20250325)
Nov 24 09:49:21 compute-1 systemd[1]: ceph-84a084c3-61a7-5de7-8207-1f88efa59a64@nfs.cephfs.0.0.compute-1.vvoanr.service: Main process exited, code=exited, status=139/n/a
Nov 24 09:49:21 compute-1 systemd[1]: ceph-84a084c3-61a7-5de7-8207-1f88efa59a64@nfs.cephfs.0.0.compute-1.vvoanr.service: Failed with result 'exit-code'.
Nov 24 09:49:21 compute-1 systemd[1]: ceph-84a084c3-61a7-5de7-8207-1f88efa59a64@nfs.cephfs.0.0.compute-1.vvoanr.service: Consumed 1.154s CPU time.
Nov 24 09:49:21 compute-1 ceph-mon[80009]: pgmap v663: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 B/s wr, 0 op/s
Nov 24 09:49:21 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:49:21 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:49:21 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:49:21.395 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:49:22 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:49:22 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:49:22 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:49:22.095 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:49:22 compute-1 podman[231609]: 2025-11-24 09:49:22.308812301 +0000 UTC m=+0.052485932 container health_status 16798e42088c21440960ac0ba4f86339f9edab5c078277a1dcf4fb01c220329c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 24 09:49:23 compute-1 ceph-mon[80009]: pgmap v664: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Nov 24 09:49:23 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:49:23 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:49:23 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:49:23.397 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:49:24 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:49:24 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:49:24 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:49:24.098 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:49:24 compute-1 sudo[231631]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 09:49:24 compute-1 sudo[231631]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:49:24 compute-1 sudo[231631]: pam_unix(sudo:session): session closed for user root
Nov 24 09:49:24 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:49:24 compute-1 sudo[231669]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Nov 24 09:49:24 compute-1 podman[231633]: 2025-11-24 09:49:24.343396993 +0000 UTC m=+0.082138711 container health_status c4a7fece2a8abbd773f184380ebf0443df5298e1c3e7a580495db5c46749acd2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3)
Nov 24 09:49:24 compute-1 sudo[231669]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:49:24 compute-1 sudo[231707]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 09:49:24 compute-1 sudo[231707]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:49:24 compute-1 sudo[231707]: pam_unix(sudo:session): session closed for user root
Nov 24 09:49:24 compute-1 sudo[231669]: pam_unix(sudo:session): session closed for user root
Nov 24 09:49:25 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 24 09:49:25 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 09:49:25 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Nov 24 09:49:25 compute-1 ceph-mon[80009]: log_channel(audit) log [INF] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 09:49:25 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Nov 24 09:49:25 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Nov 24 09:49:25 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Nov 24 09:49:25 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 09:49:25 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Nov 24 09:49:25 compute-1 ceph-mon[80009]: log_channel(audit) log [INF] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 09:49:25 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 24 09:49:25 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 09:49:25 compute-1 ceph-mon[80009]: pgmap v665: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:49:25 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 09:49:25 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 09:49:25 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:49:25 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:49:25 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 09:49:25 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 09:49:25 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 09:49:25 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:49:25 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:49:25 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:49:25.399 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:49:25 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-haproxy-nfs-cephfs-compute-1-rsdpvy[85975]: [WARNING] 327/094925 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Nov 24 09:49:26 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:49:26 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:49:26 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:49:26.100 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:49:27 compute-1 ceph-mon[80009]: pgmap v666: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:49:27 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:49:27 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:49:27 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:49:27.401 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:49:28 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:49:28 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:49:28 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:49:28.104 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:49:29 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 24 09:49:29 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 24 09:49:29 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:49:29 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:49:29 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:49:29 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:49:29.403 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:49:29 compute-1 ceph-mon[80009]: pgmap v667: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Nov 24 09:49:29 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:49:29 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:49:29 compute-1 sudo[231765]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 24 09:49:29 compute-1 sudo[231765]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:49:29 compute-1 sudo[231765]: pam_unix(sudo:session): session closed for user root
Nov 24 09:49:30 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:49:30 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:49:30 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:49:30.107 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:49:30 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 09:49:30 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:49:30 compute-1 ceph-mon[80009]: pgmap v668: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:49:31 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:49:31 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:49:31 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:49:31.406 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:49:31 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:49:31 compute-1 systemd[1]: ceph-84a084c3-61a7-5de7-8207-1f88efa59a64@nfs.cephfs.0.0.compute-1.vvoanr.service: Scheduled restart job, restart counter is at 13.
Nov 24 09:49:31 compute-1 systemd[1]: Stopped Ceph nfs.cephfs.0.0.compute-1.vvoanr for 84a084c3-61a7-5de7-8207-1f88efa59a64.
Nov 24 09:49:31 compute-1 systemd[1]: ceph-84a084c3-61a7-5de7-8207-1f88efa59a64@nfs.cephfs.0.0.compute-1.vvoanr.service: Consumed 1.154s CPU time.
Nov 24 09:49:31 compute-1 systemd[1]: Starting Ceph nfs.cephfs.0.0.compute-1.vvoanr for 84a084c3-61a7-5de7-8207-1f88efa59a64...
Nov 24 09:49:31 compute-1 podman[231791]: 2025-11-24 09:49:31.560226857 +0000 UTC m=+0.051326052 container health_status 6794d9d2f16f865643977633ea2bfec0506af6c4f15262c278b71a4c0754ea0b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 24 09:49:31 compute-1 podman[231855]: 2025-11-24 09:49:31.685732603 +0000 UTC m=+0.036144760 container create 1da4f2ab964ee7c0bfd002d80d7b0b1988bc7d8f299c0f28278aaaeea008eea6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 09:49:31 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7965caac272a713787c8d26f8d5128ae9091a3b21b0f38c635c05f705b356082/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Nov 24 09:49:31 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7965caac272a713787c8d26f8d5128ae9091a3b21b0f38c635c05f705b356082/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 09:49:31 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7965caac272a713787c8d26f8d5128ae9091a3b21b0f38c635c05f705b356082/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 09:49:31 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7965caac272a713787c8d26f8d5128ae9091a3b21b0f38c635c05f705b356082/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.0.0.compute-1.vvoanr-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 09:49:31 compute-1 podman[231855]: 2025-11-24 09:49:31.730276468 +0000 UTC m=+0.080688645 container init 1da4f2ab964ee7c0bfd002d80d7b0b1988bc7d8f299c0f28278aaaeea008eea6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 09:49:31 compute-1 podman[231855]: 2025-11-24 09:49:31.738188392 +0000 UTC m=+0.088600549 container start 1da4f2ab964ee7c0bfd002d80d7b0b1988bc7d8f299c0f28278aaaeea008eea6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 24 09:49:31 compute-1 bash[231855]: 1da4f2ab964ee7c0bfd002d80d7b0b1988bc7d8f299c0f28278aaaeea008eea6
Nov 24 09:49:31 compute-1 podman[231855]: 2025-11-24 09:49:31.670545869 +0000 UTC m=+0.020958056 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 09:49:31 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[231871]: 24/11/2025 09:49:31 : epoch 69242a2b : compute-1 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Nov 24 09:49:31 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[231871]: 24/11/2025 09:49:31 : epoch 69242a2b : compute-1 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Nov 24 09:49:31 compute-1 systemd[1]: Started Ceph nfs.cephfs.0.0.compute-1.vvoanr for 84a084c3-61a7-5de7-8207-1f88efa59a64.
Nov 24 09:49:31 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[231871]: 24/11/2025 09:49:31 : epoch 69242a2b : compute-1 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Nov 24 09:49:31 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[231871]: 24/11/2025 09:49:31 : epoch 69242a2b : compute-1 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Nov 24 09:49:31 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[231871]: 24/11/2025 09:49:31 : epoch 69242a2b : compute-1 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Nov 24 09:49:31 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[231871]: 24/11/2025 09:49:31 : epoch 69242a2b : compute-1 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Nov 24 09:49:31 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[231871]: 24/11/2025 09:49:31 : epoch 69242a2b : compute-1 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Nov 24 09:49:31 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[231871]: 24/11/2025 09:49:31 : epoch 69242a2b : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 09:49:32 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:49:32 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:49:32 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:49:32.110 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:49:32 compute-1 ceph-mon[80009]: pgmap v669: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Nov 24 09:49:33 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:49:33 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:49:33 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:49:33.409 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:49:34 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:49:34 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:49:34 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:49:34.114 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:49:34 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:49:35 compute-1 ceph-mon[80009]: pgmap v670: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Nov 24 09:49:35 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:49:35 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:49:35 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:49:35.413 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:49:36 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:49:36 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:49:36 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:49:36.118 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:49:37 compute-1 ceph-mon[80009]: pgmap v671: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Nov 24 09:49:37 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:49:37 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:49:37 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:49:37.415 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:49:37 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[231871]: 24/11/2025 09:49:37 : epoch 69242a2b : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 09:49:37 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[231871]: 24/11/2025 09:49:37 : epoch 69242a2b : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 09:49:38 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:49:38 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:49:38 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:49:38.122 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:49:39 compute-1 ceph-mon[80009]: pgmap v672: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
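The ceph-mon pgmap lines are periodic cluster summaries: placement-group states plus rolling read/write throughput and op rate. If those rates are needed as data, a small extractor along these lines should work; the pattern is fitted to the variants visible in this log, where the wr field is sometimes absent:

```python
import re

# Assumed pattern for the recurring pgmap summaries, e.g.
# "pgmap v672: 353 pgs: 353 active+clean; ...; 1.2 KiB/s rd, 597 B/s wr, 1 op/s"
# The "wr" field is optional (some lines report only "rd" and "op/s").
PGMAP_RE = re.compile(
    r"pgmap v(?P<ver>\d+): (?P<pgs>\d+) pgs:.*; "
    r"(?P<rd>[\d.]+ (?:B|KiB|MiB))/s rd"
    r"(?:, (?P<wr>[\d.]+ (?:B|KiB|MiB))/s wr)?"
    r", (?P<ops>\d+) op/s"
)

def pgmap_rates(lines):
    """Yield (version, read rate, write rate, ops) per pgmap summary."""
    for line in lines:
        m = PGMAP_RE.search(line)
        if m:
            yield m["ver"], m["rd"], m["wr"] or "0 B", m["ops"]

if __name__ == "__main__":
    import sys
    for ver, rd, wr, ops in pgmap_rates(sys.stdin):
        print(f"v{ver}: rd={rd}/s wr={wr}/s ops={ops}")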
Nov 24 09:49:39 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:49:39 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:49:39 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:49:39 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:49:39.418 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:49:40 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:49:40 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:49:40 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:49:40.125 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:49:41 compute-1 ceph-mon[80009]: pgmap v673: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 596 B/s wr, 1 op/s
Nov 24 09:49:41 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:49:41 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:49:41 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:49:41.421 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:49:42 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:49:42 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:49:42 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:49:42.129 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:49:43 compute-1 ceph-mon[80009]: pgmap v674: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 1023 B/s wr, 3 op/s
Nov 24 09:49:43 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:49:43 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:49:43 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:49:43.423 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:49:43 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[231871]: 24/11/2025 09:49:43 : epoch 69242a2b : compute-1 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Nov 24 09:49:43 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[231871]: 24/11/2025 09:49:43 : epoch 69242a2b : compute-1 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Nov 24 09:49:43 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[231871]: 24/11/2025 09:49:43 : epoch 69242a2b : compute-1 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Nov 24 09:49:43 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[231871]: 24/11/2025 09:49:43 : epoch 69242a2b : compute-1 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Nov 24 09:49:43 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[231871]: 24/11/2025 09:49:43 : epoch 69242a2b : compute-1 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Nov 24 09:49:43 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[231871]: 24/11/2025 09:49:43 : epoch 69242a2b : compute-1 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Nov 24 09:49:43 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[231871]: 24/11/2025 09:49:43 : epoch 69242a2b : compute-1 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Nov 24 09:49:43 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[231871]: 24/11/2025 09:49:43 : epoch 69242a2b : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 24 09:49:43 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[231871]: 24/11/2025 09:49:43 : epoch 69242a2b : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 24 09:49:43 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[231871]: 24/11/2025 09:49:43 : epoch 69242a2b : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 24 09:49:43 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[231871]: 24/11/2025 09:49:43 : epoch 69242a2b : compute-1 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Nov 24 09:49:43 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[231871]: 24/11/2025 09:49:43 : epoch 69242a2b : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 24 09:49:43 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[231871]: 24/11/2025 09:49:43 : epoch 69242a2b : compute-1 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Nov 24 09:49:43 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[231871]: 24/11/2025 09:49:43 : epoch 69242a2b : compute-1 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Nov 24 09:49:43 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[231871]: 24/11/2025 09:49:43 : epoch 69242a2b : compute-1 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Nov 24 09:49:43 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[231871]: 24/11/2025 09:49:43 : epoch 69242a2b : compute-1 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Nov 24 09:49:43 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[231871]: 24/11/2025 09:49:43 : epoch 69242a2b : compute-1 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
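The burst of CRIT/WARN entries above is worth separating from the surrounding noise: the D-Bus failures mean the containerized ganesha cannot reach /run/dbus/system_bus_socket, so the dbus service thread exits (as logged a few lines below), and the Kerberos messages mean no usable nfs principal or keytab entry is available for callback credentials. Neither stops NFS startup here, but both are the kind of thing a triage pass should surface. A rough tally of CRIT messages by originating function, assuming the layout matches the lines above:

```python
import re
from collections import Counter

# Assumed layout of the ganesha log lines, e.g.
# "ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed ..."
CRIT_RE = re.compile(r"ganesha\.nfsd-\d+\[\w+\] (?P<func>\w+) :(?P<comp>[A-Z _]+) :CRIT :")

def crit_counts(lines):
    """Count CRIT-level ganesha messages by the function that logged them."""
    counts = Counter()
    for line in lines:
        m = CRIT_RE.search(line)
        if m:
            counts[m.group("func")] += 1
    return counts

if __name__ == "__main__":
    import sys
    for func, n in crit_counts(sys.stdin).most_common():
        print(f"{n:4d}  {func}")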
Nov 24 09:49:43 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[231871]: 24/11/2025 09:49:43 : epoch 69242a2b : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Nov 24 09:49:43 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[231871]: 24/11/2025 09:49:43 : epoch 69242a2b : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Nov 24 09:49:43 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[231871]: 24/11/2025 09:49:43 : epoch 69242a2b : compute-1 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Nov 24 09:49:43 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[231871]: 24/11/2025 09:49:43 : epoch 69242a2b : compute-1 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Nov 24 09:49:43 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[231871]: 24/11/2025 09:49:43 : epoch 69242a2b : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Nov 24 09:49:43 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[231871]: 24/11/2025 09:49:43 : epoch 69242a2b : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Nov 24 09:49:43 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[231871]: 24/11/2025 09:49:43 : epoch 69242a2b : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Nov 24 09:49:43 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[231871]: 24/11/2025 09:49:43 : epoch 69242a2b : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Nov 24 09:49:43 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[231871]: 24/11/2025 09:49:43 : epoch 69242a2b : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Nov 24 09:49:43 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[231871]: 24/11/2025 09:49:43 : epoch 69242a2b : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Nov 24 09:49:43 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[231871]: 24/11/2025 09:49:43 : epoch 69242a2b : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe414000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:49:44 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[231871]: 24/11/2025 09:49:43 : epoch 69242a2b : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe4000016c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
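These svc_vc_recv events repeat for the rest of the capture at roughly one per second, always on fd 38 and cycling through the svc_* worker threads, which lines up with the cadence of the HAProxy Layer4 checks against this ganesha backend (see the UP/DOWN transitions further down). A plausible reading is that the checker opens a bare TCP connection and closes it without completing the PROXY-protocol header the listener expects, so ganesha marks the transport dead; the literal '%' where a length should be printed is a quirk of the daemon's own log formatting, not damage to this capture. A quick tally to confirm the pattern, with the regex fitted to these lines:

```python
import re
from collections import Counter

# Assumed pattern for the recurring TIRPC "proxy header rest len failed"
# events, counted per service thread to show they track the ~1/s health
# checks rather than real client traffic.
TIRPC_RE = re.compile(
    r"ganesha\.nfsd-\d+\[(?P<thread>svc_\d+)\] rpc :TIRPC :EVENT :svc_vc_recv"
)

def tirpc_by_thread(lines):
    """Count dropped health-check connections per svc_* thread."""
    return Counter(
        m.group("thread") for line in lines if (m := TIRPC_RE.search(line))
    )

if __name__ == "__main__":
    import sys
    for thread, n in sorted(tirpc_by_thread(sys.stdin).items()):
        print(f"{thread}: {n} dropped connections")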
Nov 24 09:49:44 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:49:44 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:49:44 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:49:44.133 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:49:44 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:49:44 compute-1 sudo[231935]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 09:49:44 compute-1 sudo[231935]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:49:44 compute-1 sudo[231935]: pam_unix(sudo:session): session closed for user root
Nov 24 09:49:44 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[231871]: 24/11/2025 09:49:44 : epoch 69242a2b : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3f0000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:49:45 compute-1 ceph-mon[80009]: pgmap v675: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 1023 B/s wr, 3 op/s
Nov 24 09:49:45 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 09:49:45 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:49:45 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:49:45 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:49:45 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:49:45.428 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:49:45 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-haproxy-nfs-cephfs-compute-1-rsdpvy[85975]: [WARNING] 327/094945 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Nov 24 09:49:45 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[231871]: 24/11/2025 09:49:45 : epoch 69242a2b : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3e8000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:49:46 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[231871]: 24/11/2025 09:49:46 : epoch 69242a2b : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3f4000fa0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:49:46 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:49:46 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:49:46 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:49:46.137 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:49:46 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:49:46 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[231871]: 24/11/2025 09:49:46 : epoch 69242a2b : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe4000016c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:49:47 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:49:47 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:49:47 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:49:47.431 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:49:47 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[231871]: 24/11/2025 09:49:47 : epoch 69242a2b : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3f00016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:49:48 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[231871]: 24/11/2025 09:49:48 : epoch 69242a2b : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3e80016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:49:48 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:49:48 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:49:48 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:49:48.141 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:49:48 compute-1 ceph-mon[80009]: pgmap v676: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 1023 B/s wr, 3 op/s
Nov 24 09:49:48 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[231871]: 24/11/2025 09:49:48 : epoch 69242a2b : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3f4001ac0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:49:49 compute-1 ceph-mon[80009]: pgmap v677: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 1023 B/s wr, 3 op/s
Nov 24 09:49:49 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:49:49 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:49:49 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:49:49 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:49:49.433 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:49:49 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[231871]: 24/11/2025 09:49:49 : epoch 69242a2b : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe4000016c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:49:50 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[231871]: 24/11/2025 09:49:50 : epoch 69242a2b : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3f00016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:49:50 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:49:50 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:49:50 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:49:50.143 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:49:50 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[231871]: 24/11/2025 09:49:50 : epoch 69242a2b : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe4000016c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:49:51 compute-1 rsyslogd[1005]: imjournal: 1384 messages lost due to rate-limiting (20000 allowed within 600 seconds)
Nov 24 09:49:51 compute-1 ceph-mon[80009]: pgmap v678: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Nov 24 09:49:51 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:49:51 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:49:51 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:49:51.436 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:49:51 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[231871]: 24/11/2025 09:49:51 : epoch 69242a2b : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3e80016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:49:52 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[231871]: 24/11/2025 09:49:52 : epoch 69242a2b : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3f00016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:49:52 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:49:52 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:49:52 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:49:52.146 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:49:52 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[231871]: 24/11/2025 09:49:52 : epoch 69242a2b : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3f00016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:49:53 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-haproxy-nfs-cephfs-compute-1-rsdpvy[85975]: [WARNING] 327/094953 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
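HAProxy's own view of the ganesha backends shows up as these UP/DOWN transitions: nfs.cephfs.0 passing its Layer4 check earlier, nfs.cephfs.1 refusing connections here. Extracting just the state changes gives a compact availability timeline; a sketch assuming the message layout shown:

```python
import re

# Assumed pattern for the HAProxy server state-change messages, e.g.
# "Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, ..."
HAPROXY_RE = re.compile(
    r"Server (?P<server>\S+) is (?P<state>UP|DOWN), reason: (?P<reason>[^,.]+)"
)

def backend_events(lines):
    """Yield (server, UP/DOWN, reason) for each HAProxy state change."""
    for line in lines:
        m = HAPROXY_RE.search(line)
        if m:
            yield m["server"], m["state"], m["reason"]

if __name__ == "__main__":
    import sys
    for server, state, reason in backend_events(sys.stdin):
        print(f"{server}: {state} ({reason})")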
Nov 24 09:49:53 compute-1 podman[231964]: 2025-11-24 09:49:53.314216984 +0000 UTC m=+0.057951715 container health_status 16798e42088c21440960ac0ba4f86339f9edab5c078277a1dcf4fb01c220329c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3)
Nov 24 09:49:53 compute-1 ceph-mon[80009]: pgmap v679: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Nov 24 09:49:53 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:49:53 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:49:53 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:49:53.438 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:49:53 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[231871]: 24/11/2025 09:49:53 : epoch 69242a2b : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe4000016c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:49:54 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[231871]: 24/11/2025 09:49:54 : epoch 69242a2b : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3e80016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:49:54 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:49:54 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:49:54 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:49:54.149 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:49:54 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:49:54 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[231871]: 24/11/2025 09:49:54 : epoch 69242a2b : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3f00016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:49:55 compute-1 podman[231985]: 2025-11-24 09:49:55.367757953 +0000 UTC m=+0.102758427 container health_status c4a7fece2a8abbd773f184380ebf0443df5298e1c3e7a580495db5c46749acd2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 24 09:49:55 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:49:55 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:49:55 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:49:55.440 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:49:55 compute-1 ceph-mon[80009]: pgmap v680: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 B/s wr, 0 op/s
Nov 24 09:49:55 compute-1 ceph-mon[80009]: from='client.? 192.168.122.10:0/394130552' entity='client.openstack' cmd=[{"prefix": "version", "format": "json"}]: dispatch
Nov 24 09:49:55 compute-1 ceph-mon[80009]: from='client.? 192.168.122.10:0/2190235602' entity='client.openstack' cmd=[{"prefix": "version", "format": "json"}]: dispatch
Nov 24 09:49:55 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[231871]: 24/11/2025 09:49:55 : epoch 69242a2b : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3f4002580 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:49:56 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[231871]: 24/11/2025 09:49:56 : epoch 69242a2b : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe4000016c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:49:56 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:49:56 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:49:56 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:49:56.153 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:49:56 compute-1 ceph-mon[80009]: from='client.15162 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Nov 24 09:49:56 compute-1 ceph-mon[80009]: from='client.24541 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Nov 24 09:49:56 compute-1 ceph-mon[80009]: from='client.24541 -' entity='client.openstack' cmd=[{"prefix": "nfs cluster info", "cluster_id": "cephfs", "format": "json"}]: dispatch
Nov 24 09:49:56 compute-1 ceph-mon[80009]: pgmap v681: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 B/s wr, 0 op/s
Nov 24 09:49:56 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[231871]: 24/11/2025 09:49:56 : epoch 69242a2b : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3e8002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:49:57 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:49:57 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:49:57 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:49:57.443 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:49:57 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[231871]: 24/11/2025 09:49:57 : epoch 69242a2b : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3f00016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:49:58 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[231871]: 24/11/2025 09:49:58 : epoch 69242a2b : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3f4002f50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:49:58 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:49:58 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:49:58 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:49:58.157 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:49:58 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[231871]: 24/11/2025 09:49:58 : epoch 69242a2b : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe4000016c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:49:59 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:49:59 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:49:59 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:49:59 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:49:59.570 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:49:59 compute-1 ceph-mon[80009]: pgmap v682: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Nov 24 09:49:59 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[231871]: 24/11/2025 09:49:59 : epoch 69242a2b : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3e8002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:50:00 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[231871]: 24/11/2025 09:50:00 : epoch 69242a2b : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3f00036e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:50:00 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:50:00 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:50:00 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:50:00.160 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:50:00 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 09:50:00 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:50:00 compute-1 ceph-mon[80009]: overall HEALTH_OK
Nov 24 09:50:00 compute-1 ceph-mon[80009]: pgmap v683: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Nov 24 09:50:00 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:50:00 compute-1 ceph-mon[80009]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #40. Immutable memtables: 0.
Nov 24 09:50:00 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:50:00.610446) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 24 09:50:00 compute-1 ceph-mon[80009]: rocksdb: [db/flush_job.cc:856] [default] [JOB 21] Flushing memtable with next log file: 40
Nov 24 09:50:00 compute-1 ceph-mon[80009]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763977800610488, "job": 21, "event": "flush_started", "num_memtables": 1, "num_entries": 2353, "num_deletes": 251, "total_data_size": 6357529, "memory_usage": 6430848, "flush_reason": "Manual Compaction"}
Nov 24 09:50:00 compute-1 ceph-mon[80009]: rocksdb: [db/flush_job.cc:885] [default] [JOB 21] Level-0 flush table #41: started
Nov 24 09:50:00 compute-1 ceph-mon[80009]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763977800630100, "cf_name": "default", "job": 21, "event": "table_file_creation", "file_number": 41, "file_size": 4163317, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 20835, "largest_seqno": 23183, "table_properties": {"data_size": 4153694, "index_size": 6117, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2437, "raw_key_size": 19510, "raw_average_key_size": 20, "raw_value_size": 4134666, "raw_average_value_size": 4284, "num_data_blocks": 268, "num_entries": 965, "num_filter_entries": 965, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763977576, "oldest_key_time": 1763977576, "file_creation_time": 1763977800, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "299c38d0-06ca-4074-b462-97cee3c14bc3", "db_session_id": "IKBI0BILOO7CZC90TSBP", "orig_file_number": 41, "seqno_to_time_mapping": "N/A"}}
Nov 24 09:50:00 compute-1 ceph-mon[80009]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 21] Flush lasted 19733 microseconds, and 9584 cpu microseconds.
Nov 24 09:50:00 compute-1 ceph-mon[80009]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 24 09:50:00 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:50:00.630180) [db/flush_job.cc:967] [default] [JOB 21] Level-0 flush table #41: 4163317 bytes OK
Nov 24 09:50:00 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:50:00.630209) [db/memtable_list.cc:519] [default] Level-0 commit table #41 started
Nov 24 09:50:00 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:50:00.631883) [db/memtable_list.cc:722] [default] Level-0 commit table #41: memtable #1 done
Nov 24 09:50:00 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:50:00.631913) EVENT_LOG_v1 {"time_micros": 1763977800631904, "job": 21, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 24 09:50:00 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:50:00.631940) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 24 09:50:00 compute-1 ceph-mon[80009]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 21] Try to delete WAL files size 6347095, prev total WAL file size 6347095, number of live WAL files 2.
Nov 24 09:50:00 compute-1 ceph-mon[80009]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000037.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 09:50:00 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:50:00.634731) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031353036' seq:72057594037927935, type:22 .. '7061786F730031373538' seq:0, type:0; will stop at (end)
Nov 24 09:50:00 compute-1 ceph-mon[80009]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 22] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 24 09:50:00 compute-1 ceph-mon[80009]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 21 Base level 0, inputs: [41(4065KB)], [39(12MB)]
Nov 24 09:50:00 compute-1 ceph-mon[80009]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763977800634779, "job": 22, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [41], "files_L6": [39], "score": -1, "input_data_size": 17118378, "oldest_snapshot_seqno": -1}
Nov 24 09:50:00 compute-1 ceph-mon[80009]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 22] Generated table #42: 5452 keys, 14940846 bytes, temperature: kUnknown
Nov 24 09:50:00 compute-1 ceph-mon[80009]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763977800715795, "cf_name": "default", "job": 22, "event": "table_file_creation", "file_number": 42, "file_size": 14940846, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 14902070, "index_size": 24074, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13637, "raw_key_size": 137561, "raw_average_key_size": 25, "raw_value_size": 14801259, "raw_average_value_size": 2714, "num_data_blocks": 993, "num_entries": 5452, "num_filter_entries": 5452, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763976422, "oldest_key_time": 0, "file_creation_time": 1763977800, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "299c38d0-06ca-4074-b462-97cee3c14bc3", "db_session_id": "IKBI0BILOO7CZC90TSBP", "orig_file_number": 42, "seqno_to_time_mapping": "N/A"}}
Nov 24 09:50:00 compute-1 ceph-mon[80009]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 24 09:50:00 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:50:00.716178) [db/compaction/compaction_job.cc:1663] [default] [JOB 22] Compacted 1@0 + 1@6 files to L6 => 14940846 bytes
Nov 24 09:50:00 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:50:00.717620) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 211.0 rd, 184.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(4.0, 12.4 +0.0 blob) out(14.2 +0.0 blob), read-write-amplify(7.7) write-amplify(3.6) OK, records in: 5972, records dropped: 520 output_compression: NoCompression
Nov 24 09:50:00 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:50:00.717651) EVENT_LOG_v1 {"time_micros": 1763977800717637, "job": 22, "event": "compaction_finished", "compaction_time_micros": 81115, "compaction_time_cpu_micros": 55626, "output_level": 6, "num_output_files": 1, "total_output_size": 14940846, "num_input_records": 5972, "num_output_records": 5452, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 24 09:50:00 compute-1 ceph-mon[80009]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000041.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 09:50:00 compute-1 ceph-mon[80009]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763977800719248, "job": 22, "event": "table_file_deletion", "file_number": 41}
Nov 24 09:50:00 compute-1 ceph-mon[80009]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000039.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 09:50:00 compute-1 ceph-mon[80009]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763977800724096, "job": 22, "event": "table_file_deletion", "file_number": 39}
Nov 24 09:50:00 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:50:00.634628) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 09:50:00 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:50:00.724179) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 09:50:00 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:50:00.724191) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 09:50:00 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:50:00.724194) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 09:50:00 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:50:00.724197) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 09:50:00 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:50:00.724198) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
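The compaction summary above (JOB 22) is self-consistent and worth decoding: a ~4.0 MB level-0 flush (file #41) was merged with the existing ~12.4 MB level-6 file (#39) into a single ~14.2 MB output (#42), with 520 of 5,972 records dropped as overwritten or deleted. The reported amplification factors follow directly from byte counts that appear verbatim in the log:

```python
# Reproducing the amplification figures in the JOB 22 compaction summary
# above, from byte counts that appear verbatim in the surrounding log:
#   L0 input file #41 : 4,163,317 bytes
#   total input       : 17,118,378 bytes (files #41 + #39)
#   output file #42   : 14,940,846 bytes
l0_bytes, total_in, out_bytes = 4_163_317, 17_118_378, 14_940_846

write_amplify = out_bytes / l0_bytes                    # -> 3.6 (as logged)
read_write_amplify = (total_in + out_bytes) / l0_bytes  # -> 7.7 (as logged)

print(f"write-amplify      = {write_amplify:.1f}")
print(f"read-write-amplify = {read_write_amplify:.1f}")
```

Both figures divide by the new (L0) bytes only, which is why rewriting a mostly unchanged 12 MB base file shows up as 3.6x write amplification for 4 MB of fresh data.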
Nov 24 09:50:00 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[231871]: 24/11/2025 09:50:00 : epoch 69242a2b : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3f4002f50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:50:01 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:50:01 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:50:01 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:50:01.573 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:50:01 compute-1 ceph-mon[80009]: from='client.? 192.168.122.10:0/4055330357' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 24 09:50:01 compute-1 ceph-mon[80009]: from='client.? 192.168.122.10:0/4055330357' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 24 09:50:01 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[231871]: 24/11/2025 09:50:01 : epoch 69242a2b : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe4000016c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:50:02 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[231871]: 24/11/2025 09:50:02 : epoch 69242a2b : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe4000016c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:50:02 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:50:02 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:50:02 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:50:02.163 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:50:02 compute-1 podman[232017]: 2025-11-24 09:50:02.328173993 +0000 UTC m=+0.072234366 container health_status 6794d9d2f16f865643977633ea2bfec0506af6c4f15262c278b71a4c0754ea0b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team)
Nov 24 09:50:02 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[231871]: 24/11/2025 09:50:02 : epoch 69242a2b : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 09:50:02 compute-1 ceph-mon[80009]: pgmap v684: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Nov 24 09:50:02 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[231871]: 24/11/2025 09:50:02 : epoch 69242a2b : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3f00036e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:50:03 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:50:03 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:50:03 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:50:03.574 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:50:03 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[231871]: 24/11/2025 09:50:03 : epoch 69242a2b : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3f4002f50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:50:04 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[231871]: 24/11/2025 09:50:04 : epoch 69242a2b : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe4000016c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:50:04 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:50:04 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:50:04 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:50:04.165 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:50:04 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:50:04 compute-1 sudo[232037]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 09:50:04 compute-1 sudo[232037]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:50:04 compute-1 sudo[232037]: pam_unix(sudo:session): session closed for user root
Nov 24 09:50:04 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[231871]: 24/11/2025 09:50:04 : epoch 69242a2b : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3e8003820 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:50:05 compute-1 ceph-mon[80009]: pgmap v685: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Nov 24 09:50:05 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[231871]: 24/11/2025 09:50:05 : epoch 69242a2b : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 09:50:05 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[231871]: 24/11/2025 09:50:05 : epoch 69242a2b : compute-1 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 09:50:05 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:50:05 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:50:05 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:50:05.576 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:50:05 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[231871]: 24/11/2025 09:50:05 : epoch 69242a2b : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3f00036e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:50:06 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[231871]: 24/11/2025 09:50:06 : epoch 69242a2b : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3f4002f50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:50:06 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:50:06 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:50:06 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:50:06.168 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:50:06 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[231871]: 24/11/2025 09:50:06 : epoch 69242a2b : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe4000016c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:50:07 compute-1 ceph-mon[80009]: pgmap v686: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Nov 24 09:50:07 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:50:07 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:50:07 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:50:07.578 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:50:08 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[231871]: 24/11/2025 09:50:07 : epoch 69242a2b : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3e8003820 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:50:08 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[231871]: 24/11/2025 09:50:08 : epoch 69242a2b : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3f00036e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:50:08 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:50:08 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:50:08 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:50:08.171 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:50:08 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[231871]: 24/11/2025 09:50:08 : epoch 69242a2b : compute-1 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Nov 24 09:50:08 compute-1 nova_compute[230010]: 2025-11-24 09:50:08.766 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 09:50:08 compute-1 nova_compute[230010]: 2025-11-24 09:50:08.766 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 09:50:08 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[231871]: 24/11/2025 09:50:08 : epoch 69242a2b : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3f40044d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:50:09 compute-1 ceph-mon[80009]: pgmap v687: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 938 B/s wr, 3 op/s
Nov 24 09:50:09 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:50:09 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:50:09 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:50:09 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:50:09.580 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:50:09 compute-1 nova_compute[230010]: 2025-11-24 09:50:09.765 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 09:50:09 compute-1 nova_compute[230010]: 2025-11-24 09:50:09.766 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 09:50:09 compute-1 nova_compute[230010]: 2025-11-24 09:50:09.766 230014 DEBUG nova.compute.manager [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 24 09:50:10 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[231871]: 24/11/2025 09:50:10 : epoch 69242a2b : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe4000016c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:50:10 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[231871]: 24/11/2025 09:50:10 : epoch 69242a2b : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe4000016c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:50:10 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:50:10 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:50:10 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:50:10.175 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:50:10 compute-1 nova_compute[230010]: 2025-11-24 09:50:10.764 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 09:50:10 compute-1 nova_compute[230010]: 2025-11-24 09:50:10.765 230014 DEBUG nova.compute.manager [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 24 09:50:10 compute-1 nova_compute[230010]: 2025-11-24 09:50:10.765 230014 DEBUG nova.compute.manager [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 24 09:50:10 compute-1 nova_compute[230010]: 2025-11-24 09:50:10.776 230014 DEBUG nova.compute.manager [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 24 09:50:10 compute-1 nova_compute[230010]: 2025-11-24 09:50:10.776 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 09:50:10 compute-1 nova_compute[230010]: 2025-11-24 09:50:10.776 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 09:50:10 compute-1 nova_compute[230010]: 2025-11-24 09:50:10.795 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 09:50:10 compute-1 nova_compute[230010]: 2025-11-24 09:50:10.795 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 09:50:10 compute-1 nova_compute[230010]: 2025-11-24 09:50:10.795 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 09:50:10 compute-1 nova_compute[230010]: 2025-11-24 09:50:10.795 230014 DEBUG nova.compute.resource_tracker [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 24 09:50:10 compute-1 nova_compute[230010]: 2025-11-24 09:50:10.796 230014 DEBUG oslo_concurrency.processutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 09:50:10 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[231871]: 24/11/2025 09:50:10 : epoch 69242a2b : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3f00036e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:50:11 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 24 09:50:11 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3497774359' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 09:50:11 compute-1 nova_compute[230010]: 2025-11-24 09:50:11.254 230014 DEBUG oslo_concurrency.processutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.459s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 09:50:11 compute-1 ceph-mon[80009]: pgmap v688: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 938 B/s wr, 3 op/s
Nov 24 09:50:11 compute-1 ceph-mon[80009]: from='client.? 192.168.122.102:0/1883317646' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 09:50:11 compute-1 ceph-mon[80009]: from='client.? 192.168.122.101:0/3497774359' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 09:50:11 compute-1 ceph-mon[80009]: from='client.? 192.168.122.100:0/695719103' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 09:50:11 compute-1 nova_compute[230010]: 2025-11-24 09:50:11.402 230014 WARNING nova.virt.libvirt.driver [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 24 09:50:11 compute-1 nova_compute[230010]: 2025-11-24 09:50:11.403 230014 DEBUG nova.compute.resource_tracker [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=5248MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 24 09:50:11 compute-1 nova_compute[230010]: 2025-11-24 09:50:11.404 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 09:50:11 compute-1 nova_compute[230010]: 2025-11-24 09:50:11.404 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 09:50:11 compute-1 nova_compute[230010]: 2025-11-24 09:50:11.469 230014 DEBUG nova.compute.resource_tracker [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 24 09:50:11 compute-1 nova_compute[230010]: 2025-11-24 09:50:11.470 230014 DEBUG nova.compute.resource_tracker [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 24 09:50:11 compute-1 nova_compute[230010]: 2025-11-24 09:50:11.485 230014 DEBUG oslo_concurrency.processutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 09:50:11 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:50:11 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:50:11 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:50:11.582 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:50:11 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 24 09:50:11 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2769118241' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 09:50:11 compute-1 nova_compute[230010]: 2025-11-24 09:50:11.912 230014 DEBUG oslo_concurrency.processutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.427s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 09:50:11 compute-1 nova_compute[230010]: 2025-11-24 09:50:11.917 230014 DEBUG nova.compute.provider_tree [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Inventory has not changed in ProviderTree for provider: 1b7b0f22-dba8-42a8-9de3-763c9152946e update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 24 09:50:11 compute-1 nova_compute[230010]: 2025-11-24 09:50:11.941 230014 DEBUG nova.scheduler.client.report [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Inventory has not changed for provider 1b7b0f22-dba8-42a8-9de3-763c9152946e based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 24 09:50:11 compute-1 nova_compute[230010]: 2025-11-24 09:50:11.943 230014 DEBUG nova.compute.resource_tracker [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 24 09:50:11 compute-1 nova_compute[230010]: 2025-11-24 09:50:11.943 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.539s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 09:50:12 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[231871]: 24/11/2025 09:50:12 : epoch 69242a2b : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe4080014d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:50:12 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[231871]: 24/11/2025 09:50:12 : epoch 69242a2b : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3e8003820 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:50:12 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:50:12 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:50:12 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:50:12.178 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:50:12 compute-1 ceph-mon[80009]: from='client.? 192.168.122.101:0/2769118241' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 09:50:12 compute-1 ceph-mon[80009]: from='client.? 192.168.122.102:0/3495070877' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 09:50:12 compute-1 ceph-mon[80009]: from='client.? 192.168.122.100:0/2015223078' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 09:50:12 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[231871]: 24/11/2025 09:50:12 : epoch 69242a2b : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe4000016c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:50:12 compute-1 nova_compute[230010]: 2025-11-24 09:50:12.938 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 09:50:12 compute-1 nova_compute[230010]: 2025-11-24 09:50:12.939 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 09:50:13 compute-1 ceph-mon[80009]: pgmap v689: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.6 KiB/s rd, 1023 B/s wr, 3 op/s
Nov 24 09:50:13 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:50:13 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:50:13 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:50:13.585 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:50:14 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[231871]: 24/11/2025 09:50:14 : epoch 69242a2b : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3f00036e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:50:14 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[231871]: 24/11/2025 09:50:14 : epoch 69242a2b : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe408001ff0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:50:14 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:50:14 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:50:14 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:50:14.181 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:50:14 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:50:14 compute-1 ceph-mon[80009]: pgmap v690: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Nov 24 09:50:14 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[231871]: 24/11/2025 09:50:14 : epoch 69242a2b : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3e8003820 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:50:15 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-haproxy-nfs-cephfs-compute-1-rsdpvy[85975]: [WARNING] 327/095015 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 1ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Nov 24 09:50:15 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 09:50:15 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:50:15 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:50:15 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:50:15 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 09:50:15 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:50:15.587 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 09:50:16 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[231871]: 24/11/2025 09:50:16 : epoch 69242a2b : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe400003780 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:50:16 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[231871]: 24/11/2025 09:50:16 : epoch 69242a2b : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3f00036e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:50:16 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:50:16 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:50:16 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:50:16.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:50:16 compute-1 ceph-mon[80009]: pgmap v691: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Nov 24 09:50:16 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[231871]: 24/11/2025 09:50:16 : epoch 69242a2b : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3e4000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:50:17 compute-1 ceph-mon[80009]: from='client.? 192.168.122.10:0/2639730056' entity='client.openstack' cmd=[{"prefix": "version", "format": "json"}]: dispatch
Nov 24 09:50:17 compute-1 ceph-mon[80009]: from='client.? 192.168.122.10:0/2923448658' entity='client.openstack' cmd=[{"prefix": "version", "format": "json"}]: dispatch
Nov 24 09:50:17 compute-1 ceph-mon[80009]: from='client.15207 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Nov 24 09:50:17 compute-1 ceph-mon[80009]: from='client.15210 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Nov 24 09:50:17 compute-1 ceph-mon[80009]: from='client.15210 -' entity='client.openstack' cmd=[{"prefix": "nfs cluster info", "cluster_id": "cephfs", "format": "json"}]: dispatch
Nov 24 09:50:17 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:50:17 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:50:17 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:50:17.590 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:50:18 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[231871]: 24/11/2025 09:50:18 : epoch 69242a2b : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe408001ff0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:50:18 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[231871]: 24/11/2025 09:50:18 : epoch 69242a2b : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe400003780 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:50:18 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:50:18 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:50:18 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:50:18.187 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:50:18 compute-1 ceph-mon[80009]: pgmap v692: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Nov 24 09:50:18 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[231871]: 24/11/2025 09:50:18 : epoch 69242a2b : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3f0003700 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:50:19 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:50:19 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:50:19 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:50:19 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:50:19.593 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:50:20 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[231871]: 24/11/2025 09:50:20 : epoch 69242a2b : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3e40016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:50:20 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[231871]: 24/11/2025 09:50:20 : epoch 69242a2b : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe408001ff0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:50:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:50:20.051 142336 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 09:50:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:50:20.051 142336 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 09:50:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:50:20.051 142336 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 09:50:20 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:50:20 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:50:20 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:50:20.190 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:50:20 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[231871]: 24/11/2025 09:50:20 : epoch 69242a2b : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe400003780 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:50:21 compute-1 ceph-mon[80009]: pgmap v693: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Nov 24 09:50:21 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:50:21 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:50:21 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:50:21.616 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:50:22 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[231871]: 24/11/2025 09:50:22 : epoch 69242a2b : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3f0003720 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:50:22 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[231871]: 24/11/2025 09:50:22 : epoch 69242a2b : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3e40016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:50:22 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:50:22 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:50:22 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:50:22.192 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:50:22 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[231871]: 24/11/2025 09:50:22 : epoch 69242a2b : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe408003480 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:50:23 compute-1 ceph-mon[80009]: pgmap v694: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Nov 24 09:50:23 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:50:23 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:50:23 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:50:23.618 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:50:24 compute-1 kernel: ganesha.nfsd[231924]: segfault at 50 ip 00007fe4c0ab232e sp 00007fe489ffa210 error 4 in libntirpc.so.5.8[7fe4c0a97000+2c000] likely on CPU 0 (core 0, socket 0)
Nov 24 09:50:24 compute-1 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Nov 24 09:50:24 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[231871]: 24/11/2025 09:50:24 : epoch 69242a2b : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe400003780 fd 38 proxy ignored for local
Nov 24 09:50:24 compute-1 systemd[1]: Started Process Core Dump (PID 232117/UID 0).
Nov 24 09:50:24 compute-1 podman[232118]: 2025-11-24 09:50:24.105592889 +0000 UTC m=+0.060019101 container health_status 16798e42088c21440960ac0ba4f86339f9edab5c078277a1dcf4fb01c220329c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd)
Nov 24 09:50:24 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:50:24 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:50:24 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:50:24.196 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:50:24 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:50:24 compute-1 sudo[232140]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 09:50:24 compute-1 sudo[232140]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:50:24 compute-1 sudo[232140]: pam_unix(sudo:session): session closed for user root
Nov 24 09:50:25 compute-1 systemd-coredump[232119]: Process 231875 (ganesha.nfsd) of user 0 dumped core.
                                                    
                                                    Stack trace of thread 45:
                                                    #0  0x00007fe4c0ab232e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)
                                                    ELF object binary architecture: AMD x86-64
Nov 24 09:50:25 compute-1 systemd[1]: systemd-coredump@13-232117-0.service: Deactivated successfully.
Nov 24 09:50:25 compute-1 systemd[1]: systemd-coredump@13-232117-0.service: Consumed 1.179s CPU time.
Nov 24 09:50:25 compute-1 podman[232169]: 2025-11-24 09:50:25.315969601 +0000 UTC m=+0.025578297 container died 1da4f2ab964ee7c0bfd002d80d7b0b1988bc7d8f299c0f28278aaaeea008eea6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 09:50:25 compute-1 systemd[1]: var-lib-containers-storage-overlay-7965caac272a713787c8d26f8d5128ae9091a3b21b0f38c635c05f705b356082-merged.mount: Deactivated successfully.
Nov 24 09:50:25 compute-1 podman[232169]: 2025-11-24 09:50:25.351020419 +0000 UTC m=+0.060629085 container remove 1da4f2ab964ee7c0bfd002d80d7b0b1988bc7d8f299c0f28278aaaeea008eea6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 24 09:50:25 compute-1 systemd[1]: ceph-84a084c3-61a7-5de7-8207-1f88efa59a64@nfs.cephfs.0.0.compute-1.vvoanr.service: Main process exited, code=exited, status=139/n/a
Nov 24 09:50:25 compute-1 ceph-mon[80009]: pgmap v695: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:50:25 compute-1 systemd[1]: ceph-84a084c3-61a7-5de7-8207-1f88efa59a64@nfs.cephfs.0.0.compute-1.vvoanr.service: Failed with result 'exit-code'.
Nov 24 09:50:25 compute-1 systemd[1]: ceph-84a084c3-61a7-5de7-8207-1f88efa59a64@nfs.cephfs.0.0.compute-1.vvoanr.service: Consumed 1.396s CPU time.
Nov 24 09:50:25 compute-1 podman[232210]: 2025-11-24 09:50:25.598282783 +0000 UTC m=+0.081127647 container health_status c4a7fece2a8abbd773f184380ebf0443df5298e1c3e7a580495db5c46749acd2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller)
Nov 24 09:50:25 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:50:25 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:50:25 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:50:25.621 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:50:26 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:50:26 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:50:26 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:50:26.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:50:27 compute-1 ceph-mon[80009]: pgmap v696: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:50:27 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:50:27 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:50:27 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:50:27.624 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:50:28 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:50:28 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:50:28 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:50:28.202 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:50:28 compute-1 ceph-mon[80009]: pgmap v697: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:50:29 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:50:29 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:50:29 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 09:50:29 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:50:29.627 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 09:50:29 compute-1 sudo[232239]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 09:50:29 compute-1 sudo[232239]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:50:29 compute-1 sudo[232239]: pam_unix(sudo:session): session closed for user root
Nov 24 09:50:29 compute-1 sudo[232264]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ls
Nov 24 09:50:29 compute-1 sudo[232264]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:50:30 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-haproxy-nfs-cephfs-compute-1-rsdpvy[85975]: [WARNING] 327/095030 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Nov 24 09:50:30 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:50:30 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:50:30 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:50:30.204 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:50:30 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Nov 24 09:50:30 compute-1 ceph-mon[80009]: log_channel(audit) log [INF] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 24 09:50:30 compute-1 podman[232358]: 2025-11-24 09:50:30.26845105 +0000 UTC m=+0.089274466 container exec fca3d6a645ca50145f34396c21cf8798c75622ec7e27bb7d7b9d2df471762abc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-crash-compute-1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1)
Nov 24 09:50:30 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 24 09:50:30 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 24 09:50:30 compute-1 podman[232358]: 2025-11-24 09:50:30.38277973 +0000 UTC m=+0.203603136 container exec_died fca3d6a645ca50145f34396c21cf8798c75622ec7e27bb7d7b9d2df471762abc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-crash-compute-1, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1)
Nov 24 09:50:30 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 09:50:30 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:50:31 compute-1 podman[232495]: 2025-11-24 09:50:31.048767415 +0000 UTC m=+0.129134402 container exec 8385dba62896146966763f0bcd6866f05f5474182998a6b8c2dabcbf77545f8c (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-node-exporter-compute-1, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 24 09:50:31 compute-1 podman[232520]: 2025-11-24 09:50:31.171810668 +0000 UTC m=+0.108026776 container exec_died 8385dba62896146966763f0bcd6866f05f5474182998a6b8c2dabcbf77545f8c (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-node-exporter-compute-1, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 24 09:50:31 compute-1 podman[232495]: 2025-11-24 09:50:31.188722782 +0000 UTC m=+0.269089799 container exec_died 8385dba62896146966763f0bcd6866f05f5474182998a6b8c2dabcbf77545f8c (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-node-exporter-compute-1, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 24 09:50:31 compute-1 ceph-mon[80009]: pgmap v698: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:50:31 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:50:31 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:50:31 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:50:31 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:50:31.629 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:50:31 compute-1 podman[232614]: 2025-11-24 09:50:31.712961566 +0000 UTC m=+0.054934626 container exec 5e659f329edd66b319b97f09144add025da99dc20b0b6d44046c2f8d632eb914 (image=quay.io/ceph/haproxy:2.3, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-haproxy-nfs-cephfs-compute-1-rsdpvy)
Nov 24 09:50:31 compute-1 podman[232614]: 2025-11-24 09:50:31.740726536 +0000 UTC m=+0.082699596 container exec_died 5e659f329edd66b319b97f09144add025da99dc20b0b6d44046c2f8d632eb914 (image=quay.io/ceph/haproxy:2.3, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-haproxy-nfs-cephfs-compute-1-rsdpvy)
Nov 24 09:50:31 compute-1 podman[232682]: 2025-11-24 09:50:31.961277996 +0000 UTC m=+0.059389116 container exec b150f4574d15a215dc003733c271f0cef75e4de7b269181ad25614a88f483866 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-keepalived-nfs-cephfs-compute-1-vrgskq, com.redhat.component=keepalived-container, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vendor=Red Hat, Inc., io.buildah.version=1.28.2, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, release=1793, version=2.2.4, name=keepalived, build-date=2023-02-22T09:23:20, io.k8s.display-name=Keepalived on RHEL 9, io.openshift.tags=Ceph keepalived, description=keepalived for Ceph, distribution-scope=public, summary=Provides keepalived on RHEL 9 for Ceph., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, vcs-type=git)
Nov 24 09:50:32 compute-1 podman[232682]: 2025-11-24 09:50:32.017875711 +0000 UTC m=+0.115986831 container exec_died b150f4574d15a215dc003733c271f0cef75e4de7b269181ad25614a88f483866 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-keepalived-nfs-cephfs-compute-1-vrgskq, release=1793, build-date=2023-02-22T09:23:20, io.k8s.display-name=Keepalived on RHEL 9, summary=Provides keepalived on RHEL 9 for Ceph., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, version=2.2.4, io.openshift.tags=Ceph keepalived, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=keepalived-container, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, description=keepalived for Ceph, name=keepalived, vendor=Red Hat, Inc., distribution-scope=public, io.buildah.version=1.28.2, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9)
Nov 24 09:50:32 compute-1 sudo[232264]: pam_unix(sudo:session): session closed for user root
Nov 24 09:50:32 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Nov 24 09:50:32 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Nov 24 09:50:32 compute-1 sudo[232715]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 09:50:32 compute-1 sudo[232715]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:50:32 compute-1 sudo[232715]: pam_unix(sudo:session): session closed for user root
Nov 24 09:50:32 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:50:32 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:50:32 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:50:32.206 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:50:32 compute-1 sudo[232740]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Nov 24 09:50:32 compute-1 sudo[232740]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:50:32 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Nov 24 09:50:32 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Nov 24 09:50:32 compute-1 sudo[232740]: pam_unix(sudo:session): session closed for user root
Nov 24 09:50:32 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0)
Nov 24 09:50:32 compute-1 ceph-mon[80009]: log_channel(audit) log [INF] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Nov 24 09:50:33 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0)
Nov 24 09:50:33 compute-1 ceph-mon[80009]: log_channel(audit) log [INF] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Nov 24 09:50:33 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:50:33 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:50:33 compute-1 ceph-mon[80009]: pgmap v699: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Nov 24 09:50:33 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:50:33 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:50:33 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Nov 24 09:50:33 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Nov 24 09:50:33 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 24 09:50:33 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 09:50:33 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Nov 24 09:50:33 compute-1 ceph-mon[80009]: log_channel(audit) log [INF] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 09:50:33 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Nov 24 09:50:33 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Nov 24 09:50:33 compute-1 podman[232796]: 2025-11-24 09:50:33.334918316 +0000 UTC m=+0.067293368 container health_status 6794d9d2f16f865643977633ea2bfec0506af6c4f15262c278b71a4c0754ea0b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_metadata_agent)
Nov 24 09:50:33 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Nov 24 09:50:33 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 09:50:33 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Nov 24 09:50:33 compute-1 ceph-mon[80009]: log_channel(audit) log [INF] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 09:50:33 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 24 09:50:33 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 09:50:33 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:50:33 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:50:33 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:50:33.632 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:50:34 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:50:34 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:50:34 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:50:34.208 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:50:34 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Nov 24 09:50:34 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Nov 24 09:50:34 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 09:50:34 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 09:50:34 compute-1 ceph-mon[80009]: pgmap v700: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 185 B/s rd, 0 op/s
Nov 24 09:50:34 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:50:34 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:50:34 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 09:50:34 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 09:50:34 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 09:50:34 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:50:35 compute-1 ceph-mon[80009]: Health check failed: 1 failed cephadm daemon(s) (CEPHADM_FAILED_DAEMON)
Nov 24 09:50:35 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:50:35 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:50:35 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:50:35.634 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:50:35 compute-1 systemd[1]: ceph-84a084c3-61a7-5de7-8207-1f88efa59a64@nfs.cephfs.0.0.compute-1.vvoanr.service: Scheduled restart job, restart counter is at 14.
Nov 24 09:50:35 compute-1 systemd[1]: Stopped Ceph nfs.cephfs.0.0.compute-1.vvoanr for 84a084c3-61a7-5de7-8207-1f88efa59a64.
Nov 24 09:50:35 compute-1 systemd[1]: ceph-84a084c3-61a7-5de7-8207-1f88efa59a64@nfs.cephfs.0.0.compute-1.vvoanr.service: Consumed 1.396s CPU time.
Nov 24 09:50:35 compute-1 systemd[1]: Starting Ceph nfs.cephfs.0.0.compute-1.vvoanr for 84a084c3-61a7-5de7-8207-1f88efa59a64...
Nov 24 09:50:35 compute-1 podman[232866]: 2025-11-24 09:50:35.947349724 +0000 UTC m=+0.041367493 container create 1503b1868ffcd35020bab5465b223d2cec50eac84dbca532a4d028d35a74126e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 09:50:35 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9523fca06ca10bbbcc0c351e718a2551cbafcef26024bd43b47a36575b91d90d/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Nov 24 09:50:35 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9523fca06ca10bbbcc0c351e718a2551cbafcef26024bd43b47a36575b91d90d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 09:50:35 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9523fca06ca10bbbcc0c351e718a2551cbafcef26024bd43b47a36575b91d90d/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 09:50:35 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9523fca06ca10bbbcc0c351e718a2551cbafcef26024bd43b47a36575b91d90d/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.0.0.compute-1.vvoanr-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 09:50:36 compute-1 podman[232866]: 2025-11-24 09:50:36.00230481 +0000 UTC m=+0.096322609 container init 1503b1868ffcd35020bab5465b223d2cec50eac84dbca532a4d028d35a74126e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 09:50:36 compute-1 podman[232866]: 2025-11-24 09:50:36.010228663 +0000 UTC m=+0.104246432 container start 1503b1868ffcd35020bab5465b223d2cec50eac84dbca532a4d028d35a74126e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 09:50:36 compute-1 bash[232866]: 1503b1868ffcd35020bab5465b223d2cec50eac84dbca532a4d028d35a74126e
Nov 24 09:50:36 compute-1 podman[232866]: 2025-11-24 09:50:35.931442815 +0000 UTC m=+0.025460604 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 09:50:36 compute-1 systemd[1]: Started Ceph nfs.cephfs.0.0.compute-1.vvoanr for 84a084c3-61a7-5de7-8207-1f88efa59a64.
Nov 24 09:50:36 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[232881]: 24/11/2025 09:50:36 : epoch 69242a6c : compute-1 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Nov 24 09:50:36 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[232881]: 24/11/2025 09:50:36 : epoch 69242a6c : compute-1 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Nov 24 09:50:36 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[232881]: 24/11/2025 09:50:36 : epoch 69242a6c : compute-1 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Nov 24 09:50:36 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[232881]: 24/11/2025 09:50:36 : epoch 69242a6c : compute-1 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Nov 24 09:50:36 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[232881]: 24/11/2025 09:50:36 : epoch 69242a6c : compute-1 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Nov 24 09:50:36 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[232881]: 24/11/2025 09:50:36 : epoch 69242a6c : compute-1 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Nov 24 09:50:36 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[232881]: 24/11/2025 09:50:36 : epoch 69242a6c : compute-1 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Nov 24 09:50:36 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[232881]: 24/11/2025 09:50:36 : epoch 69242a6c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 09:50:36 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:50:36 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:50:36 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:50:36.211 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:50:36 compute-1 ceph-mon[80009]: pgmap v701: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 186 B/s rd, 0 op/s
Nov 24 09:50:37 compute-1 ceph-mon[80009]: pgmap v702: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 651 B/s rd, 93 B/s wr, 0 op/s
Nov 24 09:50:37 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 24 09:50:37 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 24 09:50:37 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:50:37 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:50:37 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:50:37.636 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:50:37 compute-1 sudo[232923]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 24 09:50:37 compute-1 sudo[232923]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:50:37 compute-1 sudo[232923]: pam_unix(sudo:session): session closed for user root
Nov 24 09:50:38 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:50:38 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:50:38 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:50:38.214 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:50:38 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:50:38 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:50:39 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:50:39 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:50:39 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:50:39 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:50:39.638 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:50:39 compute-1 ceph-mon[80009]: pgmap v703: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 651 B/s rd, 93 B/s wr, 0 op/s
Nov 24 09:50:40 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:50:40 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:50:40 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:50:40.216 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:50:41 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:50:41 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:50:41 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:50:41.641 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:50:42 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[232881]: 24/11/2025 09:50:42 : epoch 69242a6c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 09:50:42 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[232881]: 24/11/2025 09:50:42 : epoch 69242a6c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 09:50:42 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:50:42 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:50:42 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:50:42.219 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:50:42 compute-1 ceph-mon[80009]: pgmap v704: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 651 B/s rd, 93 B/s wr, 0 op/s
Nov 24 09:50:43 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:50:43 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:50:43 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:50:43.644 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:50:44 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:50:44 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:50:44 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:50:44.221 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:50:44 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:50:44 compute-1 ceph-mon[80009]: pgmap v705: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 651 B/s wr, 1 op/s
Nov 24 09:50:44 compute-1 sudo[232952]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 09:50:44 compute-1 sudo[232952]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:50:44 compute-1 sudo[232952]: pam_unix(sudo:session): session closed for user root
Nov 24 09:50:45 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/prometheus/health_history}] v 0)
Nov 24 09:50:45 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 09:50:45 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:50:45 compute-1 ceph-mon[80009]: pgmap v706: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 597 B/s wr, 1 op/s
Nov 24 09:50:45 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:50:45 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:50:45 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:50:45 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:50:45 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:50:45.646 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:50:46 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:50:46 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:50:46 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:50:46.224 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:50:47 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:50:47 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:50:47 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:50:47.649 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:50:48 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:50:48 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:50:48 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:50:48.226 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:50:48 compute-1 ceph-mon[80009]: pgmap v707: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 1023 B/s wr, 3 op/s
Nov 24 09:50:48 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[232881]: 24/11/2025 09:50:48 : epoch 69242a6c : compute-1 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Nov 24 09:50:48 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[232881]: 24/11/2025 09:50:48 : epoch 69242a6c : compute-1 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Nov 24 09:50:48 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[232881]: 24/11/2025 09:50:48 : epoch 69242a6c : compute-1 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Nov 24 09:50:48 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[232881]: 24/11/2025 09:50:48 : epoch 69242a6c : compute-1 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Nov 24 09:50:48 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[232881]: 24/11/2025 09:50:48 : epoch 69242a6c : compute-1 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Nov 24 09:50:48 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[232881]: 24/11/2025 09:50:48 : epoch 69242a6c : compute-1 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Nov 24 09:50:48 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[232881]: 24/11/2025 09:50:48 : epoch 69242a6c : compute-1 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Nov 24 09:50:48 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[232881]: 24/11/2025 09:50:48 : epoch 69242a6c : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 24 09:50:48 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[232881]: 24/11/2025 09:50:48 : epoch 69242a6c : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 24 09:50:48 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[232881]: 24/11/2025 09:50:48 : epoch 69242a6c : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 24 09:50:48 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[232881]: 24/11/2025 09:50:48 : epoch 69242a6c : compute-1 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Nov 24 09:50:48 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[232881]: 24/11/2025 09:50:48 : epoch 69242a6c : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 24 09:50:48 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[232881]: 24/11/2025 09:50:48 : epoch 69242a6c : compute-1 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Nov 24 09:50:48 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[232881]: 24/11/2025 09:50:48 : epoch 69242a6c : compute-1 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Nov 24 09:50:48 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[232881]: 24/11/2025 09:50:48 : epoch 69242a6c : compute-1 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Nov 24 09:50:48 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[232881]: 24/11/2025 09:50:48 : epoch 69242a6c : compute-1 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Nov 24 09:50:48 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[232881]: 24/11/2025 09:50:48 : epoch 69242a6c : compute-1 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Nov 24 09:50:48 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[232881]: 24/11/2025 09:50:48 : epoch 69242a6c : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Nov 24 09:50:48 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[232881]: 24/11/2025 09:50:48 : epoch 69242a6c : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Nov 24 09:50:48 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[232881]: 24/11/2025 09:50:48 : epoch 69242a6c : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Nov 24 09:50:48 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[232881]: 24/11/2025 09:50:48 : epoch 69242a6c : compute-1 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Nov 24 09:50:48 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[232881]: 24/11/2025 09:50:48 : epoch 69242a6c : compute-1 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Nov 24 09:50:48 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[232881]: 24/11/2025 09:50:48 : epoch 69242a6c : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Nov 24 09:50:48 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[232881]: 24/11/2025 09:50:48 : epoch 69242a6c : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Nov 24 09:50:48 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[232881]: 24/11/2025 09:50:48 : epoch 69242a6c : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Nov 24 09:50:48 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[232881]: 24/11/2025 09:50:48 : epoch 69242a6c : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Nov 24 09:50:48 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[232881]: 24/11/2025 09:50:48 : epoch 69242a6c : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Nov 24 09:50:48 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[232881]: 24/11/2025 09:50:48 : epoch 69242a6c : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f89dc000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:50:49 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:50:49 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:50:49 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:50:49 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:50:49.651 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:50:50 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[232881]: 24/11/2025 09:50:50 : epoch 69242a6c : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f89bc000da0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:50:50 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[232881]: 24/11/2025 09:50:50 : epoch 69242a6c : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f89b0000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:50:50 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:50:50 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:50:50 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:50:50.228 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:50:50 compute-1 ceph-mon[80009]: pgmap v708: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 938 B/s wr, 2 op/s
Nov 24 09:50:50 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[232881]: 24/11/2025 09:50:50 : epoch 69242a6c : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f89ac000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:50:51 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:50:51 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:50:51 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:50:51.653 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:50:52 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-haproxy-nfs-cephfs-compute-1-rsdpvy[85975]: [WARNING] 327/095052 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Nov 24 09:50:52 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[232881]: 24/11/2025 09:50:52 : epoch 69242a6c : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f89c80014c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:50:52 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[232881]: 24/11/2025 09:50:52 : epoch 69242a6c : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f89bc001ab0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:50:52 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:50:52 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:50:52 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:50:52.231 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:50:52 compute-1 ceph-mon[80009]: pgmap v709: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 938 B/s wr, 2 op/s
Nov 24 09:50:52 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[232881]: 24/11/2025 09:50:52 : epoch 69242a6c : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f89c80014c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:50:53 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:50:53 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:50:53 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:50:53.655 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:50:54 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[232881]: 24/11/2025 09:50:54 : epoch 69242a6c : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f89b00016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:50:54 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[232881]: 24/11/2025 09:50:54 : epoch 69242a6c : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f89ac0016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:50:54 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:50:54 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:50:54 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:50:54.234 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:50:54 compute-1 podman[232997]: 2025-11-24 09:50:54.348442145 +0000 UTC m=+0.086589741 container health_status 16798e42088c21440960ac0ba4f86339f9edab5c078277a1dcf4fb01c220329c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=multipathd, org.label-schema.build-date=20251118, tcib_managed=true, container_name=multipathd)
Nov 24 09:50:54 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:50:54 compute-1 ceph-mon[80009]: pgmap v710: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 938 B/s wr, 3 op/s
Nov 24 09:50:54 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[232881]: 24/11/2025 09:50:54 : epoch 69242a6c : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f89bc001ab0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:50:55 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:50:55 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:50:55 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:50:55.657 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:50:56 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[232881]: 24/11/2025 09:50:56 : epoch 69242a6c : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f89c8002460 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:50:56 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[232881]: 24/11/2025 09:50:56 : epoch 69242a6c : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f89b00016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:50:56 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:50:56 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:50:56 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:50:56.236 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:50:56 compute-1 podman[233019]: 2025-11-24 09:50:56.326642026 +0000 UTC m=+0.071054411 container health_status c4a7fece2a8abbd773f184380ebf0443df5298e1c3e7a580495db5c46749acd2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 24 09:50:56 compute-1 ceph-mon[80009]: pgmap v711: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Nov 24 09:50:56 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[232881]: 24/11/2025 09:50:56 : epoch 69242a6c : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f89ac001fc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:50:57 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:50:57.358 142336 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=2, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '86:13:51', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '4e:f0:a8:6f:5e:1b'}, ipsec=False) old=SB_Global(nb_cfg=1) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 24 09:50:57 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:50:57.359 142336 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 24 09:50:57 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:50:57.359 142336 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=803b139a-7fca-4549-8597-645cf677225d, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '2'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 24 09:50:57 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:50:57 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:50:57 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:50:57.659 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:50:58 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[232881]: 24/11/2025 09:50:58 : epoch 69242a6c : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f89bc0027c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:50:58 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[232881]: 24/11/2025 09:50:58 : epoch 69242a6c : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f89c8002460 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:50:58 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:50:58 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:50:58 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:50:58.238 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:50:58 compute-1 ceph-mon[80009]: pgmap v712: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 426 B/s wr, 2 op/s
Nov 24 09:50:58 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[232881]: 24/11/2025 09:50:58 : epoch 69242a6c : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f89bc0027c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:50:59 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:50:59 compute-1 ceph-mon[80009]: pgmap v713: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Nov 24 09:50:59 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:50:59 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:50:59 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:50:59.661 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:51:00 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[232881]: 24/11/2025 09:51:00 : epoch 69242a6c : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f89b00016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:51:00 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[232881]: 24/11/2025 09:51:00 : epoch 69242a6c : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f89ac001fc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:51:00 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:51:00 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:51:00 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:51:00.242 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:51:00 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 09:51:00 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:51:00 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:51:00 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[232881]: 24/11/2025 09:51:00 : epoch 69242a6c : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f89c8002460 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:51:00 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Nov 24 09:51:00 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1960026280' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 24 09:51:00 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Nov 24 09:51:00 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1960026280' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 24 09:51:01 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:51:01 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:51:01 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:51:01.663 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:51:01 compute-1 ceph-mon[80009]: from='client.? 192.168.122.10:0/1960026280' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 24 09:51:01 compute-1 ceph-mon[80009]: from='client.? 192.168.122.10:0/1960026280' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 24 09:51:01 compute-1 ceph-mon[80009]: pgmap v714: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Nov 24 09:51:02 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[232881]: 24/11/2025 09:51:02 : epoch 69242a6c : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f89bc0027c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:51:02 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[232881]: 24/11/2025 09:51:02 : epoch 69242a6c : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f89b0002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:51:02 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:51:02 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:51:02 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:51:02.245 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:51:02 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[232881]: 24/11/2025 09:51:02 : epoch 69242a6c : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f89ac002160 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:51:03 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:51:03 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:51:03 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:51:03.665 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:51:04 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[232881]: 24/11/2025 09:51:04 : epoch 69242a6c : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f89c8003860 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:51:04 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[232881]: 24/11/2025 09:51:04 : epoch 69242a6c : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f89bc0027c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:51:04 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:51:04 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:51:04 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:51:04.248 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:51:04 compute-1 ceph-mon[80009]: pgmap v715: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Nov 24 09:51:04 compute-1 podman[233049]: 2025-11-24 09:51:04.339648513 +0000 UTC m=+0.074296430 container health_status 6794d9d2f16f865643977633ea2bfec0506af6c4f15262c278b71a4c0754ea0b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118)
Nov 24 09:51:04 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:51:04 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[232881]: 24/11/2025 09:51:04 : epoch 69242a6c : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f89b0002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:51:04 compute-1 sudo[233068]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 09:51:04 compute-1 sudo[233068]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:51:05 compute-1 sudo[233068]: pam_unix(sudo:session): session closed for user root
Nov 24 09:51:05 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:51:05 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:51:05 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:51:05.667 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:51:06 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[232881]: 24/11/2025 09:51:06 : epoch 69242a6c : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f89ac0032f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:51:06 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[232881]: 24/11/2025 09:51:06 : epoch 69242a6c : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f89c8003860 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:51:06 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:51:06 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:51:06 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:51:06.250 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:51:06 compute-1 ceph-mon[80009]: pgmap v716: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:51:06 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[232881]: 24/11/2025 09:51:06 : epoch 69242a6c : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f89bc0027c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:51:07 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:51:07 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:51:07 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:51:07.670 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:51:08 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[232881]: 24/11/2025 09:51:08 : epoch 69242a6c : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f89bc0027c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:51:08 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[232881]: 24/11/2025 09:51:08 : epoch 69242a6c : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f89bc0027c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:51:08 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:51:08 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:51:08 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:51:08.253 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:51:08 compute-1 ceph-mon[80009]: pgmap v717: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Nov 24 09:51:08 compute-1 nova_compute[230010]: 2025-11-24 09:51:08.765 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 09:51:08 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[232881]: 24/11/2025 09:51:08 : epoch 69242a6c : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f89c8003860 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:51:09 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:51:09 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:51:09 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:51:09 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:51:09.672 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:51:10 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[232881]: 24/11/2025 09:51:10 : epoch 69242a6c : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f89ac0032f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:51:10 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[232881]: 24/11/2025 09:51:10 : epoch 69242a6c : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f89bc0027c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:51:10 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:51:10 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:51:10 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:51:10.256 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:51:10 compute-1 ceph-mon[80009]: pgmap v718: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:51:10 compute-1 nova_compute[230010]: 2025-11-24 09:51:10.764 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 09:51:10 compute-1 nova_compute[230010]: 2025-11-24 09:51:10.765 230014 DEBUG nova.compute.manager [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 24 09:51:10 compute-1 nova_compute[230010]: 2025-11-24 09:51:10.765 230014 DEBUG nova.compute.manager [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 24 09:51:10 compute-1 nova_compute[230010]: 2025-11-24 09:51:10.776 230014 DEBUG nova.compute.manager [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 24 09:51:10 compute-1 nova_compute[230010]: 2025-11-24 09:51:10.777 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 09:51:10 compute-1 nova_compute[230010]: 2025-11-24 09:51:10.777 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 09:51:10 compute-1 nova_compute[230010]: 2025-11-24 09:51:10.777 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 09:51:10 compute-1 nova_compute[230010]: 2025-11-24 09:51:10.777 230014 DEBUG nova.compute.manager [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
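
The periodic-task lines above show the usual gating pattern: each ComputeManager task runs on the oslo.service loop, but a task exits early when its config disables it, as _reclaim_queued_deletes does here for CONF.reclaim_instance_interval <= 0. A minimal sketch of that gate (the class and attribute below are illustrative stand-ins, not nova's code):

    # Gating pattern suggested by the _reclaim_queued_deletes line above:
    # a periodic task that no-ops when its interval is disabled.
    class ComputeManagerSketch:
        def __init__(self, reclaim_instance_interval):
            self.reclaim_instance_interval = reclaim_instance_interval

        def reclaim_queued_deletes(self):
            if self.reclaim_instance_interval <= 0:
                print('CONF.reclaim_instance_interval <= 0, skipping...')
                return
            # ... reclaim SOFT_DELETED instances older than the interval ...

    ComputeManagerSketch(0).reclaim_queued_deletes()
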
Nov 24 09:51:10 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[232881]: 24/11/2025 09:51:10 : epoch 69242a6c : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f89b0003820 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:51:11 compute-1 ceph-mon[80009]: from='client.? 192.168.122.100:0/233033597' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 09:51:11 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:51:11 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:51:11 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:51:11.674 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:51:11 compute-1 nova_compute[230010]: 2025-11-24 09:51:11.765 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 09:51:11 compute-1 nova_compute[230010]: 2025-11-24 09:51:11.765 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 09:51:11 compute-1 nova_compute[230010]: 2025-11-24 09:51:11.789 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 09:51:11 compute-1 nova_compute[230010]: 2025-11-24 09:51:11.789 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 09:51:11 compute-1 nova_compute[230010]: 2025-11-24 09:51:11.789 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 09:51:11 compute-1 nova_compute[230010]: 2025-11-24 09:51:11.789 230014 DEBUG nova.compute.resource_tracker [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 24 09:51:11 compute-1 nova_compute[230010]: 2025-11-24 09:51:11.789 230014 DEBUG oslo_concurrency.processutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 09:51:11 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 24 09:51:11 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2281961602' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 09:51:12 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[232881]: 24/11/2025 09:51:12 : epoch 69242a6c : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f89c8003860 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:51:12 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[232881]: 24/11/2025 09:51:12 : epoch 69242a6c : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f89ac0032f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:51:12 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 24 09:51:12 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/423930642' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 09:51:12 compute-1 nova_compute[230010]: 2025-11-24 09:51:12.225 230014 DEBUG oslo_concurrency.processutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.435s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
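
The resource tracker above shells out to the exact command in these lines to size the Ceph-backed disk pool, then parses the JSON result. A sketch of the same call, assuming a reachable cluster and the client.openstack keyring referenced by ceph.conf; total_bytes/total_avail_bytes are standard 'ceph df' JSON fields:

    import json
    import subprocess

    # Same command the resource tracker runs in the lines above.
    out = subprocess.check_output(
        ['ceph', 'df', '--format=json',
         '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf'])
    stats = json.loads(out)['stats']
    print('free GiB:', stats['total_avail_bytes'] / 2**30,
          'of', stats['total_bytes'] / 2**30)
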
Nov 24 09:51:12 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:51:12 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:51:12 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:51:12.259 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:51:12 compute-1 nova_compute[230010]: 2025-11-24 09:51:12.368 230014 WARNING nova.virt.libvirt.driver [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 24 09:51:12 compute-1 nova_compute[230010]: 2025-11-24 09:51:12.369 230014 DEBUG nova.compute.resource_tracker [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=5274MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 24 09:51:12 compute-1 nova_compute[230010]: 2025-11-24 09:51:12.369 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 09:51:12 compute-1 nova_compute[230010]: 2025-11-24 09:51:12.369 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 09:51:12 compute-1 ceph-mon[80009]: pgmap v719: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:51:12 compute-1 ceph-mon[80009]: from='client.? 192.168.122.100:0/2281961602' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 09:51:12 compute-1 ceph-mon[80009]: from='client.? 192.168.122.101:0/423930642' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 09:51:12 compute-1 nova_compute[230010]: 2025-11-24 09:51:12.425 230014 DEBUG nova.compute.resource_tracker [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 24 09:51:12 compute-1 nova_compute[230010]: 2025-11-24 09:51:12.426 230014 DEBUG nova.compute.resource_tracker [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 24 09:51:12 compute-1 nova_compute[230010]: 2025-11-24 09:51:12.457 230014 DEBUG oslo_concurrency.processutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 09:51:12 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 24 09:51:12 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4230280917' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 09:51:12 compute-1 nova_compute[230010]: 2025-11-24 09:51:12.897 230014 DEBUG oslo_concurrency.processutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.440s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 09:51:12 compute-1 nova_compute[230010]: 2025-11-24 09:51:12.902 230014 DEBUG nova.compute.provider_tree [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Inventory has not changed in ProviderTree for provider: 1b7b0f22-dba8-42a8-9de3-763c9152946e update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 24 09:51:12 compute-1 nova_compute[230010]: 2025-11-24 09:51:12.914 230014 DEBUG nova.scheduler.client.report [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Inventory has not changed for provider 1b7b0f22-dba8-42a8-9de3-763c9152946e based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 24 09:51:12 compute-1 nova_compute[230010]: 2025-11-24 09:51:12.915 230014 DEBUG nova.compute.resource_tracker [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 24 09:51:12 compute-1 nova_compute[230010]: 2025-11-24 09:51:12.916 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.546s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
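
The inventory reported above yields the effective schedulable capacity via the standard placement formula (total - reserved) * allocation_ratio: MEMORY_MB (7680 - 512) * 1.0 = 7168, VCPU (8 - 0) * 4.0 = 32, DISK_GB (59 - 0) * 0.9 = 53.1. Worked out (inventory abridged from the line above):

    # Effective capacity per resource class, from the inventory data in
    # the "Inventory has not changed" line above.
    inventory = {
        'MEMORY_MB': {'total': 7680, 'reserved': 512, 'allocation_ratio': 1.0},
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'DISK_GB':   {'total': 59,   'reserved': 0,   'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        cap = (inv['total'] - inv['reserved']) * inv['allocation_ratio']
        print(rc, cap)  # MEMORY_MB 7168.0, VCPU 32.0, DISK_GB 53.1
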
Nov 24 09:51:12 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[232881]: 24/11/2025 09:51:12 : epoch 69242a6c : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f89bc0027c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:51:13 compute-1 ceph-mon[80009]: from='client.? 192.168.122.101:0/4230280917' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 09:51:13 compute-1 ceph-mon[80009]: from='client.? 192.168.122.102:0/2994754606' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 09:51:13 compute-1 ceph-mon[80009]: pgmap v720: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Nov 24 09:51:13 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:51:13 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:51:13 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:51:13.677 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:51:13 compute-1 nova_compute[230010]: 2025-11-24 09:51:13.910 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 09:51:13 compute-1 nova_compute[230010]: 2025-11-24 09:51:13.910 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 09:51:13 compute-1 nova_compute[230010]: 2025-11-24 09:51:13.928 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 09:51:14 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[232881]: 24/11/2025 09:51:14 : epoch 69242a6c : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f89b0003820 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:51:14 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[232881]: 24/11/2025 09:51:14 : epoch 69242a6c : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f89c8003860 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:51:14 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:51:14 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:51:14 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:51:14.263 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:51:14 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:51:14 compute-1 ceph-mon[80009]: from='client.? 192.168.122.102:0/1692086803' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 09:51:14 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[232881]: 24/11/2025 09:51:14 : epoch 69242a6c : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f89ac0032f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:51:15 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 09:51:15 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:51:15 compute-1 ceph-mon[80009]: pgmap v721: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:51:15 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:51:15 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:51:15 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:51:15 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:51:15.679 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:51:16 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[232881]: 24/11/2025 09:51:16 : epoch 69242a6c : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f89bc0027c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:51:16 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[232881]: 24/11/2025 09:51:16 : epoch 69242a6c : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f89b0003820 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:51:16 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:51:16 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:51:16 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:51:16.266 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:51:16 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[232881]: 24/11/2025 09:51:16 : epoch 69242a6c : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f89c8003860 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:51:17 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:51:17 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:51:17 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:51:17.682 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:51:18 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[232881]: 24/11/2025 09:51:18 : epoch 69242a6c : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f89ac0032f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:51:18 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[232881]: 24/11/2025 09:51:18 : epoch 69242a6c : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f89bc0027c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:51:18 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:51:18 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:51:18 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:51:18.270 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:51:18 compute-1 ceph-mon[80009]: pgmap v722: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Nov 24 09:51:18 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[232881]: 24/11/2025 09:51:18 : epoch 69242a6c : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f89b0003820 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:51:19 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:51:19 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:51:19 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:51:19 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:51:19.683 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:51:20 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[232881]: 24/11/2025 09:51:20 : epoch 69242a6c : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f89c8003860 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:51:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:51:20.051 142336 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 09:51:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:51:20.052 142336 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 09:51:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:51:20.052 142336 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
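
The Acquiring/acquired/released triple above, with waited/held timings, is what oslo.concurrency logs at DEBUG around a synchronized critical section. A minimal sketch of the pattern ProcessMonitor._check_child_processes uses, assuming the standard lockutils API:

    from oslo_concurrency import lockutils

    # The Acquiring/acquired/released DEBUG lines above are emitted by
    # this decorator's wrapper when debug logging is enabled.
    @lockutils.synchronized('_check_child_processes')
    def check_child_processes():
        pass  # critical section: e.g. respawn any dead child processes

    check_child_processes()
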
Nov 24 09:51:20 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[232881]: 24/11/2025 09:51:20 : epoch 69242a6c : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f89ac0032f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:51:20 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:51:20 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:51:20 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:51:20.272 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:51:20 compute-1 ceph-mon[80009]: pgmap v723: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:51:20 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[232881]: 24/11/2025 09:51:20 : epoch 69242a6c : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f89d4001110 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:51:21 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:51:21 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:51:21 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:51:21.686 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:51:22 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[232881]: 24/11/2025 09:51:22 : epoch 69242a6c : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f89b0003820 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:51:22 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[232881]: 24/11/2025 09:51:22 : epoch 69242a6c : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f89c8003860 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:51:22 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:51:22 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:51:22 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:51:22.274 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:51:22 compute-1 ceph-mon[80009]: pgmap v724: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:51:22 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[232881]: 24/11/2025 09:51:22 : epoch 69242a6c : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f89ac0032f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:51:23 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:51:23 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:51:23 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:51:23.688 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:51:24 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[232881]: 24/11/2025 09:51:24 : epoch 69242a6c : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f89d4001110 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:51:24 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[232881]: 24/11/2025 09:51:24 : epoch 69242a6c : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f89b0003820 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:51:24 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:51:24 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:51:24 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:51:24.276 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:51:24 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:51:24 compute-1 ceph-mon[80009]: pgmap v725: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Nov 24 09:51:24 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[232881]: 24/11/2025 09:51:24 : epoch 69242a6c : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f89c8003860 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:51:25 compute-1 sudo[233149]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 09:51:25 compute-1 sudo[233149]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:51:25 compute-1 sudo[233149]: pam_unix(sudo:session): session closed for user root
Nov 24 09:51:25 compute-1 podman[233173]: 2025-11-24 09:51:25.141798692 +0000 UTC m=+0.050523358 container health_status 16798e42088c21440960ac0ba4f86339f9edab5c078277a1dcf4fb01c220329c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 24 09:51:25 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:51:25 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:51:25 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:51:25.691 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:51:26 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[232881]: 24/11/2025 09:51:26 : epoch 69242a6c : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f89ac0032f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:51:26 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[232881]: 24/11/2025 09:51:26 : epoch 69242a6c : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f89d40022a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:51:26 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:51:26 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:51:26 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:51:26.278 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:51:26 compute-1 ceph-mon[80009]: pgmap v726: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:51:26 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[232881]: 24/11/2025 09:51:26 : epoch 69242a6c : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f89b0003820 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:51:27 compute-1 podman[233196]: 2025-11-24 09:51:27.337009987 +0000 UTC m=+0.077624162 container health_status c4a7fece2a8abbd773f184380ebf0443df5298e1c3e7a580495db5c46749acd2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 24 09:51:27 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:51:27 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:51:27 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:51:27.692 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:51:28 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[232881]: 24/11/2025 09:51:28 : epoch 69242a6c : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f89c8003860 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:51:28 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[232881]: 24/11/2025 09:51:28 : epoch 69242a6c : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f89ac0032f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:51:28 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:51:28 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:51:28 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:51:28.280 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:51:28 compute-1 ceph-mon[80009]: pgmap v727: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Nov 24 09:51:28 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[232881]: 24/11/2025 09:51:28 : epoch 69242a6c : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f89d40022a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:51:29 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:51:29 compute-1 ceph-mon[80009]: pgmap v728: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:51:29 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:51:29 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:51:29 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:51:29.695 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:51:30 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[232881]: 24/11/2025 09:51:30 : epoch 69242a6c : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f89b0003820 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:51:30 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[232881]: 24/11/2025 09:51:30 : epoch 69242a6c : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f89c8003860 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:51:30 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:51:30 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:51:30 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:51:30.282 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:51:30 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 09:51:30 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:51:30 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:51:30 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[232881]: 24/11/2025 09:51:30 : epoch 69242a6c : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f89ac0032f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:51:31 compute-1 ceph-mon[80009]: pgmap v729: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:51:31 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:51:31 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:51:31 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:51:31.696 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:51:32 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[232881]: 24/11/2025 09:51:32 : epoch 69242a6c : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f89d40022a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:51:32 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[232881]: 24/11/2025 09:51:32 : epoch 69242a6c : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f89b0003820 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:51:32 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:51:32 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:51:32 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:51:32.285 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:51:32 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[232881]: 24/11/2025 09:51:32 : epoch 69242a6c : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f89c8003860 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:51:33 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:51:33 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:51:33 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:51:33.699 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:51:34 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[232881]: 24/11/2025 09:51:34 : epoch 69242a6c : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f89ac0032f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:51:34 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[232881]: 24/11/2025 09:51:34 : epoch 69242a6c : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f89d4003730 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:51:34 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:51:34 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:51:34 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:51:34.288 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:51:34 compute-1 ceph-mon[80009]: pgmap v730: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Nov 24 09:51:34 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:51:34 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[232881]: 24/11/2025 09:51:34 : epoch 69242a6c : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f89b0003820 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:51:35 compute-1 podman[233228]: 2025-11-24 09:51:35.327323468 +0000 UTC m=+0.062586133 container health_status 6794d9d2f16f865643977633ea2bfec0506af6c4f15262c278b71a4c0754ea0b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent)
Nov 24 09:51:35 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:51:35 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 09:51:35 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:51:35.701 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 09:51:36 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[232881]: 24/11/2025 09:51:36 : epoch 69242a6c : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f89c8003860 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:51:36 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[232881]: 24/11/2025 09:51:36 : epoch 69242a6c : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f89ac0032f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:51:36 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:51:36 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:51:36 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:51:36.289 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:51:36 compute-1 ceph-mon[80009]: pgmap v731: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:51:36 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[232881]: 24/11/2025 09:51:36 : epoch 69242a6c : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f89d4003730 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:51:37 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:51:37 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:51:37 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:51:37.704 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:51:37 compute-1 sudo[233248]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 09:51:37 compute-1 sudo[233248]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:51:37 compute-1 sudo[233248]: pam_unix(sudo:session): session closed for user root
Nov 24 09:51:38 compute-1 sudo[233274]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Nov 24 09:51:38 compute-1 sudo[233274]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:51:38 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[232881]: 24/11/2025 09:51:38 : epoch 69242a6c : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f89b0003820 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:51:38 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[232881]: 24/11/2025 09:51:38 : epoch 69242a6c : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f89c8004570 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:51:38 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:51:38 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:51:38 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:51:38.291 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:51:38 compute-1 ceph-mon[80009]: pgmap v732: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Nov 24 09:51:38 compute-1 sudo[233274]: pam_unix(sudo:session): session closed for user root
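
The gather-facts run above is how the cephadm mgr module inventories a host: it copies a cephadm binary under /var/lib/ceph/<fsid>/ and executes it with sudo, and the command prints host facts as JSON on stdout. A sketch of the same invocation, with the path copied verbatim from the sudo line; the 'hostname' key is an assumption about the output:

    import json
    import subprocess

    CEPHADM = ('/var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/'
               'cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36')
    out = subprocess.check_output(
        ['sudo', 'python3', CEPHADM, '--timeout', '895', 'gather-facts'])
    facts = json.loads(out)
    print(facts.get('hostname'))
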
Nov 24 09:51:38 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 24 09:51:38 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 09:51:38 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Nov 24 09:51:38 compute-1 ceph-mon[80009]: log_channel(audit) log [INF] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 09:51:38 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Nov 24 09:51:38 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Nov 24 09:51:38 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Nov 24 09:51:38 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 09:51:38 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Nov 24 09:51:38 compute-1 ceph-mon[80009]: log_channel(audit) log [INF] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 09:51:38 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 24 09:51:38 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 09:51:38 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[232881]: 24/11/2025 09:51:38 : epoch 69242a6c : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f89ac0032f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:51:39 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:51:39 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 09:51:39 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 09:51:39 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:51:39 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:51:39 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 09:51:39 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 09:51:39 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 09:51:39 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:51:39 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:51:39 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:51:39.707 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:51:40 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[232881]: 24/11/2025 09:51:40 : epoch 69242a6c : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f89d4003730 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:51:40 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[232881]: 24/11/2025 09:51:40 : epoch 69242a6c : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f89b0003820 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:51:40 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:51:40 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:51:40 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:51:40.294 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:51:40 compute-1 ceph-mon[80009]: pgmap v733: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 269 B/s rd, 0 op/s
Nov 24 09:51:40 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[232881]: 24/11/2025 09:51:40 : epoch 69242a6c : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f89c8004570 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:51:41 compute-1 ceph-mon[80009]: pgmap v734: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 269 B/s rd, 0 op/s
Nov 24 09:51:41 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:51:41 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:51:41 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:51:41.709 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:51:42 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[232881]: 24/11/2025 09:51:42 : epoch 69242a6c : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f89c8004570 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:51:42 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[232881]: 24/11/2025 09:51:42 : epoch 69242a6c : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f89d4004440 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:51:42 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:51:42 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:51:42 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:51:42.296 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:51:42 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 24 09:51:42 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 24 09:51:42 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[232881]: 24/11/2025 09:51:42 : epoch 69242a6c : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f89b0003820 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:51:42 compute-1 sudo[233332]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 24 09:51:42 compute-1 sudo[233332]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:51:42 compute-1 sudo[233332]: pam_unix(sudo:session): session closed for user root
Nov 24 09:51:43 compute-1 ceph-mon[80009]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #43. Immutable memtables: 0.
Nov 24 09:51:43 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:51:43.530884) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 24 09:51:43 compute-1 ceph-mon[80009]: rocksdb: [db/flush_job.cc:856] [default] [JOB 23] Flushing memtable with next log file: 43
Nov 24 09:51:43 compute-1 ceph-mon[80009]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763977903531212, "job": 23, "event": "flush_started", "num_memtables": 1, "num_entries": 1340, "num_deletes": 252, "total_data_size": 3199155, "memory_usage": 3247032, "flush_reason": "Manual Compaction"}
Nov 24 09:51:43 compute-1 ceph-mon[80009]: rocksdb: [db/flush_job.cc:885] [default] [JOB 23] Level-0 flush table #44: started
Nov 24 09:51:43 compute-1 ceph-mon[80009]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763977903542001, "cf_name": "default", "job": 23, "event": "table_file_creation", "file_number": 44, "file_size": 1342962, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 23188, "largest_seqno": 24523, "table_properties": {"data_size": 1338313, "index_size": 2045, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1541, "raw_key_size": 12168, "raw_average_key_size": 20, "raw_value_size": 1328214, "raw_average_value_size": 2251, "num_data_blocks": 88, "num_entries": 590, "num_filter_entries": 590, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763977801, "oldest_key_time": 1763977801, "file_creation_time": 1763977903, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "299c38d0-06ca-4074-b462-97cee3c14bc3", "db_session_id": "IKBI0BILOO7CZC90TSBP", "orig_file_number": 44, "seqno_to_time_mapping": "N/A"}}
Nov 24 09:51:43 compute-1 ceph-mon[80009]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 23] Flush lasted 11152 microseconds, and 5665 cpu microseconds.
Nov 24 09:51:43 compute-1 ceph-mon[80009]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 24 09:51:43 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:51:43.542046) [db/flush_job.cc:967] [default] [JOB 23] Level-0 flush table #44: 1342962 bytes OK
Nov 24 09:51:43 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:51:43.542065) [db/memtable_list.cc:519] [default] Level-0 commit table #44 started
Nov 24 09:51:43 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:51:43.543512) [db/memtable_list.cc:722] [default] Level-0 commit table #44: memtable #1 done
Nov 24 09:51:43 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:51:43.543532) EVENT_LOG_v1 {"time_micros": 1763977903543526, "job": 23, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 24 09:51:43 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:51:43.543551) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 24 09:51:43 compute-1 ceph-mon[80009]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 23] Try to delete WAL files size 3192751, prev total WAL file size 3192751, number of live WAL files 2.
Nov 24 09:51:43 compute-1 ceph-mon[80009]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000040.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 09:51:43 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:51:43.544601) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D67727374617400353034' seq:72057594037927935, type:22 .. '6D67727374617400373537' seq:0, type:0; will stop at (end)
Nov 24 09:51:43 compute-1 ceph-mon[80009]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 24] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 24 09:51:43 compute-1 ceph-mon[80009]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 23 Base level 0, inputs: [44(1311KB)], [42(14MB)]
Nov 24 09:51:43 compute-1 ceph-mon[80009]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763977903544647, "job": 24, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [44], "files_L6": [42], "score": -1, "input_data_size": 16283808, "oldest_snapshot_seqno": -1}
Nov 24 09:51:43 compute-1 ceph-mon[80009]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 24] Generated table #45: 5565 keys, 12889673 bytes, temperature: kUnknown
Nov 24 09:51:43 compute-1 ceph-mon[80009]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763977903616703, "cf_name": "default", "job": 24, "event": "table_file_creation", "file_number": 45, "file_size": 12889673, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12853364, "index_size": 21287, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13957, "raw_key_size": 140281, "raw_average_key_size": 25, "raw_value_size": 12753791, "raw_average_value_size": 2291, "num_data_blocks": 870, "num_entries": 5565, "num_filter_entries": 5565, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763976422, "oldest_key_time": 0, "file_creation_time": 1763977903, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "299c38d0-06ca-4074-b462-97cee3c14bc3", "db_session_id": "IKBI0BILOO7CZC90TSBP", "orig_file_number": 45, "seqno_to_time_mapping": "N/A"}}
Nov 24 09:51:43 compute-1 ceph-mon[80009]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 24 09:51:43 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:51:43.616974) [db/compaction/compaction_job.cc:1663] [default] [JOB 24] Compacted 1@0 + 1@6 files to L6 => 12889673 bytes
Nov 24 09:51:43 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:51:43.618170) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 225.7 rd, 178.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.3, 14.2 +0.0 blob) out(12.3 +0.0 blob), read-write-amplify(21.7) write-amplify(9.6) OK, records in: 6042, records dropped: 477 output_compression: NoCompression
Nov 24 09:51:43 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:51:43.618201) EVENT_LOG_v1 {"time_micros": 1763977903618188, "job": 24, "event": "compaction_finished", "compaction_time_micros": 72141, "compaction_time_cpu_micros": 25000, "output_level": 6, "num_output_files": 1, "total_output_size": 12889673, "num_input_records": 6042, "num_output_records": 5565, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 24 09:51:43 compute-1 ceph-mon[80009]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000044.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 09:51:43 compute-1 ceph-mon[80009]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763977903618588, "job": 24, "event": "table_file_deletion", "file_number": 44}
Nov 24 09:51:43 compute-1 ceph-mon[80009]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000042.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 09:51:43 compute-1 ceph-mon[80009]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763977903621209, "job": 24, "event": "table_file_deletion", "file_number": 42}
Nov 24 09:51:43 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:51:43.544495) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 09:51:43 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:51:43.621498) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 09:51:43 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:51:43.621504) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 09:51:43 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:51:43.621506) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 09:51:43 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:51:43.621508) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 09:51:43 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:51:43.621510) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 09:51:43 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:51:43 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:51:43 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:51:43.711 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:51:43 compute-1 ceph-mon[80009]: pgmap v735: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 359 B/s rd, 0 op/s
Nov 24 09:51:43 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:51:43 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:51:44 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[232881]: 24/11/2025 09:51:44 : epoch 69242a6c : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f89ac0032f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:51:44 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[232881]: 24/11/2025 09:51:44 : epoch 69242a6c : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f89c8004570 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:51:44 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:51:44 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:51:44 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:51:44.298 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:51:44 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:51:44 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[232881]: 24/11/2025 09:51:44 : epoch 69242a6c : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f89c8004570 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:51:45 compute-1 sudo[233358]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 09:51:45 compute-1 sudo[233358]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:51:45 compute-1 sudo[233358]: pam_unix(sudo:session): session closed for user root
Nov 24 09:51:45 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 09:51:45 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:51:45 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:51:45 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:51:45 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:51:45.714 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:51:45 compute-1 ceph-mon[80009]: pgmap v736: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 269 B/s rd, 0 op/s
Nov 24 09:51:45 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:51:46 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[232881]: 24/11/2025 09:51:46 : epoch 69242a6c : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f89b0003820 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:51:46 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[232881]: 24/11/2025 09:51:46 : epoch 69242a6c : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f89ac004390 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:51:46 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:51:46 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:51:46 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:51:46.300 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:51:46 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[232881]: 24/11/2025 09:51:46 : epoch 69242a6c : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f89ac004390 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:51:47 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:51:47 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:51:47 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:51:47.717 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:51:47 compute-1 ceph-mon[80009]: pgmap v737: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 449 B/s rd, 0 op/s
Nov 24 09:51:48 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[232881]: 24/11/2025 09:51:48 : epoch 69242a6c : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f89ac004390 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:51:48 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[232881]: 24/11/2025 09:51:48 : epoch 69242a6c : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f89c40008d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:51:48 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:51:48 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:51:48 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:51:48.302 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:51:48 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[232881]: 24/11/2025 09:51:48 : epoch 69242a6c : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f89ac004390 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:51:49 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:51:49 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:51:49 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:51:49 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:51:49.720 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:51:49 compute-1 ceph-mon[80009]: pgmap v738: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 269 B/s rd, 0 op/s
Nov 24 09:51:50 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[232881]: 24/11/2025 09:51:50 : epoch 69242a6c : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f89ac004390 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:51:50 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[232881]: 24/11/2025 09:51:50 : epoch 69242a6c : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f89ac004390 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:51:50 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:51:50 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 09:51:50 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:51:50.304 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 09:51:50 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[232881]: 24/11/2025 09:51:50 : epoch 69242a6c : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f89c4001a40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:51:51 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:51:51 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:51:51 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:51:51.723 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:51:52 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[232881]: 24/11/2025 09:51:52 : epoch 69242a6c : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f89c8004570 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:51:52 compute-1 ceph-mon[80009]: pgmap v739: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:51:52 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[232881]: 24/11/2025 09:51:52 : epoch 69242a6c : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f89d4004440 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:51:52 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:51:52 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:51:52 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:51:52.307 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:51:52 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[232881]: 24/11/2025 09:51:52 : epoch 69242a6c : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f89ac004390 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:51:53 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:51:53 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:51:53 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:51:53.727 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:51:54 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[232881]: 24/11/2025 09:51:54 : epoch 69242a6c : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f89ac004390 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:51:54 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[232881]: 24/11/2025 09:51:54 : epoch 69242a6c : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f89c8004570 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:51:54 compute-1 ceph-mon[80009]: pgmap v740: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Nov 24 09:51:54 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:51:54 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:51:54 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:51:54.310 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:51:54 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:51:54 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[232881]: 24/11/2025 09:51:54 : epoch 69242a6c : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f89d4004440 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:51:55 compute-1 podman[233390]: 2025-11-24 09:51:55.315152789 +0000 UTC m=+0.055182661 container health_status 16798e42088c21440960ac0ba4f86339f9edab5c078277a1dcf4fb01c220329c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=multipathd)
Nov 24 09:51:55 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:51:55 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:51:55 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:51:55.729 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:51:56 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[232881]: 24/11/2025 09:51:56 : epoch 69242a6c : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f89ac004390 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:51:56 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[232881]: 24/11/2025 09:51:56 : epoch 69242a6c : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f89c4002360 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:51:56 compute-1 ceph-mon[80009]: pgmap v741: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:51:56 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:51:56 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:51:56 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:51:56.313 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:51:56 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[232881]: 24/11/2025 09:51:56 : epoch 69242a6c : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f89c8004570 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:51:57 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:51:57 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:51:57 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:51:57.731 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:51:58 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[232881]: 24/11/2025 09:51:58 : epoch 69242a6c : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f89d4004440 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:51:58 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[232881]: 24/11/2025 09:51:58 : epoch 69242a6c : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f89ac004390 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:51:58 compute-1 ceph-mon[80009]: pgmap v742: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Nov 24 09:51:58 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:51:58 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 09:51:58 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:51:58.315 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 09:51:58 compute-1 podman[233413]: 2025-11-24 09:51:58.381354368 +0000 UTC m=+0.122864000 container health_status c4a7fece2a8abbd773f184380ebf0443df5298e1c3e7a580495db5c46749acd2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3)
Nov 24 09:51:58 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[232881]: 24/11/2025 09:51:58 : epoch 69242a6c : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f89ac004390 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:51:59 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:51:59 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:51:59 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:51:59 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:51:59.734 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:52:00 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[232881]: 24/11/2025 09:52:00 : epoch 69242a6c : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f89c8004570 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:52:00 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[232881]: 24/11/2025 09:52:00 : epoch 69242a6c : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f89d4004440 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:52:00 compute-1 ceph-mon[80009]: pgmap v743: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:52:00 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:52:00 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:52:00 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:52:00.318 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:52:00 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 09:52:00 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:52:00 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Nov 24 09:52:00 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/856740024' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 24 09:52:00 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Nov 24 09:52:00 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/856740024' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 24 09:52:00 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[232881]: 24/11/2025 09:52:00 : epoch 69242a6c : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f89ac004390 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:52:01 compute-1 anacron[29933]: Job `cron.monthly' started
Nov 24 09:52:01 compute-1 anacron[29933]: Job `cron.monthly' terminated
Nov 24 09:52:01 compute-1 anacron[29933]: Normal exit (3 jobs run)
Nov 24 09:52:01 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:52:01 compute-1 ceph-mon[80009]: from='client.? 192.168.122.10:0/856740024' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 24 09:52:01 compute-1 ceph-mon[80009]: from='client.? 192.168.122.10:0/856740024' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 24 09:52:01 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:52:01 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 09:52:01 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:52:01.736 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 09:52:02 compute-1 kernel: ganesha.nfsd[232992]: segfault at 50 ip 00007f8a883f532e sp 00007f8a48ff8210 error 4 in libntirpc.so.5.8[7f8a883da000+2c000] likely on CPU 7 (core 0, socket 7)
Nov 24 09:52:02 compute-1 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Nov 24 09:52:02 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[232881]: 24/11/2025 09:52:02 : epoch 69242a6c : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f89ac004390 fd 38 proxy ignored for local
Nov 24 09:52:02 compute-1 systemd[1]: Started Process Core Dump (PID 233443/UID 0).
Nov 24 09:52:02 compute-1 ceph-mon[80009]: pgmap v744: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:52:02 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:52:02 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:52:02 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:52:02.321 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:52:03 compute-1 systemd-coredump[233444]: Process 232885 (ganesha.nfsd) of user 0 dumped core.
                                                    
                                                    Stack trace of thread 53:
                                                    #0  0x00007f8a883f532e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)
                                                    ELF object binary architecture: AMD x86-64
Nov 24 09:52:03 compute-1 systemd[1]: systemd-coredump@14-233443-0.service: Deactivated successfully.
Nov 24 09:52:03 compute-1 systemd[1]: systemd-coredump@14-233443-0.service: Consumed 1.154s CPU time.
Nov 24 09:52:03 compute-1 podman[233449]: 2025-11-24 09:52:03.369130361 +0000 UTC m=+0.036435723 container died 1503b1868ffcd35020bab5465b223d2cec50eac84dbca532a4d028d35a74126e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, OSD_FLAVOR=default)
Nov 24 09:52:03 compute-1 systemd[1]: var-lib-containers-storage-overlay-9523fca06ca10bbbcc0c351e718a2551cbafcef26024bd43b47a36575b91d90d-merged.mount: Deactivated successfully.
Nov 24 09:52:03 compute-1 podman[233449]: 2025-11-24 09:52:03.428252628 +0000 UTC m=+0.095557940 container remove 1503b1868ffcd35020bab5465b223d2cec50eac84dbca532a4d028d35a74126e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 09:52:03 compute-1 systemd[1]: ceph-84a084c3-61a7-5de7-8207-1f88efa59a64@nfs.cephfs.0.0.compute-1.vvoanr.service: Main process exited, code=exited, status=139/n/a
Nov 24 09:52:03 compute-1 systemd[1]: ceph-84a084c3-61a7-5de7-8207-1f88efa59a64@nfs.cephfs.0.0.compute-1.vvoanr.service: Failed with result 'exit-code'.
Nov 24 09:52:03 compute-1 systemd[1]: ceph-84a084c3-61a7-5de7-8207-1f88efa59a64@nfs.cephfs.0.0.compute-1.vvoanr.service: Consumed 1.513s CPU time.
Nov 24 09:52:03 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:52:03 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 09:52:03 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:52:03.737 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 09:52:04 compute-1 ceph-mon[80009]: pgmap v745: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Nov 24 09:52:04 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:52:04 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:52:04 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:52:04.323 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:52:04 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:52:05 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-haproxy-nfs-cephfs-compute-1-rsdpvy[85975]: [WARNING] 327/095205 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Nov 24 09:52:05 compute-1 sudo[233492]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 09:52:05 compute-1 sudo[233492]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:52:05 compute-1 sudo[233492]: pam_unix(sudo:session): session closed for user root
Nov 24 09:52:05 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:52:05 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:52:05 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:52:05.739 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:52:06 compute-1 ceph-mon[80009]: pgmap v746: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:52:06 compute-1 podman[233518]: 2025-11-24 09:52:06.319587657 +0000 UTC m=+0.053238965 container health_status 6794d9d2f16f865643977633ea2bfec0506af6c4f15262c278b71a4c0754ea0b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Nov 24 09:52:06 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:52:06 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:52:06 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:52:06.326 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:52:07 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:52:07 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:52:07 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:52:07.741 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:52:08 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-haproxy-nfs-cephfs-compute-1-rsdpvy[85975]: [WARNING] 327/095208 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Nov 24 09:52:08 compute-1 ceph-mon[80009]: pgmap v747: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Nov 24 09:52:08 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:52:08 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:52:08 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:52:08.329 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:52:08 compute-1 nova_compute[230010]: 2025-11-24 09:52:08.765 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 09:52:08 compute-1 nova_compute[230010]: 2025-11-24 09:52:08.765 230014 DEBUG nova.compute.manager [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Nov 24 09:52:08 compute-1 nova_compute[230010]: 2025-11-24 09:52:08.786 230014 DEBUG nova.compute.manager [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Nov 24 09:52:08 compute-1 nova_compute[230010]: 2025-11-24 09:52:08.787 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 09:52:08 compute-1 nova_compute[230010]: 2025-11-24 09:52:08.787 230014 DEBUG nova.compute.manager [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Nov 24 09:52:08 compute-1 nova_compute[230010]: 2025-11-24 09:52:08.797 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 09:52:09 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:52:09 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:52:09 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:52:09 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:52:09.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:52:10 compute-1 ceph-mon[80009]: pgmap v748: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Nov 24 09:52:10 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:52:10 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:52:10 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:52:10.331 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:52:10 compute-1 nova_compute[230010]: 2025-11-24 09:52:10.868 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 09:52:11 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:52:11 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:52:11 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:52:11.744 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:52:11 compute-1 nova_compute[230010]: 2025-11-24 09:52:11.764 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 09:52:11 compute-1 nova_compute[230010]: 2025-11-24 09:52:11.765 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 09:52:11 compute-1 nova_compute[230010]: 2025-11-24 09:52:11.765 230014 DEBUG nova.compute.manager [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 24 09:52:11 compute-1 nova_compute[230010]: 2025-11-24 09:52:11.765 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 09:52:11 compute-1 nova_compute[230010]: 2025-11-24 09:52:11.804 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 09:52:11 compute-1 nova_compute[230010]: 2025-11-24 09:52:11.805 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 09:52:11 compute-1 nova_compute[230010]: 2025-11-24 09:52:11.805 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 09:52:11 compute-1 nova_compute[230010]: 2025-11-24 09:52:11.805 230014 DEBUG nova.compute.resource_tracker [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 24 09:52:11 compute-1 nova_compute[230010]: 2025-11-24 09:52:11.805 230014 DEBUG oslo_concurrency.processutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 09:52:12 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 24 09:52:12 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4225259137' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 09:52:12 compute-1 nova_compute[230010]: 2025-11-24 09:52:12.256 230014 DEBUG oslo_concurrency.processutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 09:52:12 compute-1 ceph-mon[80009]: pgmap v749: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Nov 24 09:52:12 compute-1 ceph-mon[80009]: from='client.? 192.168.122.101:0/4225259137' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 09:52:12 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:52:12 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 09:52:12 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:52:12.333 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 09:52:12 compute-1 nova_compute[230010]: 2025-11-24 09:52:12.395 230014 WARNING nova.virt.libvirt.driver [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 24 09:52:12 compute-1 nova_compute[230010]: 2025-11-24 09:52:12.397 230014 DEBUG nova.compute.resource_tracker [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=5284MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 24 09:52:12 compute-1 nova_compute[230010]: 2025-11-24 09:52:12.397 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 09:52:12 compute-1 nova_compute[230010]: 2025-11-24 09:52:12.398 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 09:52:12 compute-1 nova_compute[230010]: 2025-11-24 09:52:12.586 230014 DEBUG nova.compute.resource_tracker [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 24 09:52:12 compute-1 nova_compute[230010]: 2025-11-24 09:52:12.586 230014 DEBUG nova.compute.resource_tracker [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 24 09:52:12 compute-1 nova_compute[230010]: 2025-11-24 09:52:12.645 230014 DEBUG nova.scheduler.client.report [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Refreshing inventories for resource provider 1b7b0f22-dba8-42a8-9de3-763c9152946e _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Nov 24 09:52:12 compute-1 nova_compute[230010]: 2025-11-24 09:52:12.728 230014 DEBUG nova.scheduler.client.report [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Updating ProviderTree inventory for provider 1b7b0f22-dba8-42a8-9de3-763c9152946e from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Nov 24 09:52:12 compute-1 nova_compute[230010]: 2025-11-24 09:52:12.728 230014 DEBUG nova.compute.provider_tree [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Updating inventory in ProviderTree for provider 1b7b0f22-dba8-42a8-9de3-763c9152946e with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Nov 24 09:52:12 compute-1 nova_compute[230010]: 2025-11-24 09:52:12.746 230014 DEBUG nova.scheduler.client.report [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Refreshing aggregate associations for resource provider 1b7b0f22-dba8-42a8-9de3-763c9152946e, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Nov 24 09:52:12 compute-1 nova_compute[230010]: 2025-11-24 09:52:12.767 230014 DEBUG nova.scheduler.client.report [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Refreshing trait associations for resource provider 1b7b0f22-dba8-42a8-9de3-763c9152946e, traits: COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_F16C,HW_CPU_X86_AVX,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_SHA,COMPUTE_STORAGE_BUS_SATA,COMPUTE_RESCUE_BFV,HW_CPU_X86_ABM,HW_CPU_X86_BMI,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_AVX2,COMPUTE_STORAGE_BUS_USB,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_SSE41,COMPUTE_NODE,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_SSE4A,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_VOLUME_EXTEND,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_MMX,HW_CPU_X86_SSE,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_CLMUL,HW_CPU_X86_SSE2,HW_CPU_X86_AMD_SVM,HW_CPU_X86_SSE42,COMPUTE_ACCELERATORS,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_STORAGE_BUS_IDE,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_SVM,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_FMA3,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_BMI2,HW_CPU_X86_AESNI,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_TRUSTED_CERTS,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_STORAGE_BUS_FDC _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Nov 24 09:52:12 compute-1 nova_compute[230010]: 2025-11-24 09:52:12.782 230014 DEBUG oslo_concurrency.processutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 09:52:13 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 24 09:52:13 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/405630170' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 09:52:13 compute-1 nova_compute[230010]: 2025-11-24 09:52:13.190 230014 DEBUG oslo_concurrency.processutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.408s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 09:52:13 compute-1 nova_compute[230010]: 2025-11-24 09:52:13.195 230014 DEBUG nova.compute.provider_tree [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Inventory has not changed in ProviderTree for provider: 1b7b0f22-dba8-42a8-9de3-763c9152946e update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 24 09:52:13 compute-1 nova_compute[230010]: 2025-11-24 09:52:13.209 230014 DEBUG nova.scheduler.client.report [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Inventory has not changed for provider 1b7b0f22-dba8-42a8-9de3-763c9152946e based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 24 09:52:13 compute-1 nova_compute[230010]: 2025-11-24 09:52:13.211 230014 DEBUG nova.compute.resource_tracker [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 24 09:52:13 compute-1 nova_compute[230010]: 2025-11-24 09:52:13.211 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.813s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 09:52:13 compute-1 ceph-mon[80009]: from='client.? 192.168.122.101:0/405630170' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 09:52:13 compute-1 systemd[1]: ceph-84a084c3-61a7-5de7-8207-1f88efa59a64@nfs.cephfs.0.0.compute-1.vvoanr.service: Scheduled restart job, restart counter is at 15.
Nov 24 09:52:13 compute-1 systemd[1]: Stopped Ceph nfs.cephfs.0.0.compute-1.vvoanr for 84a084c3-61a7-5de7-8207-1f88efa59a64.
Nov 24 09:52:13 compute-1 systemd[1]: ceph-84a084c3-61a7-5de7-8207-1f88efa59a64@nfs.cephfs.0.0.compute-1.vvoanr.service: Consumed 1.513s CPU time.
Nov 24 09:52:13 compute-1 systemd[1]: ceph-84a084c3-61a7-5de7-8207-1f88efa59a64@nfs.cephfs.0.0.compute-1.vvoanr.service: Start request repeated too quickly.
Nov 24 09:52:13 compute-1 systemd[1]: ceph-84a084c3-61a7-5de7-8207-1f88efa59a64@nfs.cephfs.0.0.compute-1.vvoanr.service: Failed with result 'exit-code'.
Nov 24 09:52:13 compute-1 systemd[1]: Failed to start Ceph nfs.cephfs.0.0.compute-1.vvoanr for 84a084c3-61a7-5de7-8207-1f88efa59a64.
Nov 24 09:52:13 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:52:13 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:52:13 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:52:13.746 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:52:14 compute-1 nova_compute[230010]: 2025-11-24 09:52:14.211 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 09:52:14 compute-1 nova_compute[230010]: 2025-11-24 09:52:14.211 230014 DEBUG nova.compute.manager [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 24 09:52:14 compute-1 nova_compute[230010]: 2025-11-24 09:52:14.211 230014 DEBUG nova.compute.manager [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 24 09:52:14 compute-1 nova_compute[230010]: 2025-11-24 09:52:14.230 230014 DEBUG nova.compute.manager [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 24 09:52:14 compute-1 nova_compute[230010]: 2025-11-24 09:52:14.231 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 09:52:14 compute-1 nova_compute[230010]: 2025-11-24 09:52:14.231 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 09:52:14 compute-1 ceph-mon[80009]: pgmap v750: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Nov 24 09:52:14 compute-1 ceph-mon[80009]: from='client.? 192.168.122.100:0/1479336973' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 09:52:14 compute-1 ceph-mon[80009]: from='client.? 192.168.122.100:0/1366320252' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 09:52:14 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:52:14 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:52:14 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:52:14.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:52:14 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:52:14 compute-1 nova_compute[230010]: 2025-11-24 09:52:14.779 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 09:52:15 compute-1 ceph-mon[80009]: from='client.? 192.168.122.102:0/4217903816' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 09:52:15 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 09:52:15 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:52:15 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:52:15 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:52:15 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:52:15.749 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:52:15 compute-1 nova_compute[230010]: 2025-11-24 09:52:15.765 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 09:52:16 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:52:16 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:52:16 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:52:16.339 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:52:16 compute-1 ceph-mon[80009]: pgmap v751: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 85 B/s wr, 0 op/s
Nov 24 09:52:16 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:52:16 compute-1 ceph-mon[80009]: from='client.? 192.168.122.102:0/647738787' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 09:52:17 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:52:17 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:52:17 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:52:17.750 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:52:18 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:52:18 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:52:18 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:52:18.343 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:52:18 compute-1 ceph-mon[80009]: pgmap v752: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 853 B/s rd, 426 B/s wr, 1 op/s
Nov 24 09:52:18 compute-1 ceph-mon[80009]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #46. Immutable memtables: 0.
Nov 24 09:52:18 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:52:18.544311) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 24 09:52:18 compute-1 ceph-mon[80009]: rocksdb: [db/flush_job.cc:856] [default] [JOB 25] Flushing memtable with next log file: 46
Nov 24 09:52:18 compute-1 ceph-mon[80009]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763977938544375, "job": 25, "event": "flush_started", "num_memtables": 1, "num_entries": 578, "num_deletes": 255, "total_data_size": 918917, "memory_usage": 931112, "flush_reason": "Manual Compaction"}
Nov 24 09:52:18 compute-1 ceph-mon[80009]: rocksdb: [db/flush_job.cc:885] [default] [JOB 25] Level-0 flush table #47: started
Nov 24 09:52:18 compute-1 ceph-mon[80009]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763977938549167, "cf_name": "default", "job": 25, "event": "table_file_creation", "file_number": 47, "file_size": 604618, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 24528, "largest_seqno": 25101, "table_properties": {"data_size": 601700, "index_size": 890, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 965, "raw_key_size": 6559, "raw_average_key_size": 17, "raw_value_size": 595842, "raw_average_value_size": 1606, "num_data_blocks": 40, "num_entries": 371, "num_filter_entries": 371, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763977903, "oldest_key_time": 1763977903, "file_creation_time": 1763977938, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "299c38d0-06ca-4074-b462-97cee3c14bc3", "db_session_id": "IKBI0BILOO7CZC90TSBP", "orig_file_number": 47, "seqno_to_time_mapping": "N/A"}}
Nov 24 09:52:18 compute-1 ceph-mon[80009]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 25] Flush lasted 4885 microseconds, and 2534 cpu microseconds.
Nov 24 09:52:18 compute-1 ceph-mon[80009]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 24 09:52:18 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:52:18.549206) [db/flush_job.cc:967] [default] [JOB 25] Level-0 flush table #47: 604618 bytes OK
Nov 24 09:52:18 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:52:18.549222) [db/memtable_list.cc:519] [default] Level-0 commit table #47 started
Nov 24 09:52:18 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:52:18.550469) [db/memtable_list.cc:722] [default] Level-0 commit table #47: memtable #1 done
Nov 24 09:52:18 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:52:18.550484) EVENT_LOG_v1 {"time_micros": 1763977938550481, "job": 25, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 24 09:52:18 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:52:18.550509) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 24 09:52:18 compute-1 ceph-mon[80009]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 25] Try to delete WAL files size 915588, prev total WAL file size 915588, number of live WAL files 2.
Nov 24 09:52:18 compute-1 ceph-mon[80009]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000043.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 09:52:18 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:52:18.550955) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D00323531' seq:72057594037927935, type:22 .. '6C6F676D00353032' seq:0, type:0; will stop at (end)
Nov 24 09:52:18 compute-1 ceph-mon[80009]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 26] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 24 09:52:18 compute-1 ceph-mon[80009]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 25 Base level 0, inputs: [47(590KB)], [45(12MB)]
Nov 24 09:52:18 compute-1 ceph-mon[80009]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763977938550984, "job": 26, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [47], "files_L6": [45], "score": -1, "input_data_size": 13494291, "oldest_snapshot_seqno": -1}
Nov 24 09:52:18 compute-1 ceph-mon[80009]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 26] Generated table #48: 5418 keys, 13358487 bytes, temperature: kUnknown
Nov 24 09:52:18 compute-1 ceph-mon[80009]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763977938621612, "cf_name": "default", "job": 26, "event": "table_file_creation", "file_number": 48, "file_size": 13358487, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13322263, "index_size": 21586, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13573, "raw_key_size": 138413, "raw_average_key_size": 25, "raw_value_size": 13224294, "raw_average_value_size": 2440, "num_data_blocks": 879, "num_entries": 5418, "num_filter_entries": 5418, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763976422, "oldest_key_time": 0, "file_creation_time": 1763977938, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "299c38d0-06ca-4074-b462-97cee3c14bc3", "db_session_id": "IKBI0BILOO7CZC90TSBP", "orig_file_number": 48, "seqno_to_time_mapping": "N/A"}}
Nov 24 09:52:18 compute-1 ceph-mon[80009]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 24 09:52:18 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:52:18.621869) [db/compaction/compaction_job.cc:1663] [default] [JOB 26] Compacted 1@0 + 1@6 files to L6 => 13358487 bytes
Nov 24 09:52:18 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:52:18.623161) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 190.8 rd, 188.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.6, 12.3 +0.0 blob) out(12.7 +0.0 blob), read-write-amplify(44.4) write-amplify(22.1) OK, records in: 5936, records dropped: 518 output_compression: NoCompression
Nov 24 09:52:18 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:52:18.623179) EVENT_LOG_v1 {"time_micros": 1763977938623172, "job": 26, "event": "compaction_finished", "compaction_time_micros": 70714, "compaction_time_cpu_micros": 24236, "output_level": 6, "num_output_files": 1, "total_output_size": 13358487, "num_input_records": 5936, "num_output_records": 5418, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 24 09:52:18 compute-1 ceph-mon[80009]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000047.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 09:52:18 compute-1 ceph-mon[80009]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763977938623361, "job": 26, "event": "table_file_deletion", "file_number": 47}
Nov 24 09:52:18 compute-1 ceph-mon[80009]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000045.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 09:52:18 compute-1 ceph-mon[80009]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763977938625706, "job": 26, "event": "table_file_deletion", "file_number": 45}
Nov 24 09:52:18 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:52:18.550866) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 09:52:18 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:52:18.625820) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 09:52:18 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:52:18.625826) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 09:52:18 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:52:18.625828) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 09:52:18 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:52:18.625830) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 09:52:18 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:52:18.625832) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 09:52:19 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:52:19 compute-1 ceph-mon[80009]: pgmap v753: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 426 B/s wr, 1 op/s
Nov 24 09:52:19 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:52:19 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:52:19 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:52:19.753 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:52:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:52:20.053 142336 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 09:52:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:52:20.053 142336 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 09:52:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:52:20.053 142336 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 09:52:20 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:52:20 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 09:52:20 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:52:20.344 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 09:52:21 compute-1 ceph-mon[80009]: pgmap v754: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 426 B/s wr, 1 op/s
Nov 24 09:52:21 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:52:21 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:52:21 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:52:21.756 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:52:22 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:52:22 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:52:22 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:52:22.347 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:52:23 compute-1 ceph-mon[80009]: pgmap v755: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Nov 24 09:52:23 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:52:23 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:52:23 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:52:23.758 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:52:24 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:52:24 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:52:24 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:52:24.350 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:52:24 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:52:25 compute-1 sudo[233590]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 09:52:25 compute-1 sudo[233590]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:52:25 compute-1 sudo[233590]: pam_unix(sudo:session): session closed for user root
Nov 24 09:52:25 compute-1 podman[233614]: 2025-11-24 09:52:25.474148869 +0000 UTC m=+0.102051677 container health_status 16798e42088c21440960ac0ba4f86339f9edab5c078277a1dcf4fb01c220329c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, io.buildah.version=1.41.3)
Nov 24 09:52:25 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:52:25 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:52:25 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:52:25.761 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:52:25 compute-1 ceph-mon[80009]: pgmap v756: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 341 B/s wr, 1 op/s
Nov 24 09:52:26 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:52:26 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:52:26 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:52:26.352 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:52:27 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:52:27 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:52:27 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:52:27.764 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:52:27 compute-1 ceph-mon[80009]: pgmap v757: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 341 B/s wr, 1 op/s
Nov 24 09:52:28 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:52:28 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:52:28 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:52:28.355 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:52:29 compute-1 podman[233637]: 2025-11-24 09:52:29.342023754 +0000 UTC m=+0.082322383 container health_status c4a7fece2a8abbd773f184380ebf0443df5298e1c3e7a580495db5c46749acd2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20251118, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible)
Nov 24 09:52:29 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:52:29 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:52:29 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 09:52:29 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:52:29.766 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 09:52:29 compute-1 ceph-mon[80009]: pgmap v758: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Nov 24 09:52:30 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:52:30 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:52:30 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:52:30.358 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:52:30 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 09:52:30 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:52:30 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:52:31 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:52:31 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:52:31 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:52:31.768 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:52:31 compute-1 ceph-mon[80009]: pgmap v759: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Nov 24 09:52:32 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:52:32 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:52:32 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:52:32.360 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:52:33 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:52:33 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:52:33 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:52:33.771 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:52:33 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:52:33 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:52:33 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - - [24/Nov/2025:09:52:33.805 +0000] "GET /swift/info HTTP/1.1" 200 539 - "python-urllib3/1.26.5" - latency=0.001000024s
Nov 24 09:52:33 compute-1 ceph-mon[80009]: pgmap v760: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Nov 24 09:52:34 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:52:34 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:52:34 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:52:34.364 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:52:34 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:52:35 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:52:35 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:52:35 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:52:35.774 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:52:35 compute-1 ceph-mon[80009]: pgmap v761: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 0 op/s
Nov 24 09:52:36 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:52:36 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:52:36 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:52:36.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:52:37 compute-1 podman[233667]: 2025-11-24 09:52:37.318251786 +0000 UTC m=+0.049556202 container health_status 6794d9d2f16f865643977633ea2bfec0506af6c4f15262c278b71a4c0754ea0b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Nov 24 09:52:37 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:52:37 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:52:37 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:52:37.776 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:52:37 compute-1 ceph-mon[80009]: pgmap v762: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 0 op/s
Nov 24 09:52:38 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:52:38 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:52:38 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:52:38.371 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:52:38 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e151 e151: 3 total, 3 up, 3 in
Nov 24 09:52:39 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:52:39 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:52:39 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:52:39 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:52:39.779 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:52:39 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e152 e152: 3 total, 3 up, 3 in
Nov 24 09:52:39 compute-1 ceph-mon[80009]: pgmap v763: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 0 op/s
Nov 24 09:52:39 compute-1 ceph-mon[80009]: osdmap e151: 3 total, 3 up, 3 in
Nov 24 09:52:40 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:52:40 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:52:40 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:52:40.373 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:52:40 compute-1 ceph-mon[80009]: osdmap e152: 3 total, 3 up, 3 in
Nov 24 09:52:40 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e153 e153: 3 total, 3 up, 3 in
Nov 24 09:52:41 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:52:41 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:52:41 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:52:41.781 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:52:41 compute-1 ceph-mon[80009]: pgmap v766: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 383 B/s rd, 0 op/s
Nov 24 09:52:41 compute-1 ceph-mon[80009]: osdmap e153: 3 total, 3 up, 3 in
Nov 24 09:52:42 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:52:42 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:52:42 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:52:42.376 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:52:43 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e154 e154: 3 total, 3 up, 3 in
Nov 24 09:52:43 compute-1 sudo[233689]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 09:52:43 compute-1 sudo[233689]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:52:43 compute-1 sudo[233689]: pam_unix(sudo:session): session closed for user root
Nov 24 09:52:43 compute-1 sudo[233714]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Nov 24 09:52:43 compute-1 sudo[233714]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:52:43 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e155 e155: 3 total, 3 up, 3 in
Nov 24 09:52:43 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-haproxy-nfs-cephfs-compute-1-rsdpvy[85975]: [WARNING] 327/095243 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 0 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Nov 24 09:52:43 compute-1 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-haproxy-nfs-cephfs-compute-1-rsdpvy[85975]: [ALERT] 327/095243 (4) : backend 'backend' has no server available!
Nov 24 09:52:43 compute-1 sudo[233714]: pam_unix(sudo:session): session closed for user root
Nov 24 09:52:43 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:52:43 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:52:43 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:52:43.784 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:52:43 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 24 09:52:43 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 09:52:43 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Nov 24 09:52:43 compute-1 ceph-mon[80009]: log_channel(audit) log [INF] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 09:52:43 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Nov 24 09:52:43 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Nov 24 09:52:43 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Nov 24 09:52:43 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 09:52:43 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Nov 24 09:52:43 compute-1 ceph-mon[80009]: log_channel(audit) log [INF] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 09:52:43 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 24 09:52:43 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 09:52:44 compute-1 ceph-mon[80009]: pgmap v768: 353 pgs: 353 active+clean; 21 MiB data, 170 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 3.4 MiB/s wr, 32 op/s
Nov 24 09:52:44 compute-1 ceph-mon[80009]: osdmap e154: 3 total, 3 up, 3 in
Nov 24 09:52:44 compute-1 ceph-mon[80009]: osdmap e155: 3 total, 3 up, 3 in
Nov 24 09:52:44 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 09:52:44 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 09:52:44 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:52:44 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:52:44 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 09:52:44 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 09:52:44 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
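
Each handle_command/dispatch pair above is one monitor command issued by the cephadm mgr module ("config generate-minimal-conf", "auth get", "osd tree", config-key set). The same JSON command format can be sent from the librados Python binding; a sketch, assuming client.admin credentials are available on the host:

    import json
    import rados

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf', name='client.admin')
    cluster.connect()
    try:
        # Same command the mgr dispatched above, per the audit log.
        cmd = json.dumps({"prefix": "osd tree",
                          "states": ["destroyed"], "format": "json"})
        ret, outbuf, outs = cluster.mon_command(cmd, b'')
        if ret == 0:
            tree = json.loads(outbuf)   # parsed 'osd tree' JSON
    finally:
        cluster.shutdown()
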
Nov 24 09:52:44 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:52:44 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:52:44 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:52:44.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:52:44 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e155 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:52:45 compute-1 ceph-mon[80009]: pgmap v771: 353 pgs: 353 active+clean; 21 MiB data, 170 MiB used, 60 GiB / 60 GiB avail; 34 KiB/s rd, 5.2 MiB/s wr, 49 op/s
Nov 24 09:52:45 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 09:52:45 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:52:45 compute-1 sudo[233772]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 09:52:45 compute-1 sudo[233772]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:52:45 compute-1 sudo[233772]: pam_unix(sudo:session): session closed for user root
Nov 24 09:52:45 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:52:45 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:52:45 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:52:45.787 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:52:46 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:52:46 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:52:46 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 09:52:46 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:52:46.383 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 09:52:47 compute-1 ceph-mon[80009]: pgmap v772: 353 pgs: 353 active+clean; 21 MiB data, 170 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 3.9 MiB/s wr, 37 op/s
Nov 24 09:52:47 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:52:47 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:52:47 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:52:47.790 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:52:48 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 24 09:52:48 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 24 09:52:48 compute-1 sudo[233799]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 24 09:52:48 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:52:48 compute-1 sudo[233799]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:52:48 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:52:48 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:52:48.386 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:52:48 compute-1 sudo[233799]: pam_unix(sudo:session): session closed for user root
Nov 24 09:52:49 compute-1 ceph-mon[80009]: pgmap v773: 353 pgs: 353 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail; 39 KiB/s rd, 5.9 MiB/s wr, 56 op/s
Nov 24 09:52:49 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:52:49 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:52:49 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e155 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:52:49 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:52:49 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:52:49 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:52:49.793 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:52:50 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:52:50 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:52:50 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:52:50.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:52:51 compute-1 ceph-mon[80009]: pgmap v774: 353 pgs: 353 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 2.8 MiB/s wr, 26 op/s
Nov 24 09:52:51 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:52:51 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:52:51 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:52:51.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:52:52 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:52:52 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 09:52:52 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:52:52.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 09:52:53 compute-1 ceph-mon[80009]: pgmap v775: 353 pgs: 353 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail; 16 KiB/s rd, 2.3 MiB/s wr, 23 op/s
Nov 24 09:52:53 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:52:53 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:52:53 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:52:53.798 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:52:54 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e155 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:52:54 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:52:54 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:52:54 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:52:54.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:52:55 compute-1 ceph-mon[80009]: pgmap v776: 353 pgs: 353 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail; 14 KiB/s rd, 2.0 MiB/s wr, 19 op/s
Nov 24 09:52:55 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:52:55 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:52:55 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:52:55.801 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:52:56 compute-1 podman[233828]: 2025-11-24 09:52:56.345026499 +0000 UTC m=+0.084675841 container health_status 16798e42088c21440960ac0ba4f86339f9edab5c078277a1dcf4fb01c220329c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
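
The podman line above is a scheduled healthcheck run for the multipathd container: health_status=healthy is the result of executing the configured test command ('/openstack/healthcheck') inside the container. The recorded state can also be polled from the host; a short sketch using the podman CLI (on older podman releases the template field may be .State.Healthcheck.Status instead):

    import subprocess

    def container_health(name):
        """Return podman's recorded health state for a container."""
        out = subprocess.run(
            ['podman', 'inspect', '--format',
             '{{.State.Health.Status}}', name],
            capture_output=True, text=True, check=True)
        return out.stdout.strip()

    # container_health('multipathd') -> 'healthy' for the run logged above
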
Nov 24 09:52:56 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:52:56 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:52:56 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:52:56.397 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:52:57 compute-1 ceph-mon[80009]: pgmap v777: 353 pgs: 353 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail; 12 KiB/s rd, 1.7 MiB/s wr, 17 op/s
Nov 24 09:52:57 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:52:57 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:52:57 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:52:57.804 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:52:58 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:52:58 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:52:58 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:52:58.400 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:52:59 compute-1 ceph-mon[80009]: pgmap v778: 353 pgs: 353 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail; 13 KiB/s rd, 1.7 MiB/s wr, 17 op/s
Nov 24 09:52:59 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e155 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:52:59 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:52:59 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:52:59 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:52:59.807 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:53:00 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:53:00 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:53:00 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:53:00.403 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:53:00 compute-1 podman[233850]: 2025-11-24 09:53:00.409198605 +0000 UTC m=+0.148538493 container health_status c4a7fece2a8abbd773f184380ebf0443df5298e1c3e7a580495db5c46749acd2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 09:53:00 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 09:53:00 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:53:01 compute-1 ceph-mon[80009]: pgmap v779: 353 pgs: 353 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 170 B/s wr, 1 op/s
Nov 24 09:53:01 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:53:01 compute-1 ceph-mon[80009]: from='client.? 192.168.122.10:0/2050834204' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 24 09:53:01 compute-1 ceph-mon[80009]: from='client.? 192.168.122.10:0/2050834204' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 24 09:53:01 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:53:01 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 09:53:01 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:53:01.811 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 09:53:02 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:53:02 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:53:02 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:53:02.406 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:53:03 compute-1 ceph-mon[80009]: pgmap v780: 353 pgs: 353 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 170 B/s wr, 2 op/s
Nov 24 09:53:03 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:53:03 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:53:03 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:53:03.816 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:53:04 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e155 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:53:04 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:53:04 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:53:04 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:53:04.409 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:53:05 compute-1 ceph-mon[80009]: pgmap v781: 353 pgs: 353 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 85 B/s wr, 1 op/s
Nov 24 09:53:05 compute-1 sudo[233878]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 09:53:05 compute-1 sudo[233878]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:53:05 compute-1 sudo[233878]: pam_unix(sudo:session): session closed for user root
Nov 24 09:53:05 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:53:05.779 142336 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=3, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '86:13:51', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '4e:f0:a8:6f:5e:1b'}, ipsec=False) old=SB_Global(nb_cfg=2) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 24 09:53:05 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:53:05.780 142336 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 24 09:53:05 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:53:05 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:53:05 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:53:05.819 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:53:06 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:53:06 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:53:06 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:53:06.412 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:53:07 compute-1 ceph-mon[80009]: pgmap v782: 353 pgs: 353 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 85 B/s wr, 1 op/s
Nov 24 09:53:07 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:53:07 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:53:07 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:53:07.822 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:53:08 compute-1 podman[233905]: 2025-11-24 09:53:08.342434574 +0000 UTC m=+0.083543584 container health_status 6794d9d2f16f865643977633ea2bfec0506af6c4f15262c278b71a4c0754ea0b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Nov 24 09:53:08 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:53:08 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:53:08 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:53:08.415 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:53:09 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e155 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:53:09 compute-1 ceph-mon[80009]: pgmap v783: 353 pgs: 353 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 85 B/s wr, 1 op/s
Nov 24 09:53:09 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:53:09 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:53:09 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:53:09.824 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:53:10 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:53:10 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:53:10 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:53:10.417 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:53:11 compute-1 ceph-mon[80009]: pgmap v784: 353 pgs: 353 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 09:53:11 compute-1 nova_compute[230010]: 2025-11-24 09:53:11.765 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 09:53:11 compute-1 nova_compute[230010]: 2025-11-24 09:53:11.788 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 09:53:11 compute-1 nova_compute[230010]: 2025-11-24 09:53:11.788 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 09:53:11 compute-1 nova_compute[230010]: 2025-11-24 09:53:11.789 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 09:53:11 compute-1 nova_compute[230010]: 2025-11-24 09:53:11.789 230014 DEBUG nova.compute.resource_tracker [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 24 09:53:11 compute-1 nova_compute[230010]: 2025-11-24 09:53:11.790 230014 DEBUG oslo_concurrency.processutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 09:53:11 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:53:11 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:53:11 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:53:11.827 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:53:12 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 24 09:53:12 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2058198224' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 09:53:12 compute-1 nova_compute[230010]: 2025-11-24 09:53:12.227 230014 DEBUG oslo_concurrency.processutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.437s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 09:53:12 compute-1 nova_compute[230010]: 2025-11-24 09:53:12.389 230014 WARNING nova.virt.libvirt.driver [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 24 09:53:12 compute-1 nova_compute[230010]: 2025-11-24 09:53:12.390 230014 DEBUG nova.compute.resource_tracker [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=5299MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 24 09:53:12 compute-1 nova_compute[230010]: 2025-11-24 09:53:12.390 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 09:53:12 compute-1 nova_compute[230010]: 2025-11-24 09:53:12.391 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 09:53:12 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:53:12 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:53:12 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:53:12.420 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:53:12 compute-1 nova_compute[230010]: 2025-11-24 09:53:12.445 230014 DEBUG nova.compute.resource_tracker [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 24 09:53:12 compute-1 nova_compute[230010]: 2025-11-24 09:53:12.446 230014 DEBUG nova.compute.resource_tracker [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 24 09:53:12 compute-1 ceph-mon[80009]: pgmap v785: 353 pgs: 353 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 09:53:12 compute-1 ceph-mon[80009]: from='client.? 192.168.122.101:0/2058198224' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 09:53:12 compute-1 nova_compute[230010]: 2025-11-24 09:53:12.468 230014 DEBUG oslo_concurrency.processutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 09:53:12 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:53:12.782 142336 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=803b139a-7fca-4549-8597-645cf677225d, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '3'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
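
This transaction closes the loop opened at 09:53:05, where the agent matched SB_Global nb_cfg moving to 3 and delayed its chassis update for 7 seconds: it now writes neutron:ovn-metadata-sb-cfg=3 into the Chassis_Private external_ids so the control plane can see the agent has caught up. In ovsdbapp terms that is a db_set on the record; roughly (API shape from memory, record UUID taken from the log line above):

    # Assuming 'idl' is a connected ovsdbapp OVN southbound API instance.
    idl.db_set(
        'Chassis_Private', '803b139a-7fca-4549-8597-645cf677225d',
        ('external_ids', {'neutron:ovn-metadata-sb-cfg': '3'}),
    ).execute(check_error=True)
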
Nov 24 09:53:12 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 24 09:53:12 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4266170854' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 09:53:12 compute-1 nova_compute[230010]: 2025-11-24 09:53:12.906 230014 DEBUG oslo_concurrency.processutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.439s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 09:53:12 compute-1 nova_compute[230010]: 2025-11-24 09:53:12.912 230014 DEBUG nova.compute.provider_tree [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Inventory has not changed in ProviderTree for provider: 1b7b0f22-dba8-42a8-9de3-763c9152946e update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 24 09:53:12 compute-1 nova_compute[230010]: 2025-11-24 09:53:12.928 230014 DEBUG nova.scheduler.client.report [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Inventory has not changed for provider 1b7b0f22-dba8-42a8-9de3-763c9152946e based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 24 09:53:12 compute-1 nova_compute[230010]: 2025-11-24 09:53:12.930 230014 DEBUG nova.compute.resource_tracker [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 24 09:53:12 compute-1 nova_compute[230010]: 2025-11-24 09:53:12.930 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.540s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
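
The audit pass above is self-contained: nova-compute takes the compute_resources lock, shells out to "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" for RBD capacity, builds the hypervisor resource view, and confirms the placement inventory is unchanged. Two pieces of that are easy to reproduce; a sketch (the 'stats' keys below are the usual ceph df JSON fields, asserted from memory rather than from this log):

    import json
    import subprocess

    def ceph_capacity_gib(conf='/etc/ceph/ceph.conf', user='openstack'):
        """Cluster (avail, total) in GiB via the same command nova ran above."""
        out = subprocess.run(
            ['ceph', 'df', '--format=json', '--id', user, '--conf', conf],
            capture_output=True, text=True, check=True)
        stats = json.loads(out.stdout)['stats']
        gib = 1024 ** 3
        return stats['total_avail_bytes'] / gib, stats['total_bytes'] / gib

    def schedulable(inventory):
        """Placement capacity per resource class: (total - reserved) * ratio."""
        return {rc: (v['total'] - v['reserved']) * v['allocation_ratio']
                for rc, v in inventory.items()}

    # Applied to the inventory logged above this gives
    # {'VCPU': 32.0, 'MEMORY_MB': 7168.0, 'DISK_GB': 53.1}
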
Nov 24 09:53:13 compute-1 ceph-mon[80009]: from='client.? 192.168.122.101:0/4266170854' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 09:53:13 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:53:13 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:53:13 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:53:13.830 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:53:13 compute-1 nova_compute[230010]: 2025-11-24 09:53:13.931 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 09:53:13 compute-1 nova_compute[230010]: 2025-11-24 09:53:13.931 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 09:53:13 compute-1 nova_compute[230010]: 2025-11-24 09:53:13.932 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 09:53:13 compute-1 nova_compute[230010]: 2025-11-24 09:53:13.932 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 09:53:13 compute-1 nova_compute[230010]: 2025-11-24 09:53:13.932 230014 DEBUG nova.compute.manager [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 24 09:53:14 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e155 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:53:14 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:53:14 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:53:14 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:53:14.423 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:53:14 compute-1 ceph-mon[80009]: pgmap v786: 353 pgs: 353 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 09:53:14 compute-1 ceph-mon[80009]: from='client.? 192.168.122.100:0/3122627864' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 09:53:14 compute-1 nova_compute[230010]: 2025-11-24 09:53:14.766 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 09:53:14 compute-1 nova_compute[230010]: 2025-11-24 09:53:14.766 230014 DEBUG nova.compute.manager [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 24 09:53:14 compute-1 nova_compute[230010]: 2025-11-24 09:53:14.767 230014 DEBUG nova.compute.manager [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 24 09:53:14 compute-1 nova_compute[230010]: 2025-11-24 09:53:14.784 230014 DEBUG nova.compute.manager [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 24 09:53:14 compute-1 nova_compute[230010]: 2025-11-24 09:53:14.785 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
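
The "Running periodic task ComputeManager._*" lines above and at 09:53:13 are oslo_service's periodic task loop walking every registered method on the manager. The registration pattern behind those lines, reduced to a sketch (class and task names here are illustrative, not nova's):

    from oslo_config import cfg
    from oslo_service import periodic_task

    CONF = cfg.CONF

    class Manager(periodic_task.PeriodicTasks):
        def __init__(self):
            super().__init__(CONF)

        @periodic_task.periodic_task(spacing=60)
        def _poll_health(self, context):
            # Each invocation is logged by run_periodic_tasks as
            # "Running periodic task Manager._poll_health".
            pass

    # One loop iteration, as driven by the service framework:
    # Manager().run_periodic_tasks(context=None)
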
Nov 24 09:53:15 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 09:53:15 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:53:15 compute-1 ceph-mon[80009]: from='client.? 192.168.122.100:0/1178473982' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 09:53:15 compute-1 ceph-mon[80009]: from='client.? 192.168.122.102:0/1187586974' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 09:53:15 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:53:15 compute-1 nova_compute[230010]: 2025-11-24 09:53:15.778 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 09:53:15 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:53:15 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:53:15 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:53:15.834 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:53:16 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:53:16 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:53:16 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:53:16.427 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:53:16 compute-1 ceph-mon[80009]: pgmap v787: 353 pgs: 353 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 09:53:16 compute-1 ceph-mon[80009]: from='client.? 192.168.122.102:0/1183407982' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 09:53:16 compute-1 nova_compute[230010]: 2025-11-24 09:53:16.765 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 09:53:17 compute-1 nova_compute[230010]: 2025-11-24 09:53:17.760 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 09:53:17 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:53:17 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:53:17 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:53:17.836 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:53:18 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:53:18 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:53:18 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:53:18.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:53:18 compute-1 ceph-mon[80009]: pgmap v788: 353 pgs: 353 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 09:53:19 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e155 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:53:19 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:53:19 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:53:19 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:53:19.840 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:53:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:53:20.053 142336 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 09:53:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:53:20.054 142336 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 09:53:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:53:20.054 142336 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
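
The three ovn_metadata_agent lines above are oslo_concurrency's standard lock trace: acquire requested, acquired (with wait time), released (with hold time). The decorator that produces it, in sketch form (function body hypothetical):

    from oslo_concurrency import lockutils

    @lockutils.synchronized('_check_child_processes')
    def check_child_processes():
        # Callers using the same lock name serialize here; lockutils
        # emits the Acquiring/acquired/released DEBUG lines seen above
        # around every call.
        pass
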
Nov 24 09:53:20 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:53:20 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:53:20 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:53:20.433 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:53:20 compute-1 ceph-mon[80009]: pgmap v789: 353 pgs: 353 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 09:53:21 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:53:21 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:53:21 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:53:21.843 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:53:22 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:53:22 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:53:22 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:53:22.436 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:53:22 compute-1 ceph-mon[80009]: pgmap v790: 353 pgs: 353 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 09:53:23 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:53:23 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:53:23 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:53:23.845 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:53:24 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e155 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:53:24 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:53:24 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:53:24 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:53:24.439 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:53:25 compute-1 ceph-mon[80009]: pgmap v791: 353 pgs: 353 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 09:53:25 compute-1 sudo[233976]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 09:53:25 compute-1 sudo[233976]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:53:25 compute-1 sudo[233976]: pam_unix(sudo:session): session closed for user root
Nov 24 09:53:25 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:53:25 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:53:25 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:53:25.849 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:53:26 compute-1 ceph-mon[80009]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #49. Immutable memtables: 0.
Nov 24 09:53:26 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:53:26.041759) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 24 09:53:26 compute-1 ceph-mon[80009]: rocksdb: [db/flush_job.cc:856] [default] [JOB 27] Flushing memtable with next log file: 49
Nov 24 09:53:26 compute-1 ceph-mon[80009]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763978006041853, "job": 27, "event": "flush_started", "num_memtables": 1, "num_entries": 963, "num_deletes": 251, "total_data_size": 2118719, "memory_usage": 2158160, "flush_reason": "Manual Compaction"}
Nov 24 09:53:26 compute-1 ceph-mon[80009]: rocksdb: [db/flush_job.cc:885] [default] [JOB 27] Level-0 flush table #50: started
Nov 24 09:53:26 compute-1 ceph-mon[80009]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763978006052102, "cf_name": "default", "job": 27, "event": "table_file_creation", "file_number": 50, "file_size": 1394668, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 25106, "largest_seqno": 26064, "table_properties": {"data_size": 1390172, "index_size": 2148, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1285, "raw_key_size": 9990, "raw_average_key_size": 19, "raw_value_size": 1381042, "raw_average_value_size": 2745, "num_data_blocks": 95, "num_entries": 503, "num_filter_entries": 503, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763977939, "oldest_key_time": 1763977939, "file_creation_time": 1763978006, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "299c38d0-06ca-4074-b462-97cee3c14bc3", "db_session_id": "IKBI0BILOO7CZC90TSBP", "orig_file_number": 50, "seqno_to_time_mapping": "N/A"}}
Nov 24 09:53:26 compute-1 ceph-mon[80009]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 27] Flush lasted 10445 microseconds, and 5686 cpu microseconds.
Nov 24 09:53:26 compute-1 ceph-mon[80009]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 24 09:53:26 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:53:26.052218) [db/flush_job.cc:967] [default] [JOB 27] Level-0 flush table #50: 1394668 bytes OK
Nov 24 09:53:26 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:53:26.052262) [db/memtable_list.cc:519] [default] Level-0 commit table #50 started
Nov 24 09:53:26 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:53:26.053593) [db/memtable_list.cc:722] [default] Level-0 commit table #50: memtable #1 done
Nov 24 09:53:26 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:53:26.053609) EVENT_LOG_v1 {"time_micros": 1763978006053605, "job": 27, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 24 09:53:26 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:53:26.053627) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 24 09:53:26 compute-1 ceph-mon[80009]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 27] Try to delete WAL files size 2113863, prev total WAL file size 2113863, number of live WAL files 2.
Nov 24 09:53:26 compute-1 ceph-mon[80009]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000046.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 09:53:26 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:53:26.054669) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031373537' seq:72057594037927935, type:22 .. '7061786F730032303039' seq:0, type:0; will stop at (end)
Nov 24 09:53:26 compute-1 ceph-mon[80009]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 28] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 24 09:53:26 compute-1 ceph-mon[80009]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 27 Base level 0, inputs: [50(1361KB)], [48(12MB)]
Nov 24 09:53:26 compute-1 ceph-mon[80009]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763978006054761, "job": 28, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [50], "files_L6": [48], "score": -1, "input_data_size": 14753155, "oldest_snapshot_seqno": -1}
Nov 24 09:53:26 compute-1 ceph-mon[80009]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 28] Generated table #51: 5401 keys, 12547278 bytes, temperature: kUnknown
Nov 24 09:53:26 compute-1 ceph-mon[80009]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763978006127322, "cf_name": "default", "job": 28, "event": "table_file_creation", "file_number": 51, "file_size": 12547278, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12511769, "index_size": 20935, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13509, "raw_key_size": 138783, "raw_average_key_size": 25, "raw_value_size": 12414574, "raw_average_value_size": 2298, "num_data_blocks": 848, "num_entries": 5401, "num_filter_entries": 5401, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763976422, "oldest_key_time": 0, "file_creation_time": 1763978006, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "299c38d0-06ca-4074-b462-97cee3c14bc3", "db_session_id": "IKBI0BILOO7CZC90TSBP", "orig_file_number": 51, "seqno_to_time_mapping": "N/A"}}
Nov 24 09:53:26 compute-1 ceph-mon[80009]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 24 09:53:26 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:53:26.127836) [db/compaction/compaction_job.cc:1663] [default] [JOB 28] Compacted 1@0 + 1@6 files to L6 => 12547278 bytes
Nov 24 09:53:26 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:53:26.138084) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 202.8 rd, 172.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.3, 12.7 +0.0 blob) out(12.0 +0.0 blob), read-write-amplify(19.6) write-amplify(9.0) OK, records in: 5921, records dropped: 520 output_compression: NoCompression
Nov 24 09:53:26 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:53:26.138137) EVENT_LOG_v1 {"time_micros": 1763978006138117, "job": 28, "event": "compaction_finished", "compaction_time_micros": 72753, "compaction_time_cpu_micros": 40477, "output_level": 6, "num_output_files": 1, "total_output_size": 12547278, "num_input_records": 5921, "num_output_records": 5401, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 24 09:53:26 compute-1 ceph-mon[80009]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000050.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 09:53:26 compute-1 ceph-mon[80009]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763978006138837, "job": 28, "event": "table_file_deletion", "file_number": 50}
Nov 24 09:53:26 compute-1 ceph-mon[80009]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000048.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 09:53:26 compute-1 ceph-mon[80009]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763978006142817, "job": 28, "event": "table_file_deletion", "file_number": 48}
Nov 24 09:53:26 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:53:26.054494) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 09:53:26 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:53:26.142872) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 09:53:26 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:53:26.142880) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 09:53:26 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:53:26.142881) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 09:53:26 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:53:26.142883) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 09:53:26 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:53:26.142886) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 09:53:26 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:53:26 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:53:26 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:53:26.442 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:53:27 compute-1 ceph-mon[80009]: pgmap v792: 353 pgs: 353 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 09:53:27 compute-1 podman[234002]: 2025-11-24 09:53:27.317501114 +0000 UTC m=+0.061181936 container health_status 16798e42088c21440960ac0ba4f86339f9edab5c078277a1dcf4fb01c220329c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, container_name=multipathd, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd)
Nov 24 09:53:27 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:53:27 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:53:27 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:53:27.852 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:53:28 compute-1 ceph-mon[80009]: from='client.? 192.168.122.100:0/2047181630' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 09:53:28 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:53:28 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:53:28 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:53:28.444 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:53:29 compute-1 ceph-mon[80009]: pgmap v793: 353 pgs: 353 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 09:53:29 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e155 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:53:29 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:53:29 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:53:29 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:53:29.855 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:53:30 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e156 e156: 3 total, 3 up, 3 in
Nov 24 09:53:30 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 09:53:30 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:53:30 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:53:30 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:53:30 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:53:30.447 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:53:31 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e157 e157: 3 total, 3 up, 3 in
Nov 24 09:53:31 compute-1 ceph-mon[80009]: pgmap v794: 353 pgs: 353 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 09:53:31 compute-1 ceph-mon[80009]: osdmap e156: 3 total, 3 up, 3 in
Nov 24 09:53:31 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:53:31 compute-1 podman[234024]: 2025-11-24 09:53:31.367890601 +0000 UTC m=+0.098670763 container health_status c4a7fece2a8abbd773f184380ebf0443df5298e1c3e7a580495db5c46749acd2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2)
Nov 24 09:53:31 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:53:31 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:53:31 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:53:31.858 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:53:32 compute-1 ceph-mon[80009]: osdmap e157: 3 total, 3 up, 3 in
Nov 24 09:53:32 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:53:32 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:53:32 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:53:32.449 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:53:33 compute-1 ceph-mon[80009]: pgmap v797: 353 pgs: 353 active+clean; 88 MiB data, 203 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 2.7 MiB/s wr, 52 op/s
Nov 24 09:53:33 compute-1 ceph-mon[80009]: from='client.? 192.168.122.100:0/3757053087' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 24 09:53:33 compute-1 ceph-mon[80009]: from='client.? 192.168.122.100:0/1655492014' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 24 09:53:33 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 e158: 3 total, 3 up, 3 in
Nov 24 09:53:33 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:53:33 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:53:33 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:53:33.862 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:53:34 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:53:34 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:53:34 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:53:34 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:53:34.451 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:53:34 compute-1 ceph-mon[80009]: osdmap e158: 3 total, 3 up, 3 in
Nov 24 09:53:34 compute-1 ceph-mon[80009]: pgmap v799: 353 pgs: 353 active+clean; 88 MiB data, 203 MiB used, 60 GiB / 60 GiB avail; 3.4 MiB/s rd, 3.5 MiB/s wr, 68 op/s
Nov 24 09:53:35 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:53:35 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:53:35 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:53:35.864 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:53:36 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:53:36 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:53:36 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:53:36.453 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:53:36 compute-1 ceph-mon[80009]: pgmap v800: 353 pgs: 353 active+clean; 88 MiB data, 203 MiB used, 60 GiB / 60 GiB avail; 3.4 MiB/s rd, 3.5 MiB/s wr, 68 op/s
Nov 24 09:53:37 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:53:37 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:53:37 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:53:37.866 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:53:38 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:53:38 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:53:38 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:53:38.456 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:53:38 compute-1 ceph-mon[80009]: pgmap v801: 353 pgs: 353 active+clean; 88 MiB data, 220 MiB used, 60 GiB / 60 GiB avail; 2.7 MiB/s rd, 2.7 MiB/s wr, 53 op/s
Nov 24 09:53:39 compute-1 podman[234056]: 2025-11-24 09:53:39.306281837 +0000 UTC m=+0.051601763 container health_status 6794d9d2f16f865643977633ea2bfec0506af6c4f15262c278b71a4c0754ea0b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Nov 24 09:53:39 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:53:39 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:53:39 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:53:39 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:53:39.869 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:53:40 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:53:40 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:53:40 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:53:40.459 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:53:40 compute-1 ceph-mon[80009]: pgmap v802: 353 pgs: 353 active+clean; 88 MiB data, 220 MiB used, 60 GiB / 60 GiB avail; 2.4 MiB/s rd, 2.4 MiB/s wr, 47 op/s
Nov 24 09:53:41 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:53:41 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:53:41 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:53:41.872 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:53:42 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:53:42 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:53:42 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:53:42.462 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:53:43 compute-1 ceph-mon[80009]: pgmap v803: 353 pgs: 353 active+clean; 88 MiB data, 220 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 15 KiB/s wr, 89 op/s
Nov 24 09:53:43 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:53:43 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:53:43 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:53:43.874 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:53:44 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:53:44 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:53:44 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:53:44 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:53:44.465 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:53:45 compute-1 ceph-mon[80009]: pgmap v804: 353 pgs: 353 active+clean; 88 MiB data, 220 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 15 KiB/s wr, 86 op/s
Nov 24 09:53:45 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 09:53:45 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:53:45 compute-1 sudo[234078]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 09:53:45 compute-1 sudo[234078]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:53:45 compute-1 sudo[234078]: pam_unix(sudo:session): session closed for user root
Nov 24 09:53:45 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:53:45 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:53:45 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:53:45.877 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:53:46 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:53:46 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:53:46 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:53:46 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:53:46.468 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:53:47 compute-1 ceph-mon[80009]: pgmap v805: 353 pgs: 353 active+clean; 88 MiB data, 220 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Nov 24 09:53:47 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:53:47 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:53:47 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:53:47.880 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:53:48 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:53:48 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:53:48 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:53:48.471 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:53:48 compute-1 sudo[234105]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 09:53:48 compute-1 sudo[234105]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:53:48 compute-1 sudo[234105]: pam_unix(sudo:session): session closed for user root
Nov 24 09:53:48 compute-1 sudo[234130]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Nov 24 09:53:48 compute-1 sudo[234130]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:53:49 compute-1 ceph-mon[80009]: pgmap v806: 353 pgs: 353 active+clean; 88 MiB data, 220 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 75 op/s
Nov 24 09:53:49 compute-1 sudo[234130]: pam_unix(sudo:session): session closed for user root
Nov 24 09:53:49 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:53:49 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:53:49 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 09:53:49 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:53:49.882 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 09:53:50 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:53:50 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:53:50 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:53:50.473 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:53:50 compute-1 ceph-osd[77497]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Nov 24 09:53:51 compute-1 ceph-mon[80009]: pgmap v807: 353 pgs: 353 active+clean; 88 MiB data, 220 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Nov 24 09:53:51 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 24 09:53:51 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 24 09:53:51 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:53:51 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:53:51 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:53:51.884 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:53:52 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 24 09:53:52 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 09:53:52 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Nov 24 09:53:52 compute-1 ceph-mon[80009]: log_channel(audit) log [INF] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 09:53:52 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Nov 24 09:53:52 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Nov 24 09:53:52 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Nov 24 09:53:52 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 09:53:52 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Nov 24 09:53:52 compute-1 ceph-mon[80009]: log_channel(audit) log [INF] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 09:53:52 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 24 09:53:52 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 09:53:52 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:53:52 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:53:52 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:53:52.477 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:53:52 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:53:52 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:53:52 compute-1 ceph-mon[80009]: pgmap v808: 353 pgs: 353 active+clean; 113 MiB data, 279 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 126 op/s
Nov 24 09:53:52 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 09:53:52 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 09:53:52 compute-1 ceph-mon[80009]: pgmap v809: 353 pgs: 353 active+clean; 113 MiB data, 279 MiB used, 60 GiB / 60 GiB avail; 372 KiB/s rd, 2.3 MiB/s wr, 59 op/s
Nov 24 09:53:52 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:53:52 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:53:52 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 09:53:52 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 09:53:52 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 09:53:53 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:53:53 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:53:53 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:53:53.887 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:53:54 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:53:54 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:53:54 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:53:54 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:53:54.480 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:53:54 compute-1 ceph-mon[80009]: pgmap v810: 353 pgs: 353 active+clean; 113 MiB data, 279 MiB used, 60 GiB / 60 GiB avail; 372 KiB/s rd, 2.3 MiB/s wr, 59 op/s
Nov 24 09:53:55 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:53:55 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:53:55 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:53:55.890 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:53:56 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:53:56 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:53:56 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:53:56.483 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:53:56 compute-1 ceph-mon[80009]: pgmap v811: 353 pgs: 353 active+clean; 113 MiB data, 279 MiB used, 60 GiB / 60 GiB avail; 372 KiB/s rd, 2.3 MiB/s wr, 59 op/s
Nov 24 09:53:56 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 24 09:53:56 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 24 09:53:57 compute-1 sudo[234189]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 24 09:53:57 compute-1 sudo[234189]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:53:57 compute-1 sudo[234189]: pam_unix(sudo:session): session closed for user root
Nov 24 09:53:57 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:53:57 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:53:57 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:53:57.894 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:53:57 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:53:57 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:53:58 compute-1 podman[234215]: 2025-11-24 09:53:58.326574732 +0000 UTC m=+0.062639352 container health_status 16798e42088c21440960ac0ba4f86339f9edab5c078277a1dcf4fb01c220329c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 24 09:53:58 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:53:58 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:53:58 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:53:58.487 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:53:58 compute-1 ceph-mon[80009]: pgmap v812: 353 pgs: 353 active+clean; 121 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 442 KiB/s rd, 2.4 MiB/s wr, 75 op/s
Nov 24 09:53:59 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:53:59 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:53:59 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:53:59 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:53:59.896 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:54:00 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 09:54:00 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:54:00 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:54:00 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:54:00 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:54:00.490 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:54:00 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:54:00 compute-1 ceph-mon[80009]: pgmap v813: 353 pgs: 353 active+clean; 121 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 442 KiB/s rd, 2.4 MiB/s wr, 75 op/s
Nov 24 09:54:01 compute-1 ceph-mon[80009]: from='client.? 192.168.122.10:0/1086416871' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 24 09:54:01 compute-1 ceph-mon[80009]: from='client.? 192.168.122.10:0/1086416871' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 24 09:54:01 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:54:01 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:54:01 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:54:01.898 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:54:02 compute-1 podman[234236]: 2025-11-24 09:54:02.341137235 +0000 UTC m=+0.086124826 container health_status c4a7fece2a8abbd773f184380ebf0443df5298e1c3e7a580495db5c46749acd2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 24 09:54:02 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:54:02 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:54:02 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:54:02.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:54:02 compute-1 ceph-mon[80009]: pgmap v814: 353 pgs: 353 active+clean; 121 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 78 KiB/s rd, 117 KiB/s wr, 18 op/s
Nov 24 09:54:03 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:54:03 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:54:03 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:54:03.901 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:54:04 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:54:04 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:54:04 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:54:04 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:54:04.495 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:54:04 compute-1 ceph-mon[80009]: pgmap v815: 353 pgs: 353 active+clean; 121 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 68 KiB/s rd, 102 KiB/s wr, 16 op/s
Nov 24 09:54:05 compute-1 sudo[234263]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 09:54:05 compute-1 sudo[234263]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:54:05 compute-1 sudo[234263]: pam_unix(sudo:session): session closed for user root
Nov 24 09:54:05 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:54:05 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:54:05 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:54:05.904 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:54:06 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:54:06.405 142336 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=4, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '86:13:51', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '4e:f0:a8:6f:5e:1b'}, ipsec=False) old=SB_Global(nb_cfg=3) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 24 09:54:06 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:54:06.406 142336 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 24 09:54:06 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:54:06 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:54:06 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:54:06.497 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:54:06 compute-1 ceph-mon[80009]: pgmap v816: 353 pgs: 353 active+clean; 121 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 68 KiB/s rd, 102 KiB/s wr, 16 op/s
Nov 24 09:54:07 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:54:07 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:54:07 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:54:07.907 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:54:08 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:54:08 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:54:08 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:54:08.499 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:54:08 compute-1 ceph-mon[80009]: pgmap v817: 353 pgs: 353 active+clean; 121 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 69 KiB/s rd, 106 KiB/s wr, 17 op/s
Nov 24 09:54:09 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:54:09 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:54:09 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:54:09 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:54:09.909 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:54:10 compute-1 podman[234291]: 2025-11-24 09:54:10.324367135 +0000 UTC m=+0.057536918 container health_status 6794d9d2f16f865643977633ea2bfec0506af6c4f15262c278b71a4c0754ea0b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_metadata_agent)
Nov 24 09:54:10 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:54:10 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:54:10 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:54:10.501 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:54:10 compute-1 ceph-mon[80009]: pgmap v818: 353 pgs: 353 active+clean; 121 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 7.0 KiB/s rd, 15 KiB/s wr, 2 op/s
Nov 24 09:54:11 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:54:11.407 142336 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=803b139a-7fca-4549-8597-645cf677225d, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '4'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 24 09:54:11 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:54:11 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:54:11 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:54:11.912 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:54:12 compute-1 ceph-mon[80009]: pgmap v819: 353 pgs: 353 active+clean; 121 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 7.5 KiB/s rd, 15 KiB/s wr, 2 op/s
Nov 24 09:54:12 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:54:12 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:54:12 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:54:12.503 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:54:12 compute-1 nova_compute[230010]: 2025-11-24 09:54:12.765 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 09:54:12 compute-1 nova_compute[230010]: 2025-11-24 09:54:12.765 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 09:54:12 compute-1 nova_compute[230010]: 2025-11-24 09:54:12.765 230014 DEBUG nova.compute.manager [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 24 09:54:12 compute-1 nova_compute[230010]: 2025-11-24 09:54:12.801 230014 DEBUG oslo_concurrency.lockutils [None req-d449e86f-227d-4a7b-88d9-0f054f56c99b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Acquiring lock "4313a8bf-5a2a-4de5-84e7-ead18a049c18" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 09:54:12 compute-1 nova_compute[230010]: 2025-11-24 09:54:12.802 230014 DEBUG oslo_concurrency.lockutils [None req-d449e86f-227d-4a7b-88d9-0f054f56c99b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lock "4313a8bf-5a2a-4de5-84e7-ead18a049c18" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 09:54:12 compute-1 nova_compute[230010]: 2025-11-24 09:54:12.813 230014 DEBUG nova.compute.manager [None req-d449e86f-227d-4a7b-88d9-0f054f56c99b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 4313a8bf-5a2a-4de5-84e7-ead18a049c18] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 24 09:54:12 compute-1 nova_compute[230010]: 2025-11-24 09:54:12.891 230014 DEBUG oslo_concurrency.lockutils [None req-d449e86f-227d-4a7b-88d9-0f054f56c99b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 09:54:12 compute-1 nova_compute[230010]: 2025-11-24 09:54:12.892 230014 DEBUG oslo_concurrency.lockutils [None req-d449e86f-227d-4a7b-88d9-0f054f56c99b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 09:54:12 compute-1 nova_compute[230010]: 2025-11-24 09:54:12.899 230014 DEBUG nova.virt.hardware [None req-d449e86f-227d-4a7b-88d9-0f054f56c99b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 24 09:54:12 compute-1 nova_compute[230010]: 2025-11-24 09:54:12.899 230014 INFO nova.compute.claims [None req-d449e86f-227d-4a7b-88d9-0f054f56c99b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 4313a8bf-5a2a-4de5-84e7-ead18a049c18] Claim successful on node compute-1.ctlplane.example.com
Nov 24 09:54:12 compute-1 nova_compute[230010]: 2025-11-24 09:54:12.993 230014 DEBUG oslo_concurrency.processutils [None req-d449e86f-227d-4a7b-88d9-0f054f56c99b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 09:54:13 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 24 09:54:13 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1658824246' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 09:54:13 compute-1 nova_compute[230010]: 2025-11-24 09:54:13.413 230014 DEBUG oslo_concurrency.processutils [None req-d449e86f-227d-4a7b-88d9-0f054f56c99b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.421s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
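
nova shells out to `ceph df --format=json` (0.421s here) to size the RBD-backed disk inventory. A sketch that runs the same command and reads the pool stats out of the JSON; it needs a reachable cluster plus the "openstack" keyring, and the field names follow the usual `ceph df` JSON schema (hedged: verify against your Ceph release):

import json
import subprocess

# Mirrors the command from the log line above.
out = subprocess.check_output(
    ["ceph", "df", "--format=json", "--id", "openstack",
     "--conf", "/etc/ceph/ceph.conf"]
)
df = json.loads(out)
print("cluster total bytes:", df["stats"]["total_bytes"])
for pool in df["pools"]:
    print(pool["name"], "max_avail:", pool["stats"]["max_avail"])
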
Nov 24 09:54:13 compute-1 nova_compute[230010]: 2025-11-24 09:54:13.421 230014 DEBUG nova.compute.provider_tree [None req-d449e86f-227d-4a7b-88d9-0f054f56c99b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Inventory has not changed in ProviderTree for provider: 1b7b0f22-dba8-42a8-9de3-763c9152946e update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 24 09:54:13 compute-1 nova_compute[230010]: 2025-11-24 09:54:13.440 230014 DEBUG nova.scheduler.client.report [None req-d449e86f-227d-4a7b-88d9-0f054f56c99b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Inventory has not changed for provider 1b7b0f22-dba8-42a8-9de3-763c9152946e based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
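
Placement turns the inventory above into schedulable capacity as (total - reserved) * allocation_ratio, so this host advertises 32 VCPU, 7168 MB of RAM, and 53.1 GB of disk. A quick check with the logged numbers:

# Effective schedulable capacity per the placement formula:
#   capacity = (total - reserved) * allocation_ratio
inventory = {
    "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
    "MEMORY_MB": {"total": 7680, "reserved": 512, "allocation_ratio": 1.0},
    "DISK_GB":   {"total": 59,   "reserved": 0,   "allocation_ratio": 0.9},
}
for rc, inv in inventory.items():
    cap = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
    print(rc, "capacity:", cap)
# VCPU capacity: 32.0 / MEMORY_MB capacity: 7168.0 / DISK_GB capacity: 53.1
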
Nov 24 09:54:13 compute-1 nova_compute[230010]: 2025-11-24 09:54:13.504 230014 DEBUG oslo_concurrency.lockutils [None req-d449e86f-227d-4a7b-88d9-0f054f56c99b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.612s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 09:54:13 compute-1 nova_compute[230010]: 2025-11-24 09:54:13.504 230014 DEBUG nova.compute.manager [None req-d449e86f-227d-4a7b-88d9-0f054f56c99b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 4313a8bf-5a2a-4de5-84e7-ead18a049c18] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 24 09:54:13 compute-1 ceph-mon[80009]: from='client.? 192.168.122.101:0/1658824246' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 09:54:13 compute-1 nova_compute[230010]: 2025-11-24 09:54:13.559 230014 DEBUG nova.compute.manager [None req-d449e86f-227d-4a7b-88d9-0f054f56c99b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 4313a8bf-5a2a-4de5-84e7-ead18a049c18] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 24 09:54:13 compute-1 nova_compute[230010]: 2025-11-24 09:54:13.559 230014 DEBUG nova.network.neutron [None req-d449e86f-227d-4a7b-88d9-0f054f56c99b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 4313a8bf-5a2a-4de5-84e7-ead18a049c18] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 24 09:54:13 compute-1 nova_compute[230010]: 2025-11-24 09:54:13.584 230014 INFO nova.virt.libvirt.driver [None req-d449e86f-227d-4a7b-88d9-0f054f56c99b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 4313a8bf-5a2a-4de5-84e7-ead18a049c18] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 24 09:54:13 compute-1 nova_compute[230010]: 2025-11-24 09:54:13.599 230014 DEBUG nova.compute.manager [None req-d449e86f-227d-4a7b-88d9-0f054f56c99b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 4313a8bf-5a2a-4de5-84e7-ead18a049c18] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 24 09:54:13 compute-1 nova_compute[230010]: 2025-11-24 09:54:13.689 230014 DEBUG nova.compute.manager [None req-d449e86f-227d-4a7b-88d9-0f054f56c99b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 4313a8bf-5a2a-4de5-84e7-ead18a049c18] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 24 09:54:13 compute-1 nova_compute[230010]: 2025-11-24 09:54:13.690 230014 DEBUG nova.virt.libvirt.driver [None req-d449e86f-227d-4a7b-88d9-0f054f56c99b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 4313a8bf-5a2a-4de5-84e7-ead18a049c18] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 24 09:54:13 compute-1 nova_compute[230010]: 2025-11-24 09:54:13.691 230014 INFO nova.virt.libvirt.driver [None req-d449e86f-227d-4a7b-88d9-0f054f56c99b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 4313a8bf-5a2a-4de5-84e7-ead18a049c18] Creating image(s)
Nov 24 09:54:13 compute-1 nova_compute[230010]: 2025-11-24 09:54:13.721 230014 DEBUG nova.storage.rbd_utils [None req-d449e86f-227d-4a7b-88d9-0f054f56c99b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] rbd image 4313a8bf-5a2a-4de5-84e7-ead18a049c18_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 24 09:54:13 compute-1 nova_compute[230010]: 2025-11-24 09:54:13.752 230014 DEBUG nova.storage.rbd_utils [None req-d449e86f-227d-4a7b-88d9-0f054f56c99b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] rbd image 4313a8bf-5a2a-4de5-84e7-ead18a049c18_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 24 09:54:13 compute-1 nova_compute[230010]: 2025-11-24 09:54:13.779 230014 DEBUG nova.storage.rbd_utils [None req-d449e86f-227d-4a7b-88d9-0f054f56c99b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] rbd image 4313a8bf-5a2a-4de5-84e7-ead18a049c18_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 24 09:54:13 compute-1 nova_compute[230010]: 2025-11-24 09:54:13.781 230014 DEBUG oslo_concurrency.lockutils [None req-d449e86f-227d-4a7b-88d9-0f054f56c99b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Acquiring lock "2ed5c667523487159c4c4503c82babbc95dbae40" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 09:54:13 compute-1 nova_compute[230010]: 2025-11-24 09:54:13.782 230014 DEBUG oslo_concurrency.lockutils [None req-d449e86f-227d-4a7b-88d9-0f054f56c99b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lock "2ed5c667523487159c4c4503c82babbc95dbae40" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
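
The lock name "2ed5c667..." is not random: nova's image cache keys entries (and the filenames under /var/lib/nova/instances/_base seen later in this log) on the SHA-1 hex digest of the Glance image ID, so concurrent builds from the same image serialize on a single download. A one-liner to check, assuming that convention holds here:

import hashlib

image_id = "6ef14bdf-4f04-4400-8040-4409d9d5271e"  # the Glance image in this build
cache_key = hashlib.sha1(image_id.encode()).hexdigest()
print(cache_key)  # expected to match the lock name and _base filename above
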
Nov 24 09:54:13 compute-1 nova_compute[230010]: 2025-11-24 09:54:13.784 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 09:54:13 compute-1 nova_compute[230010]: 2025-11-24 09:54:13.785 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 09:54:13 compute-1 nova_compute[230010]: 2025-11-24 09:54:13.785 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 09:54:13 compute-1 nova_compute[230010]: 2025-11-24 09:54:13.799 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 09:54:13 compute-1 nova_compute[230010]: 2025-11-24 09:54:13.799 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 09:54:13 compute-1 nova_compute[230010]: 2025-11-24 09:54:13.799 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 09:54:13 compute-1 nova_compute[230010]: 2025-11-24 09:54:13.800 230014 DEBUG nova.compute.resource_tracker [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 24 09:54:13 compute-1 nova_compute[230010]: 2025-11-24 09:54:13.800 230014 DEBUG oslo_concurrency.processutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 09:54:13 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:54:13 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:54:13 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:54:13.915 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:54:14 compute-1 nova_compute[230010]: 2025-11-24 09:54:14.104 230014 DEBUG nova.virt.libvirt.imagebackend [None req-d449e86f-227d-4a7b-88d9-0f054f56c99b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Image locations are: [{'url': 'rbd://84a084c3-61a7-5de7-8207-1f88efa59a64/images/6ef14bdf-4f04-4400-8040-4409d9d5271e/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://84a084c3-61a7-5de7-8207-1f88efa59a64/images/6ef14bdf-4f04-4400-8040-4409d9d5271e/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085
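
Each image location above is an rbd://<fsid>/<pool>/<image>/<snap> URL; a direct RBD clone is only possible when the URL parses cleanly and the fsid matches the local cluster. A toy parser for that shape:

from urllib.parse import urlparse

def parse_rbd_url(url):
    """Split rbd://fsid/pool/image/snap into its four components."""
    p = urlparse(url)
    # netloc carries the cluster fsid; the path carries pool/image/snap
    pool, image, snap = p.path.lstrip("/").split("/")
    return p.netloc, pool, image, snap

print(parse_rbd_url(
    "rbd://84a084c3-61a7-5de7-8207-1f88efa59a64/images/"
    "6ef14bdf-4f04-4400-8040-4409d9d5271e/snap"))
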
Nov 24 09:54:14 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 24 09:54:14 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4102294369' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 09:54:14 compute-1 nova_compute[230010]: 2025-11-24 09:54:14.231 230014 DEBUG oslo_concurrency.processutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.431s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 09:54:14 compute-1 nova_compute[230010]: 2025-11-24 09:54:14.332 230014 WARNING oslo_policy.policy [None req-d449e86f-227d-4a7b-88d9-0f054f56c99b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] JSON formatted policy_file support is deprecated since Victoria release. You need to use YAML format which will be default in future. You can use ``oslopolicy-convert-json-to-yaml`` tool to convert existing JSON-formatted policy file to YAML-formatted in backward compatible way: https://docs.openstack.org/oslo.policy/latest/cli/oslopolicy-convert-json-to-yaml.html.
Nov 24 09:54:14 compute-1 nova_compute[230010]: 2025-11-24 09:54:14.333 230014 WARNING oslo_policy.policy [None req-d449e86f-227d-4a7b-88d9-0f054f56c99b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] JSON formatted policy_file support is deprecated since Victoria release. You need to use YAML format which will be default in future. You can use ``oslopolicy-convert-json-to-yaml`` tool to convert existing JSON-formatted policy file to YAML-formatted in backward compatible way: https://docs.openstack.org/oslo.policy/latest/cli/oslopolicy-convert-json-to-yaml.html.
Nov 24 09:54:14 compute-1 nova_compute[230010]: 2025-11-24 09:54:14.335 230014 DEBUG nova.policy [None req-d449e86f-227d-4a7b-88d9-0f054f56c99b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '43f79ff3105e4372a3c095e8057d4f1f', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '94d069fc040647d5a6e54894eec915fe', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 24 09:54:14 compute-1 nova_compute[230010]: 2025-11-24 09:54:14.377 230014 WARNING nova.virt.libvirt.driver [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 24 09:54:14 compute-1 nova_compute[230010]: 2025-11-24 09:54:14.378 230014 DEBUG nova.compute.resource_tracker [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=5257MB free_disk=59.942718505859375GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 24 09:54:14 compute-1 nova_compute[230010]: 2025-11-24 09:54:14.378 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 09:54:14 compute-1 nova_compute[230010]: 2025-11-24 09:54:14.378 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 09:54:14 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:54:14 compute-1 nova_compute[230010]: 2025-11-24 09:54:14.441 230014 DEBUG nova.compute.resource_tracker [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Instance 4313a8bf-5a2a-4de5-84e7-ead18a049c18 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 24 09:54:14 compute-1 nova_compute[230010]: 2025-11-24 09:54:14.441 230014 DEBUG nova.compute.resource_tracker [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 24 09:54:14 compute-1 nova_compute[230010]: 2025-11-24 09:54:14.442 230014 DEBUG nova.compute.resource_tracker [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
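
The final view reconciles with the claim a second earlier: used_ram = 512 MB host reservation + 128 MB for the one claimed instance = 640 MB, and used_disk is its 1 GB root disk. A quick arithmetic check:

reserved_ram_mb = 512            # from the inventory lines above
instance_ram_mb = [128]          # the single m1.nano claim
used_ram = reserved_ram_mb + sum(instance_ram_mb)
assert used_ram == 640           # matches used_ram=640MB in the log line
print(f"used_ram={used_ram}MB used_disk=1GB used_vcpus=1")
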
Nov 24 09:54:14 compute-1 nova_compute[230010]: 2025-11-24 09:54:14.479 230014 DEBUG oslo_concurrency.processutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 09:54:14 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:54:14 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.002000047s ======
Nov 24 09:54:14 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:54:14.504 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000047s
Nov 24 09:54:14 compute-1 ceph-mon[80009]: from='client.? 192.168.122.101:0/4102294369' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 09:54:14 compute-1 ceph-mon[80009]: pgmap v820: 353 pgs: 353 active+clean; 121 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 3.3 KiB/s wr, 1 op/s
Nov 24 09:54:14 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 24 09:54:14 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3060430779' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 09:54:14 compute-1 nova_compute[230010]: 2025-11-24 09:54:14.934 230014 DEBUG oslo_concurrency.processutils [None req-d449e86f-227d-4a7b-88d9-0f054f56c99b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/2ed5c667523487159c4c4503c82babbc95dbae40.part --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 09:54:14 compute-1 nova_compute[230010]: 2025-11-24 09:54:14.953 230014 DEBUG oslo_concurrency.processutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.474s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 09:54:14 compute-1 nova_compute[230010]: 2025-11-24 09:54:14.961 230014 DEBUG nova.compute.provider_tree [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Updating inventory in ProviderTree for provider 1b7b0f22-dba8-42a8-9de3-763c9152946e with inventory: {'MEMORY_MB': {'total': 7680, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Nov 24 09:54:14 compute-1 nova_compute[230010]: 2025-11-24 09:54:14.996 230014 DEBUG oslo_concurrency.processutils [None req-d449e86f-227d-4a7b-88d9-0f054f56c99b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/2ed5c667523487159c4c4503c82babbc95dbae40.part --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 09:54:14 compute-1 nova_compute[230010]: 2025-11-24 09:54:14.997 230014 DEBUG nova.virt.images [None req-d449e86f-227d-4a7b-88d9-0f054f56c99b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] 6ef14bdf-4f04-4400-8040-4409d9d5271e was qcow2, converting to raw fetch_to_raw /usr/lib/python3.9/site-packages/nova/virt/images.py:242
Nov 24 09:54:14 compute-1 nova_compute[230010]: 2025-11-24 09:54:14.999 230014 DEBUG nova.privsep.utils [None req-d449e86f-227d-4a7b-88d9-0f054f56c99b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63
Nov 24 09:54:14 compute-1 nova_compute[230010]: 2025-11-24 09:54:14.999 230014 DEBUG oslo_concurrency.processutils [None req-d449e86f-227d-4a7b-88d9-0f054f56c99b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Running cmd (subprocess): qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/2ed5c667523487159c4c4503c82babbc95dbae40.part /var/lib/nova/instances/_base/2ed5c667523487159c4c4503c82babbc95dbae40.converted execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 09:54:15 compute-1 nova_compute[230010]: 2025-11-24 09:54:15.014 230014 ERROR nova.scheduler.client.report [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] [req-9793cf5d-762b-438f-baff-1525d77653cb] Failed to update inventory to [{'MEMORY_MB': {'total': 7680, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}}] for resource provider with UUID 1b7b0f22-dba8-42a8-9de3-763c9152946e.  Got 409: {"errors": [{"status": 409, "title": "Conflict", "detail": "There was a conflict when trying to complete your request.\n\n resource provider generation conflict  ", "code": "placement.concurrent_update", "request_id": "req-9793cf5d-762b-438f-baff-1525d77653cb"}]}
Nov 24 09:54:15 compute-1 nova_compute[230010]: 2025-11-24 09:54:15.030 230014 DEBUG nova.scheduler.client.report [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Refreshing inventories for resource provider 1b7b0f22-dba8-42a8-9de3-763c9152946e _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Nov 24 09:54:15 compute-1 nova_compute[230010]: 2025-11-24 09:54:15.048 230014 DEBUG nova.scheduler.client.report [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Updating ProviderTree inventory for provider 1b7b0f22-dba8-42a8-9de3-763c9152946e from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Nov 24 09:54:15 compute-1 nova_compute[230010]: 2025-11-24 09:54:15.048 230014 DEBUG nova.compute.provider_tree [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Updating inventory in ProviderTree for provider 1b7b0f22-dba8-42a8-9de3-763c9152946e with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Nov 24 09:54:15 compute-1 nova_compute[230010]: 2025-11-24 09:54:15.061 230014 DEBUG nova.scheduler.client.report [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Refreshing aggregate associations for resource provider 1b7b0f22-dba8-42a8-9de3-763c9152946e, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Nov 24 09:54:15 compute-1 nova_compute[230010]: 2025-11-24 09:54:15.080 230014 DEBUG nova.scheduler.client.report [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Refreshing trait associations for resource provider 1b7b0f22-dba8-42a8-9de3-763c9152946e, traits: COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_F16C,HW_CPU_X86_AVX,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_SHA,COMPUTE_STORAGE_BUS_SATA,COMPUTE_RESCUE_BFV,HW_CPU_X86_ABM,HW_CPU_X86_BMI,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_AVX2,COMPUTE_STORAGE_BUS_USB,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_SSE41,COMPUTE_NODE,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_SSE4A,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_VOLUME_EXTEND,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_MMX,HW_CPU_X86_SSE,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_CLMUL,HW_CPU_X86_SSE2,HW_CPU_X86_AMD_SVM,HW_CPU_X86_SSE42,COMPUTE_ACCELERATORS,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_STORAGE_BUS_IDE,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_SVM,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_FMA3,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_BMI2,HW_CPU_X86_AESNI,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_TRUSTED_CERTS,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_STORAGE_BUS_FDC _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
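
The 409 at 09:54:15.014 is placement's optimistic concurrency: every PUT to a provider's inventories carries the provider generation, a stale generation gets placement.concurrent_update back, and the client re-reads inventories, aggregates, and traits (the three refresh lines above) before retrying. A minimal sketch of that loop; the endpoint, token, and microversion header are assumptions, and the retry shape is the point:

import requests

PLACEMENT = "http://placement.example.com"      # hypothetical endpoint
HEADERS = {"x-auth-token": "ADMIN_TOKEN",        # hypothetical credentials
           "openstack-api-version": "placement 1.26"}
RP = "1b7b0f22-dba8-42a8-9de3-763c9152946e"

def put_inventory(inventories):
    url = f"{PLACEMENT}/resource_providers/{RP}/inventories"
    for _ in range(3):
        # Re-read the provider generation before each attempt.
        gen = requests.get(url, headers=HEADERS).json()["resource_provider_generation"]
        body = {"resource_provider_generation": gen, "inventories": inventories}
        resp = requests.put(url, json=body, headers=HEADERS)
        if resp.status_code != 409:
            return resp
        # placement.concurrent_update: someone moved the generation; loop and retry.
    raise RuntimeError("generation conflict persisted after retries")
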
Nov 24 09:54:15 compute-1 nova_compute[230010]: 2025-11-24 09:54:15.112 230014 DEBUG oslo_concurrency.processutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 09:54:15 compute-1 nova_compute[230010]: 2025-11-24 09:54:15.151 230014 DEBUG oslo_concurrency.processutils [None req-d449e86f-227d-4a7b-88d9-0f054f56c99b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] CMD "qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/2ed5c667523487159c4c4503c82babbc95dbae40.part /var/lib/nova/instances/_base/2ed5c667523487159c4c4503c82babbc95dbae40.converted" returned: 0 in 0.152s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 09:54:15 compute-1 nova_compute[230010]: 2025-11-24 09:54:15.156 230014 DEBUG oslo_concurrency.processutils [None req-d449e86f-227d-4a7b-88d9-0f054f56c99b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/2ed5c667523487159c4c4503c82babbc95dbae40.converted --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 09:54:15 compute-1 nova_compute[230010]: 2025-11-24 09:54:15.209 230014 DEBUG oslo_concurrency.processutils [None req-d449e86f-227d-4a7b-88d9-0f054f56c99b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/2ed5c667523487159c4c4503c82babbc95dbae40.converted --force-share --output=json" returned: 0 in 0.053s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 09:54:15 compute-1 nova_compute[230010]: 2025-11-24 09:54:15.210 230014 DEBUG oslo_concurrency.lockutils [None req-d449e86f-227d-4a7b-88d9-0f054f56c99b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lock "2ed5c667523487159c4c4503c82babbc95dbae40" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 1.428s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
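
The cache-fill sequence just released (held 1.428s) inspected the downloaded .part file with qemu-img info, converted qcow2 to raw with caching disabled, and re-inspected the result. The same flow, scripted against the logged paths, minus the prlimit wrapper nova adds (assumes the .part file is present on disk):

import json
import subprocess

base = "/var/lib/nova/instances/_base/2ed5c667523487159c4c4503c82babbc95dbae40"

def img_info(path):
    # Same inspection as the logged command.
    out = subprocess.check_output(
        ["qemu-img", "info", path, "--force-share", "--output=json"])
    return json.loads(out)

if img_info(base + ".part")["format"] == "qcow2":
    # Convert with host caching disabled (-t none), as in the logged command.
    subprocess.check_call(["qemu-img", "convert", "-t", "none",
                           "-O", "raw", "-f", "qcow2",
                           base + ".part", base + ".converted"])
    assert img_info(base + ".converted")["format"] == "raw"
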
Nov 24 09:54:15 compute-1 nova_compute[230010]: 2025-11-24 09:54:15.240 230014 DEBUG nova.storage.rbd_utils [None req-d449e86f-227d-4a7b-88d9-0f054f56c99b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] rbd image 4313a8bf-5a2a-4de5-84e7-ead18a049c18_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 24 09:54:15 compute-1 nova_compute[230010]: 2025-11-24 09:54:15.246 230014 DEBUG oslo_concurrency.processutils [None req-d449e86f-227d-4a7b-88d9-0f054f56c99b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/2ed5c667523487159c4c4503c82babbc95dbae40 4313a8bf-5a2a-4de5-84e7-ead18a049c18_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 09:54:15 compute-1 nova_compute[230010]: 2025-11-24 09:54:15.369 230014 DEBUG nova.network.neutron [None req-d449e86f-227d-4a7b-88d9-0f054f56c99b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 4313a8bf-5a2a-4de5-84e7-ead18a049c18] Successfully created port: 31962c69-e86c-4431-b40a-e84cb6d9b71d _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 24 09:54:15 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 09:54:15 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:54:15 compute-1 nova_compute[230010]: 2025-11-24 09:54:15.508 230014 DEBUG oslo_concurrency.processutils [None req-d449e86f-227d-4a7b-88d9-0f054f56c99b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/2ed5c667523487159c4c4503c82babbc95dbae40 4313a8bf-5a2a-4de5-84e7-ead18a049c18_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.262s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 09:54:15 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 24 09:54:15 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2342644203' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 09:54:15 compute-1 ceph-mon[80009]: from='client.? 192.168.122.101:0/3060430779' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 09:54:15 compute-1 ceph-mon[80009]: from='client.? 192.168.122.100:0/3640733103' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 09:54:15 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:54:15 compute-1 nova_compute[230010]: 2025-11-24 09:54:15.570 230014 DEBUG oslo_concurrency.processutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 09:54:15 compute-1 nova_compute[230010]: 2025-11-24 09:54:15.576 230014 DEBUG nova.storage.rbd_utils [None req-d449e86f-227d-4a7b-88d9-0f054f56c99b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] resizing rbd image 4313a8bf-5a2a-4de5-84e7-ead18a049c18_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
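
With no clone path taken, nova imported the raw base file into the vms pool (the rbd import at 09:54:15.246, done in 0.262s) and is now growing the image to the flavor's 1 GiB root disk. The same two steps scripted; the import mirrors the logged command, while the resize uses the rbd CLI rather than the librbd binding nova actually calls:

import subprocess

disk = "4313a8bf-5a2a-4de5-84e7-ead18a049c18_disk"
base = "/var/lib/nova/instances/_base/2ed5c667523487159c4c4503c82babbc95dbae40"

# Import the raw base image into the vms pool, exactly as logged.
subprocess.check_call(
    ["rbd", "import", "--pool", "vms", base, disk,
     "--image-format=2", "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"])

# Grow it to the 1 GiB (1073741824-byte) root disk from the resize log line.
subprocess.check_call(
    ["rbd", "resize", "--size", "1G", f"vms/{disk}",
     "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"])
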
Nov 24 09:54:15 compute-1 nova_compute[230010]: 2025-11-24 09:54:15.605 230014 DEBUG nova.compute.provider_tree [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Updating inventory in ProviderTree for provider 1b7b0f22-dba8-42a8-9de3-763c9152946e with inventory: {'MEMORY_MB': {'total': 7680, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Nov 24 09:54:15 compute-1 nova_compute[230010]: 2025-11-24 09:54:15.667 230014 DEBUG nova.objects.instance [None req-d449e86f-227d-4a7b-88d9-0f054f56c99b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lazy-loading 'migration_context' on Instance uuid 4313a8bf-5a2a-4de5-84e7-ead18a049c18 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 24 09:54:15 compute-1 nova_compute[230010]: 2025-11-24 09:54:15.669 230014 DEBUG nova.scheduler.client.report [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Updated inventory for provider 1b7b0f22-dba8-42a8-9de3-763c9152946e with generation 3 in Placement from set_inventory_for_provider using data: {'MEMORY_MB': {'total': 7680, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:957
Nov 24 09:54:15 compute-1 nova_compute[230010]: 2025-11-24 09:54:15.669 230014 DEBUG nova.compute.provider_tree [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Updating resource provider 1b7b0f22-dba8-42a8-9de3-763c9152946e generation from 3 to 4 during operation: update_inventory _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164
Nov 24 09:54:15 compute-1 nova_compute[230010]: 2025-11-24 09:54:15.669 230014 DEBUG nova.compute.provider_tree [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Updating inventory in ProviderTree for provider 1b7b0f22-dba8-42a8-9de3-763c9152946e with inventory: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Nov 24 09:54:15 compute-1 nova_compute[230010]: 2025-11-24 09:54:15.688 230014 DEBUG nova.virt.libvirt.driver [None req-d449e86f-227d-4a7b-88d9-0f054f56c99b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 4313a8bf-5a2a-4de5-84e7-ead18a049c18] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 24 09:54:15 compute-1 nova_compute[230010]: 2025-11-24 09:54:15.689 230014 DEBUG nova.virt.libvirt.driver [None req-d449e86f-227d-4a7b-88d9-0f054f56c99b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 4313a8bf-5a2a-4de5-84e7-ead18a049c18] Ensure instance console log exists: /var/lib/nova/instances/4313a8bf-5a2a-4de5-84e7-ead18a049c18/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 24 09:54:15 compute-1 nova_compute[230010]: 2025-11-24 09:54:15.689 230014 DEBUG oslo_concurrency.lockutils [None req-d449e86f-227d-4a7b-88d9-0f054f56c99b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 09:54:15 compute-1 nova_compute[230010]: 2025-11-24 09:54:15.689 230014 DEBUG oslo_concurrency.lockutils [None req-d449e86f-227d-4a7b-88d9-0f054f56c99b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 09:54:15 compute-1 nova_compute[230010]: 2025-11-24 09:54:15.690 230014 DEBUG oslo_concurrency.lockutils [None req-d449e86f-227d-4a7b-88d9-0f054f56c99b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 09:54:15 compute-1 nova_compute[230010]: 2025-11-24 09:54:15.750 230014 DEBUG nova.compute.resource_tracker [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 24 09:54:15 compute-1 nova_compute[230010]: 2025-11-24 09:54:15.750 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.372s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 09:54:15 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:54:15 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:54:15 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:54:15.916 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:54:16 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:54:16 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:54:16 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:54:16.507 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:54:16 compute-1 ceph-mon[80009]: from='client.? 192.168.122.101:0/2342644203' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 09:54:16 compute-1 ceph-mon[80009]: from='client.? 192.168.122.100:0/2588724526' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 09:54:16 compute-1 ceph-mon[80009]: pgmap v821: 353 pgs: 353 active+clean; 121 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 3.3 KiB/s wr, 1 op/s
Nov 24 09:54:16 compute-1 nova_compute[230010]: 2025-11-24 09:54:16.690 230014 DEBUG nova.network.neutron [None req-d449e86f-227d-4a7b-88d9-0f054f56c99b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 4313a8bf-5a2a-4de5-84e7-ead18a049c18] Successfully updated port: 31962c69-e86c-4431-b40a-e84cb6d9b71d _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 24 09:54:16 compute-1 nova_compute[230010]: 2025-11-24 09:54:16.702 230014 DEBUG oslo_concurrency.lockutils [None req-d449e86f-227d-4a7b-88d9-0f054f56c99b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Acquiring lock "refresh_cache-4313a8bf-5a2a-4de5-84e7-ead18a049c18" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 24 09:54:16 compute-1 nova_compute[230010]: 2025-11-24 09:54:16.703 230014 DEBUG oslo_concurrency.lockutils [None req-d449e86f-227d-4a7b-88d9-0f054f56c99b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Acquired lock "refresh_cache-4313a8bf-5a2a-4de5-84e7-ead18a049c18" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 24 09:54:16 compute-1 nova_compute[230010]: 2025-11-24 09:54:16.703 230014 DEBUG nova.network.neutron [None req-d449e86f-227d-4a7b-88d9-0f054f56c99b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 4313a8bf-5a2a-4de5-84e7-ead18a049c18] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 24 09:54:16 compute-1 nova_compute[230010]: 2025-11-24 09:54:16.730 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 09:54:16 compute-1 nova_compute[230010]: 2025-11-24 09:54:16.731 230014 DEBUG nova.compute.manager [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 24 09:54:16 compute-1 nova_compute[230010]: 2025-11-24 09:54:16.731 230014 DEBUG nova.compute.manager [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 24 09:54:16 compute-1 nova_compute[230010]: 2025-11-24 09:54:16.751 230014 DEBUG nova.compute.manager [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] [instance: 4313a8bf-5a2a-4de5-84e7-ead18a049c18] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Nov 24 09:54:16 compute-1 nova_compute[230010]: 2025-11-24 09:54:16.751 230014 DEBUG nova.compute.manager [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 24 09:54:16 compute-1 nova_compute[230010]: 2025-11-24 09:54:16.752 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 09:54:16 compute-1 nova_compute[230010]: 2025-11-24 09:54:16.764 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 09:54:16 compute-1 nova_compute[230010]: 2025-11-24 09:54:16.765 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 09:54:16 compute-1 nova_compute[230010]: 2025-11-24 09:54:16.785 230014 DEBUG nova.compute.manager [req-b779d75a-6f69-4bca-b923-6ca261b611d4 req-12557be7-8c18-407f-9a5f-78e530f41f52 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 4313a8bf-5a2a-4de5-84e7-ead18a049c18] Received event network-changed-31962c69-e86c-4431-b40a-e84cb6d9b71d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 24 09:54:16 compute-1 nova_compute[230010]: 2025-11-24 09:54:16.785 230014 DEBUG nova.compute.manager [req-b779d75a-6f69-4bca-b923-6ca261b611d4 req-12557be7-8c18-407f-9a5f-78e530f41f52 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 4313a8bf-5a2a-4de5-84e7-ead18a049c18] Refreshing instance network info cache due to event network-changed-31962c69-e86c-4431-b40a-e84cb6d9b71d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 24 09:54:16 compute-1 nova_compute[230010]: 2025-11-24 09:54:16.785 230014 DEBUG oslo_concurrency.lockutils [req-b779d75a-6f69-4bca-b923-6ca261b611d4 req-12557be7-8c18-407f-9a5f-78e530f41f52 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Acquiring lock "refresh_cache-4313a8bf-5a2a-4de5-84e7-ead18a049c18" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 24 09:54:16 compute-1 nova_compute[230010]: 2025-11-24 09:54:16.844 230014 DEBUG nova.network.neutron [None req-d449e86f-227d-4a7b-88d9-0f054f56c99b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 4313a8bf-5a2a-4de5-84e7-ead18a049c18] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 24 09:54:17 compute-1 ceph-mon[80009]: from='client.? 192.168.122.100:0/3524898620' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 09:54:17 compute-1 ceph-mon[80009]: from='client.? 192.168.122.102:0/1456248496' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 09:54:17 compute-1 nova_compute[230010]: 2025-11-24 09:54:17.718 230014 DEBUG nova.network.neutron [None req-d449e86f-227d-4a7b-88d9-0f054f56c99b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 4313a8bf-5a2a-4de5-84e7-ead18a049c18] Updating instance_info_cache with network_info: [{"id": "31962c69-e86c-4431-b40a-e84cb6d9b71d", "address": "fa:16:3e:69:46:4d", "network": {"id": "8e927f01-795d-4fd1-bd00-bd898db487a3", "bridge": "br-int", "label": "tempest-network-smoke--503305566", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.22", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94d069fc040647d5a6e54894eec915fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap31962c69-e8", "ovs_interfaceid": "31962c69-e86c-4431-b40a-e84cb6d9b71d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 24 09:54:17 compute-1 nova_compute[230010]: 2025-11-24 09:54:17.740 230014 DEBUG oslo_concurrency.lockutils [None req-d449e86f-227d-4a7b-88d9-0f054f56c99b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Releasing lock "refresh_cache-4313a8bf-5a2a-4de5-84e7-ead18a049c18" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 24 09:54:17 compute-1 nova_compute[230010]: 2025-11-24 09:54:17.740 230014 DEBUG nova.compute.manager [None req-d449e86f-227d-4a7b-88d9-0f054f56c99b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 4313a8bf-5a2a-4de5-84e7-ead18a049c18] Instance network_info: |[{"id": "31962c69-e86c-4431-b40a-e84cb6d9b71d", "address": "fa:16:3e:69:46:4d", "network": {"id": "8e927f01-795d-4fd1-bd00-bd898db487a3", "bridge": "br-int", "label": "tempest-network-smoke--503305566", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.22", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94d069fc040647d5a6e54894eec915fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap31962c69-e8", "ovs_interfaceid": "31962c69-e86c-4431-b40a-e84cb6d9b71d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
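
The network_info blob cached above is what the rest of the build consumes: VIF id, MAC, tap device name, fixed IPs, and MTU. A small sketch pulling those fields out of the structure, using a copy trimmed to just the keys it reads:

import json

# network_info as logged above (trimmed to the fields this sketch reads).
network_info = json.loads("""[{"id": "31962c69-e86c-4431-b40a-e84cb6d9b71d",
  "address": "fa:16:3e:69:46:4d",
  "network": {"subnets": [{"cidr": "10.100.0.16/28",
    "ips": [{"address": "10.100.0.22", "type": "fixed"}]}],
    "meta": {"mtu": 1442}},
  "devname": "tap31962c69-e8"}]""")

for vif in network_info:
    fixed = [ip["address"]
             for subnet in vif["network"]["subnets"]
             for ip in subnet["ips"] if ip["type"] == "fixed"]
    print(vif["devname"], vif["address"], fixed,
          "mtu", vif["network"]["meta"]["mtu"])
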
Nov 24 09:54:17 compute-1 nova_compute[230010]: 2025-11-24 09:54:17.740 230014 DEBUG oslo_concurrency.lockutils [req-b779d75a-6f69-4bca-b923-6ca261b611d4 req-12557be7-8c18-407f-9a5f-78e530f41f52 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Acquired lock "refresh_cache-4313a8bf-5a2a-4de5-84e7-ead18a049c18" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 24 09:54:17 compute-1 nova_compute[230010]: 2025-11-24 09:54:17.741 230014 DEBUG nova.network.neutron [req-b779d75a-6f69-4bca-b923-6ca261b611d4 req-12557be7-8c18-407f-9a5f-78e530f41f52 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 4313a8bf-5a2a-4de5-84e7-ead18a049c18] Refreshing network info cache for port 31962c69-e86c-4431-b40a-e84cb6d9b71d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 24 09:54:17 compute-1 nova_compute[230010]: 2025-11-24 09:54:17.743 230014 DEBUG nova.virt.libvirt.driver [None req-d449e86f-227d-4a7b-88d9-0f054f56c99b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 4313a8bf-5a2a-4de5-84e7-ead18a049c18] Start _get_guest_xml network_info=[{"id": "31962c69-e86c-4431-b40a-e84cb6d9b71d", "address": "fa:16:3e:69:46:4d", "network": {"id": "8e927f01-795d-4fd1-bd00-bd898db487a3", "bridge": "br-int", "label": "tempest-network-smoke--503305566", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.22", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94d069fc040647d5a6e54894eec915fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap31962c69-e8", "ovs_interfaceid": "31962c69-e86c-4431-b40a-e84cb6d9b71d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-24T09:52:37Z,direct_url=<?>,disk_format='qcow2',id=6ef14bdf-4f04-4400-8040-4409d9d5271e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='cf636babb68a4ebe9bf137d3fe0e4c0c',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-24T09:52:41Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'encryption_options': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'boot_index': 0, 'size': 0, 'disk_bus': 'virtio', 'encrypted': False, 'encryption_secret_uuid': None, 'image_id': '6ef14bdf-4f04-4400-8040-4409d9d5271e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 24 09:54:17 compute-1 nova_compute[230010]: 2025-11-24 09:54:17.747 230014 WARNING nova.virt.libvirt.driver [None req-d449e86f-227d-4a7b-88d9-0f054f56c99b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 24 09:54:17 compute-1 nova_compute[230010]: 2025-11-24 09:54:17.753 230014 DEBUG nova.virt.libvirt.host [None req-d449e86f-227d-4a7b-88d9-0f054f56c99b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 24 09:54:17 compute-1 nova_compute[230010]: 2025-11-24 09:54:17.753 230014 DEBUG nova.virt.libvirt.host [None req-d449e86f-227d-4a7b-88d9-0f054f56c99b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 24 09:54:17 compute-1 nova_compute[230010]: 2025-11-24 09:54:17.759 230014 DEBUG nova.virt.libvirt.host [None req-d449e86f-227d-4a7b-88d9-0f054f56c99b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 24 09:54:17 compute-1 nova_compute[230010]: 2025-11-24 09:54:17.760 230014 DEBUG nova.virt.libvirt.host [None req-d449e86f-227d-4a7b-88d9-0f054f56c99b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
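
The two probes above first look for a legacy cgroups-v1 cpu controller (absent on this RHEL 9 host) and then fall back to cgroups v2, where the controller is found. A minimal sketch of the v2 check, assuming the standard /sys/fs/cgroup mount point; this shows the general mechanism, not Nova's exact code:

    # Sketch: detect the cgroups-v2 "cpu" controller the probe above finds.
    # Assumes the unified hierarchy is mounted at the default location.
    CGROUP_V2_CONTROLLERS = "/sys/fs/cgroup/cgroup.controllers"

    def has_cgroupsv2_cpu_controller() -> bool:
        try:
            with open(CGROUP_V2_CONTROLLERS) as f:
                return "cpu" in f.read().split()
        except FileNotFoundError:
            # No cgroup2 hierarchy mounted at the default location.
            return False

    if __name__ == "__main__":
        print(has_cgroupsv2_cpu_controller())
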
Nov 24 09:54:17 compute-1 nova_compute[230010]: 2025-11-24 09:54:17.760 230014 DEBUG nova.virt.libvirt.driver [None req-d449e86f-227d-4a7b-88d9-0f054f56c99b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 24 09:54:17 compute-1 nova_compute[230010]: 2025-11-24 09:54:17.761 230014 DEBUG nova.virt.hardware [None req-d449e86f-227d-4a7b-88d9-0f054f56c99b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-24T09:52:36Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='4a5d03ad-925b-45f1-89bd-f1325f9f3292',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-24T09:52:37Z,direct_url=<?>,disk_format='qcow2',id=6ef14bdf-4f04-4400-8040-4409d9d5271e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='cf636babb68a4ebe9bf137d3fe0e4c0c',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-24T09:52:41Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 24 09:54:17 compute-1 nova_compute[230010]: 2025-11-24 09:54:17.761 230014 DEBUG nova.virt.hardware [None req-d449e86f-227d-4a7b-88d9-0f054f56c99b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 24 09:54:17 compute-1 nova_compute[230010]: 2025-11-24 09:54:17.761 230014 DEBUG nova.virt.hardware [None req-d449e86f-227d-4a7b-88d9-0f054f56c99b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 24 09:54:17 compute-1 nova_compute[230010]: 2025-11-24 09:54:17.761 230014 DEBUG nova.virt.hardware [None req-d449e86f-227d-4a7b-88d9-0f054f56c99b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 24 09:54:17 compute-1 nova_compute[230010]: 2025-11-24 09:54:17.762 230014 DEBUG nova.virt.hardware [None req-d449e86f-227d-4a7b-88d9-0f054f56c99b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 24 09:54:17 compute-1 nova_compute[230010]: 2025-11-24 09:54:17.762 230014 DEBUG nova.virt.hardware [None req-d449e86f-227d-4a7b-88d9-0f054f56c99b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 24 09:54:17 compute-1 nova_compute[230010]: 2025-11-24 09:54:17.762 230014 DEBUG nova.virt.hardware [None req-d449e86f-227d-4a7b-88d9-0f054f56c99b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 24 09:54:17 compute-1 nova_compute[230010]: 2025-11-24 09:54:17.762 230014 DEBUG nova.virt.hardware [None req-d449e86f-227d-4a7b-88d9-0f054f56c99b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 24 09:54:17 compute-1 nova_compute[230010]: 2025-11-24 09:54:17.762 230014 DEBUG nova.virt.hardware [None req-d449e86f-227d-4a7b-88d9-0f054f56c99b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 24 09:54:17 compute-1 nova_compute[230010]: 2025-11-24 09:54:17.763 230014 DEBUG nova.virt.hardware [None req-d449e86f-227d-4a7b-88d9-0f054f56c99b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 24 09:54:17 compute-1 nova_compute[230010]: 2025-11-24 09:54:17.763 230014 DEBUG nova.virt.hardware [None req-d449e86f-227d-4a7b-88d9-0f054f56c99b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
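
The 0:0:0 limits and preferences above mean neither the flavor nor the image constrains the guest CPU topology, so the driver enumerates every (sockets, cores, threads) factorization of the vCPU count under the 65536-per-level ceiling and sorts the candidates. A simplified sketch of that enumeration (an illustration of the idea, not Nova's implementation, which also honors preferences and hyperthread policies):

    # Sketch: enumerate (sockets, cores, threads) triples with s*c*t == vcpus,
    # subject to per-level maxima, as the log lines above describe.
    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536,
                            max_threads=65536):
        topos = []
        for s in range(1, min(vcpus, max_sockets) + 1):
            if vcpus % s:
                continue
            rem = vcpus // s
            for c in range(1, min(rem, max_cores) + 1):
                if rem % c:
                    continue
                t = rem // c
                if t <= max_threads:
                    topos.append((s, c, t))
        return topos

    # For the 1-vCPU m1.nano flavor this yields exactly one candidate,
    # matching "Got 1 possible topologies" above.
    print(possible_topologies(1))  # -> [(1, 1, 1)]
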
Nov 24 09:54:17 compute-1 nova_compute[230010]: 2025-11-24 09:54:17.766 230014 DEBUG nova.privsep.utils [None req-d449e86f-227d-4a7b-88d9-0f054f56c99b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63
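
The direct-I/O probe above decides whether files under /var/lib/nova/instances can be written with O_DIRECT, which feeds into the cache mode Nova picks for disks. A rough sketch of such a probe, assuming a 4096-byte alignment (the real check lives in nova.privsep.utils.supports_direct_io; treat the details below as an approximation):

    import mmap
    import os

    def supports_direct_io(dirpath, align=4096):
        """Probe whether `dirpath` accepts an O_DIRECT write of one aligned block."""
        testfile = os.path.join(dirpath, ".directio.test")
        buf = mmap.mmap(-1, align)  # anonymous mmap => page-aligned, zero-filled
        try:
            fd = os.open(testfile, os.O_CREAT | os.O_WRONLY | os.O_DIRECT)
            try:
                os.write(fd, buf)  # O_DIRECT needs aligned buffer and length
                return True
            except OSError:
                return False
            finally:
                os.close(fd)
        except OSError:
            return False  # e.g. tmpfs rejects O_DIRECT at open() time
        finally:
            buf.close()
            try:
                os.remove(testfile)
            except OSError:
                pass

    print(supports_direct_io("/var/lib/nova/instances"))
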
Nov 24 09:54:17 compute-1 nova_compute[230010]: 2025-11-24 09:54:17.767 230014 DEBUG oslo_concurrency.processutils [None req-d449e86f-227d-4a7b-88d9-0f054f56c99b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 09:54:17 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:54:17 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:54:17 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:54:17.919 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:54:18 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Nov 24 09:54:18 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2473323629' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 24 09:54:18 compute-1 nova_compute[230010]: 2025-11-24 09:54:18.232 230014 DEBUG oslo_concurrency.processutils [None req-d449e86f-227d-4a7b-88d9-0f054f56c99b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.466s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
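
The mon dump consumed here is what supplies the three monitor <host> entries that later appear in the RBD disk sources of the guest XML. Run out of band, the same command can be parsed with nothing but the standard library; the JSON field names below follow the usual ceph monmap dump and should be treated as an assumption for other releases:

    import json
    import subprocess

    # Same command the log shows oslo_concurrency running above.
    out = subprocess.check_output([
        "ceph", "mon", "dump", "--format=json",
        "--id", "openstack", "--conf", "/etc/ceph/ceph.conf",
    ])
    for mon in json.loads(out)["mons"]:
        # "addr" is the legacy v1 endpoint; newer releases also expose
        # "public_addrs" with v1/v2 pairs.
        print(mon["name"], mon.get("addr"))
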
Nov 24 09:54:18 compute-1 nova_compute[230010]: 2025-11-24 09:54:18.258 230014 DEBUG nova.storage.rbd_utils [None req-d449e86f-227d-4a7b-88d9-0f054f56c99b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] rbd image 4313a8bf-5a2a-4de5-84e7-ead18a049c18_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 24 09:54:18 compute-1 nova_compute[230010]: 2025-11-24 09:54:18.261 230014 DEBUG oslo_concurrency.processutils [None req-d449e86f-227d-4a7b-88d9-0f054f56c99b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 09:54:18 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:54:18 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:54:18 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:54:18.510 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:54:18 compute-1 ceph-mon[80009]: from='client.? 192.168.122.102:0/3790276299' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 09:54:18 compute-1 ceph-mon[80009]: from='client.? 192.168.122.101:0/2473323629' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 24 09:54:18 compute-1 ceph-mon[80009]: pgmap v822: 353 pgs: 353 active+clean; 167 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 35 op/s
Nov 24 09:54:18 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Nov 24 09:54:18 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3807305221' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 24 09:54:18 compute-1 nova_compute[230010]: 2025-11-24 09:54:18.687 230014 DEBUG oslo_concurrency.processutils [None req-d449e86f-227d-4a7b-88d9-0f054f56c99b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.426s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 09:54:18 compute-1 nova_compute[230010]: 2025-11-24 09:54:18.689 230014 DEBUG nova.virt.libvirt.vif [None req-d449e86f-227d-4a7b-88d9-0f054f56c99b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-24T09:54:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-904956127',display_name='tempest-TestNetworkBasicOps-server-904956127',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-904956127',id=2,image_ref='6ef14bdf-4f04-4400-8040-4409d9d5271e',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBk6JkAuduqittiDGA4pBhSCzmrjSnKU2daRXm5XDAaZpUlbHNfHVDmOyWJWR78b4GrvBoMlHYEMPqcBJQA/sKOhpsOzfkRRFgAuDlkN09WkiLcyZB4s6iUYsG2XLZZzXw==',key_name='tempest-TestNetworkBasicOps-831372657',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='94d069fc040647d5a6e54894eec915fe',ramdisk_id='',reservation_id='r-gukftcea',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6ef14bdf-4f04-4400-8040-4409d9d5271e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1844071378',owner_user_name='tempest-TestNetworkBasicOps-1844071378-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-24T09:54:13Z,user_data=None,user_id='43f79ff3105e4372a3c095e8057d4f1f',uuid=4313a8bf-5a2a-4de5-84e7-ead18a049c18,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "31962c69-e86c-4431-b40a-e84cb6d9b71d", "address": "fa:16:3e:69:46:4d", "network": {"id": "8e927f01-795d-4fd1-bd00-bd898db487a3", "bridge": "br-int", "label": "tempest-network-smoke--503305566", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.22", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94d069fc040647d5a6e54894eec915fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap31962c69-e8", "ovs_interfaceid": "31962c69-e86c-4431-b40a-e84cb6d9b71d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 24 09:54:18 compute-1 nova_compute[230010]: 2025-11-24 09:54:18.689 230014 DEBUG nova.network.os_vif_util [None req-d449e86f-227d-4a7b-88d9-0f054f56c99b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Converting VIF {"id": "31962c69-e86c-4431-b40a-e84cb6d9b71d", "address": "fa:16:3e:69:46:4d", "network": {"id": "8e927f01-795d-4fd1-bd00-bd898db487a3", "bridge": "br-int", "label": "tempest-network-smoke--503305566", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.22", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94d069fc040647d5a6e54894eec915fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap31962c69-e8", "ovs_interfaceid": "31962c69-e86c-4431-b40a-e84cb6d9b71d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 24 09:54:18 compute-1 nova_compute[230010]: 2025-11-24 09:54:18.690 230014 DEBUG nova.network.os_vif_util [None req-d449e86f-227d-4a7b-88d9-0f054f56c99b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:69:46:4d,bridge_name='br-int',has_traffic_filtering=True,id=31962c69-e86c-4431-b40a-e84cb6d9b71d,network=Network(8e927f01-795d-4fd1-bd00-bd898db487a3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap31962c69-e8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 24 09:54:18 compute-1 nova_compute[230010]: 2025-11-24 09:54:18.692 230014 DEBUG nova.objects.instance [None req-d449e86f-227d-4a7b-88d9-0f054f56c99b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lazy-loading 'pci_devices' on Instance uuid 4313a8bf-5a2a-4de5-84e7-ead18a049c18 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 24 09:54:18 compute-1 nova_compute[230010]: 2025-11-24 09:54:18.705 230014 DEBUG nova.virt.libvirt.driver [None req-d449e86f-227d-4a7b-88d9-0f054f56c99b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 4313a8bf-5a2a-4de5-84e7-ead18a049c18] End _get_guest_xml xml=<domain type="kvm">
Nov 24 09:54:18 compute-1 nova_compute[230010]:   <uuid>4313a8bf-5a2a-4de5-84e7-ead18a049c18</uuid>
Nov 24 09:54:18 compute-1 nova_compute[230010]:   <name>instance-00000002</name>
Nov 24 09:54:18 compute-1 nova_compute[230010]:   <memory>131072</memory>
Nov 24 09:54:18 compute-1 nova_compute[230010]:   <vcpu>1</vcpu>
Nov 24 09:54:18 compute-1 nova_compute[230010]:   <metadata>
Nov 24 09:54:18 compute-1 nova_compute[230010]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 24 09:54:18 compute-1 nova_compute[230010]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 24 09:54:18 compute-1 nova_compute[230010]:       <nova:name>tempest-TestNetworkBasicOps-server-904956127</nova:name>
Nov 24 09:54:18 compute-1 nova_compute[230010]:       <nova:creationTime>2025-11-24 09:54:17</nova:creationTime>
Nov 24 09:54:18 compute-1 nova_compute[230010]:       <nova:flavor name="m1.nano">
Nov 24 09:54:18 compute-1 nova_compute[230010]:         <nova:memory>128</nova:memory>
Nov 24 09:54:18 compute-1 nova_compute[230010]:         <nova:disk>1</nova:disk>
Nov 24 09:54:18 compute-1 nova_compute[230010]:         <nova:swap>0</nova:swap>
Nov 24 09:54:18 compute-1 nova_compute[230010]:         <nova:ephemeral>0</nova:ephemeral>
Nov 24 09:54:18 compute-1 nova_compute[230010]:         <nova:vcpus>1</nova:vcpus>
Nov 24 09:54:18 compute-1 nova_compute[230010]:       </nova:flavor>
Nov 24 09:54:18 compute-1 nova_compute[230010]:       <nova:owner>
Nov 24 09:54:18 compute-1 nova_compute[230010]:         <nova:user uuid="43f79ff3105e4372a3c095e8057d4f1f">tempest-TestNetworkBasicOps-1844071378-project-member</nova:user>
Nov 24 09:54:18 compute-1 nova_compute[230010]:         <nova:project uuid="94d069fc040647d5a6e54894eec915fe">tempest-TestNetworkBasicOps-1844071378</nova:project>
Nov 24 09:54:18 compute-1 nova_compute[230010]:       </nova:owner>
Nov 24 09:54:18 compute-1 nova_compute[230010]:       <nova:root type="image" uuid="6ef14bdf-4f04-4400-8040-4409d9d5271e"/>
Nov 24 09:54:18 compute-1 nova_compute[230010]:       <nova:ports>
Nov 24 09:54:18 compute-1 nova_compute[230010]:         <nova:port uuid="31962c69-e86c-4431-b40a-e84cb6d9b71d">
Nov 24 09:54:18 compute-1 nova_compute[230010]:           <nova:ip type="fixed" address="10.100.0.22" ipVersion="4"/>
Nov 24 09:54:18 compute-1 nova_compute[230010]:         </nova:port>
Nov 24 09:54:18 compute-1 nova_compute[230010]:       </nova:ports>
Nov 24 09:54:18 compute-1 nova_compute[230010]:     </nova:instance>
Nov 24 09:54:18 compute-1 nova_compute[230010]:   </metadata>
Nov 24 09:54:18 compute-1 nova_compute[230010]:   <sysinfo type="smbios">
Nov 24 09:54:18 compute-1 nova_compute[230010]:     <system>
Nov 24 09:54:18 compute-1 nova_compute[230010]:       <entry name="manufacturer">RDO</entry>
Nov 24 09:54:18 compute-1 nova_compute[230010]:       <entry name="product">OpenStack Compute</entry>
Nov 24 09:54:18 compute-1 nova_compute[230010]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 24 09:54:18 compute-1 nova_compute[230010]:       <entry name="serial">4313a8bf-5a2a-4de5-84e7-ead18a049c18</entry>
Nov 24 09:54:18 compute-1 nova_compute[230010]:       <entry name="uuid">4313a8bf-5a2a-4de5-84e7-ead18a049c18</entry>
Nov 24 09:54:18 compute-1 nova_compute[230010]:       <entry name="family">Virtual Machine</entry>
Nov 24 09:54:18 compute-1 nova_compute[230010]:     </system>
Nov 24 09:54:18 compute-1 nova_compute[230010]:   </sysinfo>
Nov 24 09:54:18 compute-1 nova_compute[230010]:   <os>
Nov 24 09:54:18 compute-1 nova_compute[230010]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 24 09:54:18 compute-1 nova_compute[230010]:     <boot dev="hd"/>
Nov 24 09:54:18 compute-1 nova_compute[230010]:     <smbios mode="sysinfo"/>
Nov 24 09:54:18 compute-1 nova_compute[230010]:   </os>
Nov 24 09:54:18 compute-1 nova_compute[230010]:   <features>
Nov 24 09:54:18 compute-1 nova_compute[230010]:     <acpi/>
Nov 24 09:54:18 compute-1 nova_compute[230010]:     <apic/>
Nov 24 09:54:18 compute-1 nova_compute[230010]:     <vmcoreinfo/>
Nov 24 09:54:18 compute-1 nova_compute[230010]:   </features>
Nov 24 09:54:18 compute-1 nova_compute[230010]:   <clock offset="utc">
Nov 24 09:54:18 compute-1 nova_compute[230010]:     <timer name="pit" tickpolicy="delay"/>
Nov 24 09:54:18 compute-1 nova_compute[230010]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 24 09:54:18 compute-1 nova_compute[230010]:     <timer name="hpet" present="no"/>
Nov 24 09:54:18 compute-1 nova_compute[230010]:   </clock>
Nov 24 09:54:18 compute-1 nova_compute[230010]:   <cpu mode="host-model" match="exact">
Nov 24 09:54:18 compute-1 nova_compute[230010]:     <topology sockets="1" cores="1" threads="1"/>
Nov 24 09:54:18 compute-1 nova_compute[230010]:   </cpu>
Nov 24 09:54:18 compute-1 nova_compute[230010]:   <devices>
Nov 24 09:54:18 compute-1 nova_compute[230010]:     <disk type="network" device="disk">
Nov 24 09:54:18 compute-1 nova_compute[230010]:       <driver type="raw" cache="none"/>
Nov 24 09:54:18 compute-1 nova_compute[230010]:       <source protocol="rbd" name="vms/4313a8bf-5a2a-4de5-84e7-ead18a049c18_disk">
Nov 24 09:54:18 compute-1 nova_compute[230010]:         <host name="192.168.122.100" port="6789"/>
Nov 24 09:54:18 compute-1 nova_compute[230010]:         <host name="192.168.122.102" port="6789"/>
Nov 24 09:54:18 compute-1 nova_compute[230010]:         <host name="192.168.122.101" port="6789"/>
Nov 24 09:54:18 compute-1 nova_compute[230010]:       </source>
Nov 24 09:54:18 compute-1 nova_compute[230010]:       <auth username="openstack">
Nov 24 09:54:18 compute-1 nova_compute[230010]:         <secret type="ceph" uuid="84a084c3-61a7-5de7-8207-1f88efa59a64"/>
Nov 24 09:54:18 compute-1 nova_compute[230010]:       </auth>
Nov 24 09:54:18 compute-1 nova_compute[230010]:       <target dev="vda" bus="virtio"/>
Nov 24 09:54:18 compute-1 nova_compute[230010]:     </disk>
Nov 24 09:54:18 compute-1 nova_compute[230010]:     <disk type="network" device="cdrom">
Nov 24 09:54:18 compute-1 nova_compute[230010]:       <driver type="raw" cache="none"/>
Nov 24 09:54:18 compute-1 nova_compute[230010]:       <source protocol="rbd" name="vms/4313a8bf-5a2a-4de5-84e7-ead18a049c18_disk.config">
Nov 24 09:54:18 compute-1 nova_compute[230010]:         <host name="192.168.122.100" port="6789"/>
Nov 24 09:54:18 compute-1 nova_compute[230010]:         <host name="192.168.122.102" port="6789"/>
Nov 24 09:54:18 compute-1 nova_compute[230010]:         <host name="192.168.122.101" port="6789"/>
Nov 24 09:54:18 compute-1 nova_compute[230010]:       </source>
Nov 24 09:54:18 compute-1 nova_compute[230010]:       <auth username="openstack">
Nov 24 09:54:18 compute-1 nova_compute[230010]:         <secret type="ceph" uuid="84a084c3-61a7-5de7-8207-1f88efa59a64"/>
Nov 24 09:54:18 compute-1 nova_compute[230010]:       </auth>
Nov 24 09:54:18 compute-1 nova_compute[230010]:       <target dev="sda" bus="sata"/>
Nov 24 09:54:18 compute-1 nova_compute[230010]:     </disk>
Nov 24 09:54:18 compute-1 nova_compute[230010]:     <interface type="ethernet">
Nov 24 09:54:18 compute-1 nova_compute[230010]:       <mac address="fa:16:3e:69:46:4d"/>
Nov 24 09:54:18 compute-1 nova_compute[230010]:       <model type="virtio"/>
Nov 24 09:54:18 compute-1 nova_compute[230010]:       <driver name="vhost" rx_queue_size="512"/>
Nov 24 09:54:18 compute-1 nova_compute[230010]:       <mtu size="1442"/>
Nov 24 09:54:18 compute-1 nova_compute[230010]:       <target dev="tap31962c69-e8"/>
Nov 24 09:54:18 compute-1 nova_compute[230010]:     </interface>
Nov 24 09:54:18 compute-1 nova_compute[230010]:     <serial type="pty">
Nov 24 09:54:18 compute-1 nova_compute[230010]:       <log file="/var/lib/nova/instances/4313a8bf-5a2a-4de5-84e7-ead18a049c18/console.log" append="off"/>
Nov 24 09:54:18 compute-1 nova_compute[230010]:     </serial>
Nov 24 09:54:18 compute-1 nova_compute[230010]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 24 09:54:18 compute-1 nova_compute[230010]:     <video>
Nov 24 09:54:18 compute-1 nova_compute[230010]:       <model type="virtio"/>
Nov 24 09:54:18 compute-1 nova_compute[230010]:     </video>
Nov 24 09:54:18 compute-1 nova_compute[230010]:     <input type="tablet" bus="usb"/>
Nov 24 09:54:18 compute-1 nova_compute[230010]:     <rng model="virtio">
Nov 24 09:54:18 compute-1 nova_compute[230010]:       <backend model="random">/dev/urandom</backend>
Nov 24 09:54:18 compute-1 nova_compute[230010]:     </rng>
Nov 24 09:54:18 compute-1 nova_compute[230010]:     <controller type="pci" model="pcie-root"/>
Nov 24 09:54:18 compute-1 nova_compute[230010]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 09:54:18 compute-1 nova_compute[230010]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 09:54:18 compute-1 nova_compute[230010]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 09:54:18 compute-1 nova_compute[230010]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 09:54:18 compute-1 nova_compute[230010]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 09:54:18 compute-1 nova_compute[230010]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 09:54:18 compute-1 nova_compute[230010]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 09:54:18 compute-1 nova_compute[230010]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 09:54:18 compute-1 nova_compute[230010]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 09:54:18 compute-1 nova_compute[230010]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 09:54:18 compute-1 nova_compute[230010]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 09:54:18 compute-1 nova_compute[230010]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 09:54:18 compute-1 nova_compute[230010]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 09:54:18 compute-1 nova_compute[230010]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 09:54:18 compute-1 nova_compute[230010]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 09:54:18 compute-1 nova_compute[230010]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 09:54:18 compute-1 nova_compute[230010]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 09:54:18 compute-1 nova_compute[230010]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 09:54:18 compute-1 nova_compute[230010]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 09:54:18 compute-1 nova_compute[230010]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 09:54:18 compute-1 nova_compute[230010]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 09:54:18 compute-1 nova_compute[230010]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 09:54:18 compute-1 nova_compute[230010]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 09:54:18 compute-1 nova_compute[230010]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 09:54:18 compute-1 nova_compute[230010]:     <controller type="usb" index="0"/>
Nov 24 09:54:18 compute-1 nova_compute[230010]:     <memballoon model="virtio">
Nov 24 09:54:18 compute-1 nova_compute[230010]:       <stats period="10"/>
Nov 24 09:54:18 compute-1 nova_compute[230010]:     </memballoon>
Nov 24 09:54:18 compute-1 nova_compute[230010]:   </devices>
Nov 24 09:54:18 compute-1 nova_compute[230010]: </domain>
Nov 24 09:54:18 compute-1 nova_compute[230010]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
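
Everything needed to audit the generated domain is in the XML dump above; for example, the RBD monitor endpoints and image names can be pulled back out with the standard library parser. A small sketch, assuming the XML was saved to a local file (the filename here is hypothetical):

    import xml.etree.ElementTree as ET

    # Hypothetical file holding the <domain> XML dumped above.
    dom = ET.parse("instance-00000002.xml").getroot()
    for disk in dom.findall("./devices/disk"):
        src = disk.find("source")
        if src is not None and src.get("protocol") == "rbd":
            hosts = [(h.get("name"), h.get("port")) for h in src.findall("host")]
            print(src.get("name"), hosts)
    # Prints the vms/..._disk and vms/..._disk.config images, each backed by
    # the three 192.168.122.x:6789 monitors from the earlier "ceph mon dump".
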
Nov 24 09:54:18 compute-1 nova_compute[230010]: 2025-11-24 09:54:18.706 230014 DEBUG nova.compute.manager [None req-d449e86f-227d-4a7b-88d9-0f054f56c99b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 4313a8bf-5a2a-4de5-84e7-ead18a049c18] Preparing to wait for external event network-vif-plugged-31962c69-e86c-4431-b40a-e84cb6d9b71d prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 24 09:54:18 compute-1 nova_compute[230010]: 2025-11-24 09:54:18.706 230014 DEBUG oslo_concurrency.lockutils [None req-d449e86f-227d-4a7b-88d9-0f054f56c99b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Acquiring lock "4313a8bf-5a2a-4de5-84e7-ead18a049c18-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 09:54:18 compute-1 nova_compute[230010]: 2025-11-24 09:54:18.707 230014 DEBUG oslo_concurrency.lockutils [None req-d449e86f-227d-4a7b-88d9-0f054f56c99b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lock "4313a8bf-5a2a-4de5-84e7-ead18a049c18-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 09:54:18 compute-1 nova_compute[230010]: 2025-11-24 09:54:18.707 230014 DEBUG oslo_concurrency.lockutils [None req-d449e86f-227d-4a7b-88d9-0f054f56c99b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lock "4313a8bf-5a2a-4de5-84e7-ead18a049c18-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 09:54:18 compute-1 nova_compute[230010]: 2025-11-24 09:54:18.707 230014 DEBUG nova.virt.libvirt.vif [None req-d449e86f-227d-4a7b-88d9-0f054f56c99b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-24T09:54:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-904956127',display_name='tempest-TestNetworkBasicOps-server-904956127',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-904956127',id=2,image_ref='6ef14bdf-4f04-4400-8040-4409d9d5271e',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBk6JkAuduqittiDGA4pBhSCzmrjSnKU2daRXm5XDAaZpUlbHNfHVDmOyWJWR78b4GrvBoMlHYEMPqcBJQA/sKOhpsOzfkRRFgAuDlkN09WkiLcyZB4s6iUYsG2XLZZzXw==',key_name='tempest-TestNetworkBasicOps-831372657',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='94d069fc040647d5a6e54894eec915fe',ramdisk_id='',reservation_id='r-gukftcea',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6ef14bdf-4f04-4400-8040-4409d9d5271e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1844071378',owner_user_name='tempest-TestNetworkBasicOps-1844071378-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-24T09:54:13Z,user_data=None,user_id='43f79ff3105e4372a3c095e8057d4f1f',uuid=4313a8bf-5a2a-4de5-84e7-ead18a049c18,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "31962c69-e86c-4431-b40a-e84cb6d9b71d", "address": "fa:16:3e:69:46:4d", "network": {"id": "8e927f01-795d-4fd1-bd00-bd898db487a3", "bridge": "br-int", "label": "tempest-network-smoke--503305566", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.22", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94d069fc040647d5a6e54894eec915fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap31962c69-e8", "ovs_interfaceid": "31962c69-e86c-4431-b40a-e84cb6d9b71d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 24 09:54:18 compute-1 nova_compute[230010]: 2025-11-24 09:54:18.708 230014 DEBUG nova.network.os_vif_util [None req-d449e86f-227d-4a7b-88d9-0f054f56c99b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Converting VIF {"id": "31962c69-e86c-4431-b40a-e84cb6d9b71d", "address": "fa:16:3e:69:46:4d", "network": {"id": "8e927f01-795d-4fd1-bd00-bd898db487a3", "bridge": "br-int", "label": "tempest-network-smoke--503305566", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.22", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94d069fc040647d5a6e54894eec915fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap31962c69-e8", "ovs_interfaceid": "31962c69-e86c-4431-b40a-e84cb6d9b71d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 24 09:54:18 compute-1 nova_compute[230010]: 2025-11-24 09:54:18.708 230014 DEBUG nova.network.os_vif_util [None req-d449e86f-227d-4a7b-88d9-0f054f56c99b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:69:46:4d,bridge_name='br-int',has_traffic_filtering=True,id=31962c69-e86c-4431-b40a-e84cb6d9b71d,network=Network(8e927f01-795d-4fd1-bd00-bd898db487a3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap31962c69-e8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 24 09:54:18 compute-1 nova_compute[230010]: 2025-11-24 09:54:18.709 230014 DEBUG os_vif [None req-d449e86f-227d-4a7b-88d9-0f054f56c99b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:69:46:4d,bridge_name='br-int',has_traffic_filtering=True,id=31962c69-e86c-4431-b40a-e84cb6d9b71d,network=Network(8e927f01-795d-4fd1-bd00-bd898db487a3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap31962c69-e8') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 24 09:54:18 compute-1 nova_compute[230010]: 2025-11-24 09:54:18.745 230014 DEBUG ovsdbapp.backend.ovs_idl [None req-d449e86f-227d-4a7b-88d9-0f054f56c99b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Created schema index Interface.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Nov 24 09:54:18 compute-1 nova_compute[230010]: 2025-11-24 09:54:18.745 230014 DEBUG ovsdbapp.backend.ovs_idl [None req-d449e86f-227d-4a7b-88d9-0f054f56c99b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Created schema index Port.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Nov 24 09:54:18 compute-1 nova_compute[230010]: 2025-11-24 09:54:18.746 230014 DEBUG ovsdbapp.backend.ovs_idl [None req-d449e86f-227d-4a7b-88d9-0f054f56c99b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Created schema index Bridge.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Nov 24 09:54:18 compute-1 nova_compute[230010]: 2025-11-24 09:54:18.746 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-d449e86f-227d-4a7b-88d9-0f054f56c99b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] tcp:127.0.0.1:6640: entering CONNECTING _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Nov 24 09:54:18 compute-1 nova_compute[230010]: 2025-11-24 09:54:18.747 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-d449e86f-227d-4a7b-88d9-0f054f56c99b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [POLLOUT] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:54:18 compute-1 nova_compute[230010]: 2025-11-24 09:54:18.747 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-d449e86f-227d-4a7b-88d9-0f054f56c99b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Nov 24 09:54:18 compute-1 nova_compute[230010]: 2025-11-24 09:54:18.747 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-d449e86f-227d-4a7b-88d9-0f054f56c99b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:54:18 compute-1 nova_compute[230010]: 2025-11-24 09:54:18.750 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-d449e86f-227d-4a7b-88d9-0f054f56c99b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:54:18 compute-1 nova_compute[230010]: 2025-11-24 09:54:18.752 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-d449e86f-227d-4a7b-88d9-0f054f56c99b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:54:18 compute-1 nova_compute[230010]: 2025-11-24 09:54:18.763 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:54:18 compute-1 nova_compute[230010]: 2025-11-24 09:54:18.763 230014 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 24 09:54:18 compute-1 nova_compute[230010]: 2025-11-24 09:54:18.763 230014 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 24 09:54:18 compute-1 nova_compute[230010]: 2025-11-24 09:54:18.764 230014 INFO oslo.privsep.daemon [None req-d449e86f-227d-4a7b-88d9-0f054f56c99b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/nova/nova.conf', '--config-file', '/etc/nova/nova-compute.conf', '--config-dir', '/etc/nova/nova.conf.d', '--privsep_context', 'vif_plug_ovs.privsep.vif_plug', '--privsep_sock_path', '/tmp/tmplqc3st4j/privsep.sock']
Nov 24 09:54:18 compute-1 nova_compute[230010]: 2025-11-24 09:54:18.808 230014 DEBUG nova.network.neutron [req-b779d75a-6f69-4bca-b923-6ca261b611d4 req-12557be7-8c18-407f-9a5f-78e530f41f52 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 4313a8bf-5a2a-4de5-84e7-ead18a049c18] Updated VIF entry in instance network info cache for port 31962c69-e86c-4431-b40a-e84cb6d9b71d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 24 09:54:18 compute-1 nova_compute[230010]: 2025-11-24 09:54:18.809 230014 DEBUG nova.network.neutron [req-b779d75a-6f69-4bca-b923-6ca261b611d4 req-12557be7-8c18-407f-9a5f-78e530f41f52 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 4313a8bf-5a2a-4de5-84e7-ead18a049c18] Updating instance_info_cache with network_info: [{"id": "31962c69-e86c-4431-b40a-e84cb6d9b71d", "address": "fa:16:3e:69:46:4d", "network": {"id": "8e927f01-795d-4fd1-bd00-bd898db487a3", "bridge": "br-int", "label": "tempest-network-smoke--503305566", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.22", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94d069fc040647d5a6e54894eec915fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap31962c69-e8", "ovs_interfaceid": "31962c69-e86c-4431-b40a-e84cb6d9b71d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 24 09:54:18 compute-1 nova_compute[230010]: 2025-11-24 09:54:18.822 230014 DEBUG oslo_concurrency.lockutils [req-b779d75a-6f69-4bca-b923-6ca261b611d4 req-12557be7-8c18-407f-9a5f-78e530f41f52 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Releasing lock "refresh_cache-4313a8bf-5a2a-4de5-84e7-ead18a049c18" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 24 09:54:19 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:54:19 compute-1 nova_compute[230010]: 2025-11-24 09:54:19.437 230014 INFO oslo.privsep.daemon [None req-d449e86f-227d-4a7b-88d9-0f054f56c99b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Spawned new privsep daemon via rootwrap
Nov 24 09:54:19 compute-1 nova_compute[230010]: 2025-11-24 09:54:19.316 234647 INFO oslo.privsep.daemon [-] privsep daemon starting
Nov 24 09:54:19 compute-1 nova_compute[230010]: 2025-11-24 09:54:19.323 234647 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Nov 24 09:54:19 compute-1 nova_compute[230010]: 2025-11-24 09:54:19.327 234647 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_DAC_OVERRIDE|CAP_NET_ADMIN/CAP_DAC_OVERRIDE|CAP_NET_ADMIN/none
Nov 24 09:54:19 compute-1 nova_compute[230010]: 2025-11-24 09:54:19.327 234647 INFO oslo.privsep.daemon [-] privsep daemon running as pid 234647
Nov 24 09:54:19 compute-1 ceph-mon[80009]: from='client.? 192.168.122.101:0/3807305221' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 24 09:54:19 compute-1 nova_compute[230010]: 2025-11-24 09:54:19.768 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:54:19 compute-1 nova_compute[230010]: 2025-11-24 09:54:19.769 230014 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap31962c69-e8, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 24 09:54:19 compute-1 nova_compute[230010]: 2025-11-24 09:54:19.770 230014 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap31962c69-e8, col_values=(('external_ids', {'iface-id': '31962c69-e86c-4431-b40a-e84cb6d9b71d', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:69:46:4d', 'vm-uuid': '4313a8bf-5a2a-4de5-84e7-ead18a049c18'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
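
The two ovsdbapp commands in this transaction are the programmatic equivalent of an ovs-vsctl add-port plus an external_ids update; the iface-id value is what lets ovn-controller match the OVS interface to the Neutron port it claims a few lines below. A sketch of the equivalent CLI transaction (illustrative only; os-vif itself talks to ovsdb-server directly, as logged):

    import subprocess

    PORT = "tap31962c69-e8"
    # One atomic ovs-vsctl transaction mirroring AddPortCommand + DbSetCommand.
    subprocess.check_call([
        "ovs-vsctl", "--may-exist", "add-port", "br-int", PORT,
        "--", "set", "Interface", PORT,
        "external_ids:iface-id=31962c69-e86c-4431-b40a-e84cb6d9b71d",
        "external_ids:iface-status=active",
        "external_ids:attached-mac=fa:16:3e:69:46:4d",
        "external_ids:vm-uuid=4313a8bf-5a2a-4de5-84e7-ead18a049c18",
    ])
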
Nov 24 09:54:19 compute-1 NetworkManager[48870]: <info>  [1763978059.7732] manager: (tap31962c69-e8): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/23)
Nov 24 09:54:19 compute-1 nova_compute[230010]: 2025-11-24 09:54:19.773 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:54:19 compute-1 nova_compute[230010]: 2025-11-24 09:54:19.778 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 24 09:54:19 compute-1 nova_compute[230010]: 2025-11-24 09:54:19.779 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:54:19 compute-1 nova_compute[230010]: 2025-11-24 09:54:19.780 230014 INFO os_vif [None req-d449e86f-227d-4a7b-88d9-0f054f56c99b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:69:46:4d,bridge_name='br-int',has_traffic_filtering=True,id=31962c69-e86c-4431-b40a-e84cb6d9b71d,network=Network(8e927f01-795d-4fd1-bd00-bd898db487a3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap31962c69-e8')
Nov 24 09:54:19 compute-1 nova_compute[230010]: 2025-11-24 09:54:19.815 230014 DEBUG nova.virt.libvirt.driver [None req-d449e86f-227d-4a7b-88d9-0f054f56c99b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 24 09:54:19 compute-1 nova_compute[230010]: 2025-11-24 09:54:19.816 230014 DEBUG nova.virt.libvirt.driver [None req-d449e86f-227d-4a7b-88d9-0f054f56c99b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 24 09:54:19 compute-1 nova_compute[230010]: 2025-11-24 09:54:19.816 230014 DEBUG nova.virt.libvirt.driver [None req-d449e86f-227d-4a7b-88d9-0f054f56c99b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] No VIF found with MAC fa:16:3e:69:46:4d, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 24 09:54:19 compute-1 nova_compute[230010]: 2025-11-24 09:54:19.816 230014 INFO nova.virt.libvirt.driver [None req-d449e86f-227d-4a7b-88d9-0f054f56c99b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 4313a8bf-5a2a-4de5-84e7-ead18a049c18] Using config drive
Nov 24 09:54:19 compute-1 nova_compute[230010]: 2025-11-24 09:54:19.840 230014 DEBUG nova.storage.rbd_utils [None req-d449e86f-227d-4a7b-88d9-0f054f56c99b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] rbd image 4313a8bf-5a2a-4de5-84e7-ead18a049c18_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 24 09:54:19 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:54:19 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:54:19 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:54:19.921 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:54:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:54:20.054 142336 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 09:54:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:54:20.055 142336 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 09:54:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:54:20.055 142336 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 09:54:20 compute-1 nova_compute[230010]: 2025-11-24 09:54:20.395 230014 INFO nova.virt.libvirt.driver [None req-d449e86f-227d-4a7b-88d9-0f054f56c99b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 4313a8bf-5a2a-4de5-84e7-ead18a049c18] Creating config drive at /var/lib/nova/instances/4313a8bf-5a2a-4de5-84e7-ead18a049c18/disk.config
Nov 24 09:54:20 compute-1 nova_compute[230010]: 2025-11-24 09:54:20.405 230014 DEBUG oslo_concurrency.processutils [None req-d449e86f-227d-4a7b-88d9-0f054f56c99b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/4313a8bf-5a2a-4de5-84e7-ead18a049c18/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpv2m5d5ro execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 09:54:20 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:54:20 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:54:20 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:54:20.513 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:54:20 compute-1 nova_compute[230010]: 2025-11-24 09:54:20.559 230014 DEBUG oslo_concurrency.processutils [None req-d449e86f-227d-4a7b-88d9-0f054f56c99b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/4313a8bf-5a2a-4de5-84e7-ead18a049c18/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpv2m5d5ro" returned: 0 in 0.155s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
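
Note that the publisher string only looks unquoted in the CMD line above because oslo renders the argv list joined with spaces; it is passed to mkisofs as a single argument. Reproduced as a standalone call (the staging directory name below is a hypothetical stand-in for the /tmp/tmpv2m5d5ro tmpdir Nova used):

    import subprocess

    # Build the config-drive ISO exactly as logged above.
    subprocess.check_call([
        "/usr/bin/mkisofs", "-o", "disk.config",
        "-ldots", "-allow-lowercase", "-allow-multidot", "-l",
        "-publisher", "OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9",
        "-quiet", "-J", "-r", "-V", "config-2",
        "metadata_dir",  # stand-in for the temporary metadata staging dir
    ])
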
Nov 24 09:54:20 compute-1 nova_compute[230010]: 2025-11-24 09:54:20.596 230014 DEBUG nova.storage.rbd_utils [None req-d449e86f-227d-4a7b-88d9-0f054f56c99b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] rbd image 4313a8bf-5a2a-4de5-84e7-ead18a049c18_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 24 09:54:20 compute-1 nova_compute[230010]: 2025-11-24 09:54:20.601 230014 DEBUG oslo_concurrency.processutils [None req-d449e86f-227d-4a7b-88d9-0f054f56c99b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/4313a8bf-5a2a-4de5-84e7-ead18a049c18/disk.config 4313a8bf-5a2a-4de5-84e7-ead18a049c18_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 09:54:20 compute-1 ceph-mon[80009]: pgmap v823: 353 pgs: 353 active+clean; 167 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 34 op/s
Nov 24 09:54:20 compute-1 nova_compute[230010]: 2025-11-24 09:54:20.637 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:54:20 compute-1 nova_compute[230010]: 2025-11-24 09:54:20.761 230014 DEBUG oslo_concurrency.processutils [None req-d449e86f-227d-4a7b-88d9-0f054f56c99b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/4313a8bf-5a2a-4de5-84e7-ead18a049c18/disk.config 4313a8bf-5a2a-4de5-84e7-ead18a049c18_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.160s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 09:54:20 compute-1 nova_compute[230010]: 2025-11-24 09:54:20.763 230014 INFO nova.virt.libvirt.driver [None req-d449e86f-227d-4a7b-88d9-0f054f56c99b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 4313a8bf-5a2a-4de5-84e7-ead18a049c18] Deleting local config drive /var/lib/nova/instances/4313a8bf-5a2a-4de5-84e7-ead18a049c18/disk.config because it was imported into RBD.
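
Keeping the config drive in the vms pool rather than on local disk means every disk of the instance lives in Ceph, so a live migration has nothing to copy out of /var/lib/nova/instances. The import step is runnable by hand with the same credentials (paths shortened; illustrative):

    import subprocess

    # Same "rbd import" the log shows returning 0 above.
    subprocess.check_call([
        "rbd", "import", "--pool", "vms", "disk.config",
        "4313a8bf-5a2a-4de5-84e7-ead18a049c18_disk.config",
        "--image-format=2", "--id", "openstack",
        "--conf", "/etc/ceph/ceph.conf",
    ])
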
Nov 24 09:54:20 compute-1 systemd[1]: Starting libvirt secret daemon...
Nov 24 09:54:20 compute-1 systemd[1]: Started libvirt secret daemon.
Nov 24 09:54:20 compute-1 kernel: tun: Universal TUN/TAP device driver, 1.6
Nov 24 09:54:20 compute-1 kernel: tap31962c69-e8: entered promiscuous mode
Nov 24 09:54:20 compute-1 NetworkManager[48870]: <info>  [1763978060.8765] manager: (tap31962c69-e8): new Tun device (/org/freedesktop/NetworkManager/Devices/24)
Nov 24 09:54:20 compute-1 ovn_controller[132966]: 2025-11-24T09:54:20Z|00027|binding|INFO|Claiming lport 31962c69-e86c-4431-b40a-e84cb6d9b71d for this chassis.
Nov 24 09:54:20 compute-1 ovn_controller[132966]: 2025-11-24T09:54:20Z|00028|binding|INFO|31962c69-e86c-4431-b40a-e84cb6d9b71d: Claiming fa:16:3e:69:46:4d 10.100.0.22
Nov 24 09:54:20 compute-1 systemd-udevd[234744]: Network interface NamePolicy= disabled on kernel command line.
Nov 24 09:54:20 compute-1 nova_compute[230010]: 2025-11-24 09:54:20.921 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:54:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:54:20.932 142336 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:69:46:4d 10.100.0.22'], port_security=['fa:16:3e:69:46:4d 10.100.0.22'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.22/28', 'neutron:device_id': '4313a8bf-5a2a-4de5-84e7-ead18a049c18', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8e927f01-795d-4fd1-bd00-bd898db487a3', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '94d069fc040647d5a6e54894eec915fe', 'neutron:revision_number': '2', 'neutron:security_group_ids': '841654bd-af9d-487b-9d46-e948edd0e4cf', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=12eb72db-6a1a-4bb9-9912-1e510973ae62, chassis=[<ovs.db.idl.Row object at 0x7f5c78678ac0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5c78678ac0>], logical_port=31962c69-e86c-4431-b40a-e84cb6d9b71d) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 24 09:54:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:54:20.933 142336 INFO neutron.agent.ovn.metadata.agent [-] Port 31962c69-e86c-4431-b40a-e84cb6d9b71d in datapath 8e927f01-795d-4fd1-bd00-bd898db487a3 bound to our chassis
Nov 24 09:54:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:54:20.935 142336 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 8e927f01-795d-4fd1-bd00-bd898db487a3
Nov 24 09:54:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:54:20.936 142336 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.default', '--privsep_sock_path', '/tmp/tmp_ggsxi2p/privsep.sock']
Nov 24 09:54:20 compute-1 NetworkManager[48870]: <info>  [1763978060.9410] device (tap31962c69-e8): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 24 09:54:20 compute-1 NetworkManager[48870]: <info>  [1763978060.9422] device (tap31962c69-e8): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 24 09:54:20 compute-1 systemd-machined[193537]: New machine qemu-1-instance-00000002.
Nov 24 09:54:21 compute-1 nova_compute[230010]: 2025-11-24 09:54:20.997 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:54:21 compute-1 ovn_controller[132966]: 2025-11-24T09:54:21Z|00029|binding|INFO|Setting lport 31962c69-e86c-4431-b40a-e84cb6d9b71d ovn-installed in OVS
Nov 24 09:54:21 compute-1 ovn_controller[132966]: 2025-11-24T09:54:21Z|00030|binding|INFO|Setting lport 31962c69-e86c-4431-b40a-e84cb6d9b71d up in Southbound
Nov 24 09:54:21 compute-1 nova_compute[230010]: 2025-11-24 09:54:21.004 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:54:21 compute-1 systemd[1]: Started Virtual Machine qemu-1-instance-00000002.
Nov 24 09:54:21 compute-1 nova_compute[230010]: 2025-11-24 09:54:21.341 230014 DEBUG nova.compute.manager [req-43f4ae09-12ec-4d99-aead-7815a76b3a18 req-e89f55c4-febb-46e2-8589-bb96bd760c8c 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 4313a8bf-5a2a-4de5-84e7-ead18a049c18] Received event network-vif-plugged-31962c69-e86c-4431-b40a-e84cb6d9b71d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 24 09:54:21 compute-1 nova_compute[230010]: 2025-11-24 09:54:21.342 230014 DEBUG oslo_concurrency.lockutils [req-43f4ae09-12ec-4d99-aead-7815a76b3a18 req-e89f55c4-febb-46e2-8589-bb96bd760c8c 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Acquiring lock "4313a8bf-5a2a-4de5-84e7-ead18a049c18-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 09:54:21 compute-1 nova_compute[230010]: 2025-11-24 09:54:21.342 230014 DEBUG oslo_concurrency.lockutils [req-43f4ae09-12ec-4d99-aead-7815a76b3a18 req-e89f55c4-febb-46e2-8589-bb96bd760c8c 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Lock "4313a8bf-5a2a-4de5-84e7-ead18a049c18-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 09:54:21 compute-1 nova_compute[230010]: 2025-11-24 09:54:21.342 230014 DEBUG oslo_concurrency.lockutils [req-43f4ae09-12ec-4d99-aead-7815a76b3a18 req-e89f55c4-febb-46e2-8589-bb96bd760c8c 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Lock "4313a8bf-5a2a-4de5-84e7-ead18a049c18-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 09:54:21 compute-1 nova_compute[230010]: 2025-11-24 09:54:21.343 230014 DEBUG nova.compute.manager [req-43f4ae09-12ec-4d99-aead-7815a76b3a18 req-e89f55c4-febb-46e2-8589-bb96bd760c8c 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 4313a8bf-5a2a-4de5-84e7-ead18a049c18] Processing event network-vif-plugged-31962c69-e86c-4431-b40a-e84cb6d9b71d _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 24 09:54:21 compute-1 nova_compute[230010]: 2025-11-24 09:54:21.395 230014 DEBUG nova.compute.manager [None req-d449e86f-227d-4a7b-88d9-0f054f56c99b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 4313a8bf-5a2a-4de5-84e7-ead18a049c18] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 24 09:54:21 compute-1 nova_compute[230010]: 2025-11-24 09:54:21.397 230014 DEBUG nova.virt.driver [None req-7781f489-90a3-45f2-98f2-bf7ccb5c1239 - - - - - -] Emitting event <LifecycleEvent: 1763978061.3949986, 4313a8bf-5a2a-4de5-84e7-ead18a049c18 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 24 09:54:21 compute-1 nova_compute[230010]: 2025-11-24 09:54:21.397 230014 INFO nova.compute.manager [None req-7781f489-90a3-45f2-98f2-bf7ccb5c1239 - - - - - -] [instance: 4313a8bf-5a2a-4de5-84e7-ead18a049c18] VM Started (Lifecycle Event)
Nov 24 09:54:21 compute-1 nova_compute[230010]: 2025-11-24 09:54:21.409 230014 DEBUG nova.virt.libvirt.driver [None req-d449e86f-227d-4a7b-88d9-0f054f56c99b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 4313a8bf-5a2a-4de5-84e7-ead18a049c18] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 24 09:54:21 compute-1 nova_compute[230010]: 2025-11-24 09:54:21.414 230014 DEBUG nova.compute.manager [None req-7781f489-90a3-45f2-98f2-bf7ccb5c1239 - - - - - -] [instance: 4313a8bf-5a2a-4de5-84e7-ead18a049c18] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 24 09:54:21 compute-1 nova_compute[230010]: 2025-11-24 09:54:21.418 230014 INFO nova.virt.libvirt.driver [-] [instance: 4313a8bf-5a2a-4de5-84e7-ead18a049c18] Instance spawned successfully.
Nov 24 09:54:21 compute-1 nova_compute[230010]: 2025-11-24 09:54:21.419 230014 DEBUG nova.virt.libvirt.driver [None req-d449e86f-227d-4a7b-88d9-0f054f56c99b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 4313a8bf-5a2a-4de5-84e7-ead18a049c18] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 24 09:54:21 compute-1 nova_compute[230010]: 2025-11-24 09:54:21.422 230014 DEBUG nova.compute.manager [None req-7781f489-90a3-45f2-98f2-bf7ccb5c1239 - - - - - -] [instance: 4313a8bf-5a2a-4de5-84e7-ead18a049c18] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 24 09:54:21 compute-1 nova_compute[230010]: 2025-11-24 09:54:21.440 230014 INFO nova.compute.manager [None req-7781f489-90a3-45f2-98f2-bf7ccb5c1239 - - - - - -] [instance: 4313a8bf-5a2a-4de5-84e7-ead18a049c18] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 24 09:54:21 compute-1 nova_compute[230010]: 2025-11-24 09:54:21.441 230014 DEBUG nova.virt.driver [None req-7781f489-90a3-45f2-98f2-bf7ccb5c1239 - - - - - -] Emitting event <LifecycleEvent: 1763978061.3962986, 4313a8bf-5a2a-4de5-84e7-ead18a049c18 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 24 09:54:21 compute-1 nova_compute[230010]: 2025-11-24 09:54:21.441 230014 INFO nova.compute.manager [None req-7781f489-90a3-45f2-98f2-bf7ccb5c1239 - - - - - -] [instance: 4313a8bf-5a2a-4de5-84e7-ead18a049c18] VM Paused (Lifecycle Event)
Nov 24 09:54:21 compute-1 nova_compute[230010]: 2025-11-24 09:54:21.446 230014 DEBUG nova.virt.libvirt.driver [None req-d449e86f-227d-4a7b-88d9-0f054f56c99b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 4313a8bf-5a2a-4de5-84e7-ead18a049c18] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 24 09:54:21 compute-1 nova_compute[230010]: 2025-11-24 09:54:21.446 230014 DEBUG nova.virt.libvirt.driver [None req-d449e86f-227d-4a7b-88d9-0f054f56c99b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 4313a8bf-5a2a-4de5-84e7-ead18a049c18] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 24 09:54:21 compute-1 nova_compute[230010]: 2025-11-24 09:54:21.446 230014 DEBUG nova.virt.libvirt.driver [None req-d449e86f-227d-4a7b-88d9-0f054f56c99b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 4313a8bf-5a2a-4de5-84e7-ead18a049c18] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 24 09:54:21 compute-1 nova_compute[230010]: 2025-11-24 09:54:21.447 230014 DEBUG nova.virt.libvirt.driver [None req-d449e86f-227d-4a7b-88d9-0f054f56c99b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 4313a8bf-5a2a-4de5-84e7-ead18a049c18] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 24 09:54:21 compute-1 nova_compute[230010]: 2025-11-24 09:54:21.447 230014 DEBUG nova.virt.libvirt.driver [None req-d449e86f-227d-4a7b-88d9-0f054f56c99b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 4313a8bf-5a2a-4de5-84e7-ead18a049c18] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 24 09:54:21 compute-1 nova_compute[230010]: 2025-11-24 09:54:21.447 230014 DEBUG nova.virt.libvirt.driver [None req-d449e86f-227d-4a7b-88d9-0f054f56c99b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 4313a8bf-5a2a-4de5-84e7-ead18a049c18] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 24 09:54:21 compute-1 nova_compute[230010]: 2025-11-24 09:54:21.468 230014 DEBUG nova.compute.manager [None req-7781f489-90a3-45f2-98f2-bf7ccb5c1239 - - - - - -] [instance: 4313a8bf-5a2a-4de5-84e7-ead18a049c18] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 24 09:54:21 compute-1 nova_compute[230010]: 2025-11-24 09:54:21.472 230014 DEBUG nova.virt.driver [None req-7781f489-90a3-45f2-98f2-bf7ccb5c1239 - - - - - -] Emitting event <LifecycleEvent: 1763978061.399304, 4313a8bf-5a2a-4de5-84e7-ead18a049c18 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 24 09:54:21 compute-1 nova_compute[230010]: 2025-11-24 09:54:21.473 230014 INFO nova.compute.manager [None req-7781f489-90a3-45f2-98f2-bf7ccb5c1239 - - - - - -] [instance: 4313a8bf-5a2a-4de5-84e7-ead18a049c18] VM Resumed (Lifecycle Event)
Nov 24 09:54:21 compute-1 nova_compute[230010]: 2025-11-24 09:54:21.505 230014 DEBUG nova.compute.manager [None req-7781f489-90a3-45f2-98f2-bf7ccb5c1239 - - - - - -] [instance: 4313a8bf-5a2a-4de5-84e7-ead18a049c18] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 24 09:54:21 compute-1 nova_compute[230010]: 2025-11-24 09:54:21.509 230014 DEBUG nova.compute.manager [None req-7781f489-90a3-45f2-98f2-bf7ccb5c1239 - - - - - -] [instance: 4313a8bf-5a2a-4de5-84e7-ead18a049c18] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 24 09:54:21 compute-1 nova_compute[230010]: 2025-11-24 09:54:21.514 230014 INFO nova.compute.manager [None req-d449e86f-227d-4a7b-88d9-0f054f56c99b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 4313a8bf-5a2a-4de5-84e7-ead18a049c18] Took 7.82 seconds to spawn the instance on the hypervisor.
Nov 24 09:54:21 compute-1 nova_compute[230010]: 2025-11-24 09:54:21.515 230014 DEBUG nova.compute.manager [None req-d449e86f-227d-4a7b-88d9-0f054f56c99b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 4313a8bf-5a2a-4de5-84e7-ead18a049c18] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 24 09:54:21 compute-1 nova_compute[230010]: 2025-11-24 09:54:21.541 230014 INFO nova.compute.manager [None req-7781f489-90a3-45f2-98f2-bf7ccb5c1239 - - - - - -] [instance: 4313a8bf-5a2a-4de5-84e7-ead18a049c18] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 24 09:54:21 compute-1 nova_compute[230010]: 2025-11-24 09:54:21.578 230014 INFO nova.compute.manager [None req-d449e86f-227d-4a7b-88d9-0f054f56c99b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 4313a8bf-5a2a-4de5-84e7-ead18a049c18] Took 8.72 seconds to build instance.
Nov 24 09:54:21 compute-1 nova_compute[230010]: 2025-11-24 09:54:21.591 230014 DEBUG oslo_concurrency.lockutils [None req-d449e86f-227d-4a7b-88d9-0f054f56c99b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lock "4313a8bf-5a2a-4de5-84e7-ead18a049c18" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.790s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 09:54:21 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:54:21.648 142336 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Nov 24 09:54:21 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:54:21.649 142336 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmp_ggsxi2p/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Nov 24 09:54:21 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:54:21.480 234803 INFO oslo.privsep.daemon [-] privsep daemon starting
Nov 24 09:54:21 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:54:21.485 234803 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Nov 24 09:54:21 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:54:21.487 234803 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/none
Nov 24 09:54:21 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:54:21.487 234803 INFO oslo.privsep.daemon [-] privsep daemon running as pid 234803
Nov 24 09:54:21 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:54:21.652 234803 DEBUG oslo.privsep.daemon [-] privsep: reply[9e8b998d-7051-4c15-b96e-5369a3e55995]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 09:54:21 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:54:21 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:54:21 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:54:21.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:54:22 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:54:22.494 234803 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 09:54:22 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:54:22.495 234803 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 09:54:22 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:54:22.495 234803 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 09:54:22 compute-1 ceph-mon[80009]: pgmap v824: 353 pgs: 353 active+clean; 167 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 45 op/s
Nov 24 09:54:22 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:54:22 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:54:22 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:54:22.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:54:23 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:54:23.383 234803 DEBUG oslo.privsep.daemon [-] privsep: reply[84832ca2-1cd6-4918-a57c-fc019f314f36]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 09:54:23 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:54:23.384 142336 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap8e927f01-71 in ovnmeta-8e927f01-795d-4fd1-bd00-bd898db487a3 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 24 09:54:23 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:54:23.386 234803 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap8e927f01-70 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 24 09:54:23 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:54:23.387 234803 DEBUG oslo.privsep.daemon [-] privsep: reply[e8790529-9e2a-469b-9932-2b8676c57d67]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 09:54:23 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:54:23.392 234803 DEBUG oslo.privsep.daemon [-] privsep: reply[3d80801f-b9b9-498b-80b8-371608f85b38]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 09:54:23 compute-1 nova_compute[230010]: 2025-11-24 09:54:23.414 230014 DEBUG nova.compute.manager [req-890fe122-4944-4f89-8a9b-9c557ba308d1 req-00ca6d6e-c7b2-459f-869d-56cd636f6465 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 4313a8bf-5a2a-4de5-84e7-ead18a049c18] Received event network-vif-plugged-31962c69-e86c-4431-b40a-e84cb6d9b71d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 24 09:54:23 compute-1 nova_compute[230010]: 2025-11-24 09:54:23.414 230014 DEBUG oslo_concurrency.lockutils [req-890fe122-4944-4f89-8a9b-9c557ba308d1 req-00ca6d6e-c7b2-459f-869d-56cd636f6465 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Acquiring lock "4313a8bf-5a2a-4de5-84e7-ead18a049c18-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 09:54:23 compute-1 nova_compute[230010]: 2025-11-24 09:54:23.414 230014 DEBUG oslo_concurrency.lockutils [req-890fe122-4944-4f89-8a9b-9c557ba308d1 req-00ca6d6e-c7b2-459f-869d-56cd636f6465 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Lock "4313a8bf-5a2a-4de5-84e7-ead18a049c18-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 09:54:23 compute-1 nova_compute[230010]: 2025-11-24 09:54:23.414 230014 DEBUG oslo_concurrency.lockutils [req-890fe122-4944-4f89-8a9b-9c557ba308d1 req-00ca6d6e-c7b2-459f-869d-56cd636f6465 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Lock "4313a8bf-5a2a-4de5-84e7-ead18a049c18-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 09:54:23 compute-1 nova_compute[230010]: 2025-11-24 09:54:23.415 230014 DEBUG nova.compute.manager [req-890fe122-4944-4f89-8a9b-9c557ba308d1 req-00ca6d6e-c7b2-459f-869d-56cd636f6465 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 4313a8bf-5a2a-4de5-84e7-ead18a049c18] No waiting events found dispatching network-vif-plugged-31962c69-e86c-4431-b40a-e84cb6d9b71d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 24 09:54:23 compute-1 nova_compute[230010]: 2025-11-24 09:54:23.415 230014 WARNING nova.compute.manager [req-890fe122-4944-4f89-8a9b-9c557ba308d1 req-00ca6d6e-c7b2-459f-869d-56cd636f6465 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 4313a8bf-5a2a-4de5-84e7-ead18a049c18] Received unexpected event network-vif-plugged-31962c69-e86c-4431-b40a-e84cb6d9b71d for instance with vm_state active and task_state None.
Nov 24 09:54:23 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:54:23.415 142476 DEBUG oslo.privsep.daemon [-] privsep: reply[bae98a46-4829-43d1-9939-c333f24a8b37]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 09:54:23 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:54:23.442 234803 DEBUG oslo.privsep.daemon [-] privsep: reply[fc81fa67-0d2d-45ab-b2ce-6188a26684e1]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 09:54:23 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:54:23.445 142336 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.link_cmd', '--privsep_sock_path', '/tmp/tmp6cb65j3w/privsep.sock']
Nov 24 09:54:23 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:54:23 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:54:23 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:54:23.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:54:24 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:54:24.125 142336 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Nov 24 09:54:24 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:54:24.126 142336 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmp6cb65j3w/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Nov 24 09:54:24 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:54:24.002 234819 INFO oslo.privsep.daemon [-] privsep daemon starting
Nov 24 09:54:24 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:54:24.007 234819 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Nov 24 09:54:24 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:54:24.008 234819 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_NET_ADMIN|CAP_SYS_ADMIN/none
Nov 24 09:54:24 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:54:24.009 234819 INFO oslo.privsep.daemon [-] privsep daemon running as pid 234819
Nov 24 09:54:24 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:54:24.128 234819 DEBUG oslo.privsep.daemon [-] privsep: reply[7255dba9-e7b5-4265-93c6-bb7d3774be5d]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 09:54:24 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:54:24 compute-1 ceph-mon[80009]: pgmap v825: 353 pgs: 353 active+clean; 167 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 44 op/s
Nov 24 09:54:24 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:54:24 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:54:24 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:54:24.518 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:54:24 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:54:24.633 234819 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 09:54:24 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:54:24.633 234819 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 09:54:24 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:54:24.633 234819 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 09:54:24 compute-1 nova_compute[230010]: 2025-11-24 09:54:24.772 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:54:25 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:54:25.193 234819 DEBUG oslo.privsep.daemon [-] privsep: reply[6ec58f1e-24bc-4b76-a32c-37937cbe4c33]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 09:54:25 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:54:25.208 234803 DEBUG oslo.privsep.daemon [-] privsep: reply[20697b15-2abf-496a-b9f2-a1400e454a2b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 09:54:25 compute-1 NetworkManager[48870]: <info>  [1763978065.2109] manager: (tap8e927f01-70): new Veth device (/org/freedesktop/NetworkManager/Devices/25)
Nov 24 09:54:25 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:54:25.235 234819 DEBUG oslo.privsep.daemon [-] privsep: reply[ab014e6f-f54a-428a-bd05-922b43b34707]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 09:54:25 compute-1 systemd-udevd[234831]: Network interface NamePolicy= disabled on kernel command line.
Nov 24 09:54:25 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:54:25.239 234819 DEBUG oslo.privsep.daemon [-] privsep: reply[66469ccf-6a12-423a-b9fc-4e1b7d7e919f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 09:54:25 compute-1 NetworkManager[48870]: <info>  [1763978065.2606] device (tap8e927f01-70): carrier: link connected
Nov 24 09:54:25 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:54:25.265 234819 DEBUG oslo.privsep.daemon [-] privsep: reply[e7feb07f-8fb2-4ffe-8de2-b0b3cdc575f2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 09:54:25 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:54:25.281 234803 DEBUG oslo.privsep.daemon [-] privsep: reply[661a84b4-ad31-4db5-9b93-7ae2997220ac]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8e927f01-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:5a:18:04'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 13], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 400881, 'reachable_time': 36130, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 234849, 'error': None, 'target': 'ovnmeta-8e927f01-795d-4fd1-bd00-bd898db487a3', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 09:54:25 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:54:25.296 234803 DEBUG oslo.privsep.daemon [-] privsep: reply[fd1243a4-a9cd-44ad-b0b9-9aef4b3c6094]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe5a:1804'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 400881, 'tstamp': 400881}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 234850, 'error': None, 'target': 'ovnmeta-8e927f01-795d-4fd1-bd00-bd898db487a3', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 09:54:25 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:54:25.310 234803 DEBUG oslo.privsep.daemon [-] privsep: reply[6b42a785-c8c3-4abe-8cc0-a1eef06d847d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8e927f01-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:5a:18:04'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 13], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 400881, 'reachable_time': 36130, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 234851, 'error': None, 'target': 'ovnmeta-8e927f01-795d-4fd1-bd00-bd898db487a3', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 09:54:25 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:54:25.333 234803 DEBUG oslo.privsep.daemon [-] privsep: reply[e0c3d24b-7fc9-4792-a3bf-4906bfbab217]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 09:54:25 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:54:25.382 234803 DEBUG oslo.privsep.daemon [-] privsep: reply[697cca9e-e99d-4173-bea8-a7dc291b4816]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 09:54:25 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:54:25.384 142336 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8e927f01-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 24 09:54:25 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:54:25.384 142336 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 24 09:54:25 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:54:25.385 142336 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8e927f01-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 24 09:54:25 compute-1 nova_compute[230010]: 2025-11-24 09:54:25.387 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:54:25 compute-1 NetworkManager[48870]: <info>  [1763978065.3889] manager: (tap8e927f01-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/26)
Nov 24 09:54:25 compute-1 kernel: tap8e927f01-70: entered promiscuous mode
Nov 24 09:54:25 compute-1 nova_compute[230010]: 2025-11-24 09:54:25.389 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:54:25 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:54:25.390 142336 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap8e927f01-70, col_values=(('external_ids', {'iface-id': 'a1fde06e-6df3-4ca6-8746-8510f661dd46'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 24 09:54:25 compute-1 ovn_controller[132966]: 2025-11-24T09:54:25Z|00031|binding|INFO|Releasing lport a1fde06e-6df3-4ca6-8746-8510f661dd46 from this chassis (sb_readonly=0)
Nov 24 09:54:25 compute-1 nova_compute[230010]: 2025-11-24 09:54:25.392 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:54:25 compute-1 nova_compute[230010]: 2025-11-24 09:54:25.404 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:54:25 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:54:25.406 142336 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/8e927f01-795d-4fd1-bd00-bd898db487a3.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/8e927f01-795d-4fd1-bd00-bd898db487a3.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 24 09:54:25 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:54:25.407 234803 DEBUG oslo.privsep.daemon [-] privsep: reply[f28e93be-8294-47d7-a365-b4a778d0a378]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 09:54:25 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:54:25.408 142336 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 24 09:54:25 compute-1 ovn_metadata_agent[142331]: global
Nov 24 09:54:25 compute-1 ovn_metadata_agent[142331]:     log         /dev/log local0 debug
Nov 24 09:54:25 compute-1 ovn_metadata_agent[142331]:     log-tag     haproxy-metadata-proxy-8e927f01-795d-4fd1-bd00-bd898db487a3
Nov 24 09:54:25 compute-1 ovn_metadata_agent[142331]:     user        root
Nov 24 09:54:25 compute-1 ovn_metadata_agent[142331]:     group       root
Nov 24 09:54:25 compute-1 ovn_metadata_agent[142331]:     maxconn     1024
Nov 24 09:54:25 compute-1 ovn_metadata_agent[142331]:     pidfile     /var/lib/neutron/external/pids/8e927f01-795d-4fd1-bd00-bd898db487a3.pid.haproxy
Nov 24 09:54:25 compute-1 ovn_metadata_agent[142331]:     daemon
Nov 24 09:54:25 compute-1 ovn_metadata_agent[142331]: 
Nov 24 09:54:25 compute-1 ovn_metadata_agent[142331]: defaults
Nov 24 09:54:25 compute-1 ovn_metadata_agent[142331]:     log global
Nov 24 09:54:25 compute-1 ovn_metadata_agent[142331]:     mode http
Nov 24 09:54:25 compute-1 ovn_metadata_agent[142331]:     option httplog
Nov 24 09:54:25 compute-1 ovn_metadata_agent[142331]:     option dontlognull
Nov 24 09:54:25 compute-1 ovn_metadata_agent[142331]:     option http-server-close
Nov 24 09:54:25 compute-1 ovn_metadata_agent[142331]:     option forwardfor
Nov 24 09:54:25 compute-1 ovn_metadata_agent[142331]:     retries                 3
Nov 24 09:54:25 compute-1 ovn_metadata_agent[142331]:     timeout http-request    30s
Nov 24 09:54:25 compute-1 ovn_metadata_agent[142331]:     timeout connect         30s
Nov 24 09:54:25 compute-1 ovn_metadata_agent[142331]:     timeout client          32s
Nov 24 09:54:25 compute-1 ovn_metadata_agent[142331]:     timeout server          32s
Nov 24 09:54:25 compute-1 ovn_metadata_agent[142331]:     timeout http-keep-alive 30s
Nov 24 09:54:25 compute-1 ovn_metadata_agent[142331]: 
Nov 24 09:54:25 compute-1 ovn_metadata_agent[142331]: 
Nov 24 09:54:25 compute-1 ovn_metadata_agent[142331]: listen listener
Nov 24 09:54:25 compute-1 ovn_metadata_agent[142331]:     bind 169.254.169.254:80
Nov 24 09:54:25 compute-1 ovn_metadata_agent[142331]:     server metadata /var/lib/neutron/metadata_proxy
Nov 24 09:54:25 compute-1 ovn_metadata_agent[142331]:     http-request add-header X-OVN-Network-ID 8e927f01-795d-4fd1-bd00-bd898db487a3
Nov 24 09:54:25 compute-1 ovn_metadata_agent[142331]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 24 09:54:25 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:54:25.409 142336 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-8e927f01-795d-4fd1-bd00-bd898db487a3', 'env', 'PROCESS_TAG=haproxy-8e927f01-795d-4fd1-bd00-bd898db487a3', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/8e927f01-795d-4fd1-bd00-bd898db487a3.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 24 09:54:25 compute-1 nova_compute[230010]: 2025-11-24 09:54:25.639 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:54:25 compute-1 podman[234884]: 2025-11-24 09:54:25.771324477 +0000 UTC m=+0.051787937 container create 83dd7d5f66fd8162aac1fe7de57d0040058e915a91081a990d0fc68eb8eb06d1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8e927f01-795d-4fd1-bd00-bd898db487a3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Nov 24 09:54:25 compute-1 systemd[1]: Started libpod-conmon-83dd7d5f66fd8162aac1fe7de57d0040058e915a91081a990d0fc68eb8eb06d1.scope.
Nov 24 09:54:25 compute-1 systemd[1]: Started libcrun container.
Nov 24 09:54:25 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f4be7663a828fa7a12df69277b0f29f124c5f09c02f4aa150357d874ae378fd/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 24 09:54:25 compute-1 podman[234884]: 2025-11-24 09:54:25.745324761 +0000 UTC m=+0.025788251 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 24 09:54:25 compute-1 podman[234884]: 2025-11-24 09:54:25.842971658 +0000 UTC m=+0.123435138 container init 83dd7d5f66fd8162aac1fe7de57d0040058e915a91081a990d0fc68eb8eb06d1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8e927f01-795d-4fd1-bd00-bd898db487a3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 24 09:54:25 compute-1 podman[234884]: 2025-11-24 09:54:25.849024536 +0000 UTC m=+0.129487996 container start 83dd7d5f66fd8162aac1fe7de57d0040058e915a91081a990d0fc68eb8eb06d1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8e927f01-795d-4fd1-bd00-bd898db487a3, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2)
Nov 24 09:54:25 compute-1 neutron-haproxy-ovnmeta-8e927f01-795d-4fd1-bd00-bd898db487a3[234900]: [NOTICE]   (234904) : New worker (234906) forked
Nov 24 09:54:25 compute-1 neutron-haproxy-ovnmeta-8e927f01-795d-4fd1-bd00-bd898db487a3[234900]: [NOTICE]   (234904) : Loading success.
Nov 24 09:54:25 compute-1 sudo[234915]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 09:54:25 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:54:25 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:54:25 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:54:25.942 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:54:25 compute-1 sudo[234915]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:54:25 compute-1 sudo[234915]: pam_unix(sudo:session): session closed for user root
Nov 24 09:54:26 compute-1 ceph-mon[80009]: pgmap v826: 353 pgs: 353 active+clean; 167 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 44 op/s
Nov 24 09:54:26 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:54:26 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:54:26 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:54:26.521 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:54:27 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:54:27 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:54:27 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:54:27.945 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:54:28 compute-1 ceph-mon[80009]: pgmap v827: 353 pgs: 353 active+clean; 167 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 3.6 MiB/s rd, 1.8 MiB/s wr, 109 op/s
Nov 24 09:54:28 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:54:28 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:54:28 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:54:28.524 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:54:29 compute-1 podman[234942]: 2025-11-24 09:54:29.345352538 +0000 UTC m=+0.072919034 container health_status 16798e42088c21440960ac0ba4f86339f9edab5c078277a1dcf4fb01c220329c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd)
Nov 24 09:54:29 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:54:29 compute-1 nova_compute[230010]: 2025-11-24 09:54:29.775 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:54:29 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:54:29 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:54:29 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:54:29.948 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:54:30 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 09:54:30 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:54:30 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:54:30 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:54:30 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:54:30 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:54:30.527 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:54:30 compute-1 NetworkManager[48870]: <info>  [1763978070.6333] manager: (patch-br-int-to-provnet-aec09a4d-39ae-42d2-80ba-0cd5b53fed5d): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/27)
Nov 24 09:54:30 compute-1 NetworkManager[48870]: <info>  [1763978070.6341] device (patch-br-int-to-provnet-aec09a4d-39ae-42d2-80ba-0cd5b53fed5d)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 24 09:54:30 compute-1 nova_compute[230010]: 2025-11-24 09:54:30.632 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:54:30 compute-1 NetworkManager[48870]: <info>  [1763978070.6352] manager: (patch-provnet-aec09a4d-39ae-42d2-80ba-0cd5b53fed5d-to-br-int): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/28)
Nov 24 09:54:30 compute-1 NetworkManager[48870]: <info>  [1763978070.6356] device (patch-provnet-aec09a4d-39ae-42d2-80ba-0cd5b53fed5d-to-br-int)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 24 09:54:30 compute-1 NetworkManager[48870]: <info>  [1763978070.6364] manager: (patch-provnet-aec09a4d-39ae-42d2-80ba-0cd5b53fed5d-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/29)
Nov 24 09:54:30 compute-1 NetworkManager[48870]: <info>  [1763978070.6370] manager: (patch-br-int-to-provnet-aec09a4d-39ae-42d2-80ba-0cd5b53fed5d): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/30)
Nov 24 09:54:30 compute-1 NetworkManager[48870]: <info>  [1763978070.6374] device (patch-br-int-to-provnet-aec09a4d-39ae-42d2-80ba-0cd5b53fed5d)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Nov 24 09:54:30 compute-1 NetworkManager[48870]: <info>  [1763978070.6377] device (patch-provnet-aec09a4d-39ae-42d2-80ba-0cd5b53fed5d-to-br-int)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Nov 24 09:54:30 compute-1 nova_compute[230010]: 2025-11-24 09:54:30.702 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:54:30 compute-1 nova_compute[230010]: 2025-11-24 09:54:30.703 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:54:30 compute-1 ovn_controller[132966]: 2025-11-24T09:54:30Z|00032|binding|INFO|Releasing lport a1fde06e-6df3-4ca6-8746-8510f661dd46 from this chassis (sb_readonly=0)
Nov 24 09:54:30 compute-1 nova_compute[230010]: 2025-11-24 09:54:30.712 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:54:31 compute-1 ceph-mon[80009]: pgmap v828: 353 pgs: 353 active+clean; 167 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 75 op/s
Nov 24 09:54:31 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:54:31 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:54:31 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:54:31.950 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:54:32 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:54:32 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:54:32 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:54:32.529 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:54:33 compute-1 podman[234966]: 2025-11-24 09:54:33.328433029 +0000 UTC m=+0.071409957 container health_status c4a7fece2a8abbd773f184380ebf0443df5298e1c3e7a580495db5c46749acd2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251118)
Nov 24 09:54:33 compute-1 ceph-mon[80009]: pgmap v829: 353 pgs: 353 active+clean; 167 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 16 KiB/s wr, 75 op/s
Nov 24 09:54:33 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:54:33 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 09:54:33 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:54:33.954 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 09:54:34 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:54:34 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:54:34 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:54:34 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:54:34.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:54:34 compute-1 nova_compute[230010]: 2025-11-24 09:54:34.778 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:54:35 compute-1 ovn_controller[132966]: 2025-11-24T09:54:35Z|00004|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:69:46:4d 10.100.0.22
Nov 24 09:54:35 compute-1 ovn_controller[132966]: 2025-11-24T09:54:35Z|00005|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:69:46:4d 10.100.0.22
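Annotation: the pinctrl thread in ovn-controller answers DHCP natively, and the OFFER/ACK pair above completes a lease for fa:16:3e:69:46:4d at 10.100.0.22. The ovn_controller records use a pipe-separated layout (timestamp|sequence|module|level|message), so a parse is a single split; a sketch assuming only the layout seen here:

    def parse_ovn_log(record: str) -> dict:
        # Layout taken from the ovn_controller lines above, e.g.
        # 2025-11-24T09:54:35Z|00004|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER ...
        ts, seq, module, level, message = record.split('|', 4)
        return {'ts': ts, 'seq': int(seq), 'module': module,
                'level': level, 'message': message}

    rec = parse_ovn_log('2025-11-24T09:54:35Z|00005|pinctrl(ovn_pinctrl0)|INFO|'
                        'DHCPACK fa:16:3e:69:46:4d 10.100.0.22')
    assert rec['level'] == 'INFO' and rec['message'].startswith('DHCPACK')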
Nov 24 09:54:35 compute-1 ceph-mon[80009]: pgmap v830: 353 pgs: 353 active+clean; 167 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 3.3 KiB/s wr, 65 op/s
Nov 24 09:54:35 compute-1 nova_compute[230010]: 2025-11-24 09:54:35.705 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:54:35 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:54:35 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:54:35 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:54:35.957 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:54:36 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:54:36 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:54:36 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:54:36.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:54:37 compute-1 ceph-mon[80009]: pgmap v831: 353 pgs: 353 active+clean; 167 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 3.3 KiB/s wr, 65 op/s
Nov 24 09:54:37 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:54:37 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:54:37 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:54:37.960 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:54:38 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:54:38 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:54:38 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:54:38.538 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:54:39 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:54:39 compute-1 ceph-mon[80009]: pgmap v832: 353 pgs: 353 active+clean; 200 MiB data, 349 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 129 op/s
Nov 24 09:54:39 compute-1 nova_compute[230010]: 2025-11-24 09:54:39.782 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:54:39 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:54:39 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:54:39 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:54:39.965 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:54:40 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:54:40 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:54:40 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:54:40.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:54:40 compute-1 nova_compute[230010]: 2025-11-24 09:54:40.710 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:54:41 compute-1 podman[234997]: 2025-11-24 09:54:41.352637653 +0000 UTC m=+0.081062933 container health_status 6794d9d2f16f865643977633ea2bfec0506af6c4f15262c278b71a4c0754ea0b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118)
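Annotation: both health_status=healthy events above are podman running the healthcheck declared in config_data ('test': '/openstack/healthcheck', bind-mounted at /openstack). The same check can be driven by hand; a sketch assuming podman's documented healthcheck semantics (exit status 0 means the check passed) and the container name from the log:

    import subprocess

    # 'podman healthcheck run' executes the container's configured check
    # and exits 0 when the container is healthy.
    result = subprocess.run(
        ['podman', 'healthcheck', 'run', 'ovn_metadata_agent'],
        capture_output=True, text=True,
    )
    print('healthy' if result.returncode == 0 else 'unhealthy')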
Nov 24 09:54:41 compute-1 ceph-mon[80009]: pgmap v833: 353 pgs: 353 active+clean; 200 MiB data, 349 MiB used, 60 GiB / 60 GiB avail; 304 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Nov 24 09:54:41 compute-1 nova_compute[230010]: 2025-11-24 09:54:41.726 230014 DEBUG oslo_concurrency.lockutils [None req-4b928f4d-3ada-4121-85c5-3feee966f18f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Acquiring lock "4313a8bf-5a2a-4de5-84e7-ead18a049c18" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 09:54:41 compute-1 nova_compute[230010]: 2025-11-24 09:54:41.727 230014 DEBUG oslo_concurrency.lockutils [None req-4b928f4d-3ada-4121-85c5-3feee966f18f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lock "4313a8bf-5a2a-4de5-84e7-ead18a049c18" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 09:54:41 compute-1 nova_compute[230010]: 2025-11-24 09:54:41.727 230014 DEBUG oslo_concurrency.lockutils [None req-4b928f4d-3ada-4121-85c5-3feee966f18f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Acquiring lock "4313a8bf-5a2a-4de5-84e7-ead18a049c18-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 09:54:41 compute-1 nova_compute[230010]: 2025-11-24 09:54:41.727 230014 DEBUG oslo_concurrency.lockutils [None req-4b928f4d-3ada-4121-85c5-3feee966f18f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lock "4313a8bf-5a2a-4de5-84e7-ead18a049c18-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 09:54:41 compute-1 nova_compute[230010]: 2025-11-24 09:54:41.727 230014 DEBUG oslo_concurrency.lockutils [None req-4b928f4d-3ada-4121-85c5-3feee966f18f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lock "4313a8bf-5a2a-4de5-84e7-ead18a049c18-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 09:54:41 compute-1 nova_compute[230010]: 2025-11-24 09:54:41.729 230014 INFO nova.compute.manager [None req-4b928f4d-3ada-4121-85c5-3feee966f18f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 4313a8bf-5a2a-4de5-84e7-ead18a049c18] Terminating instance
Nov 24 09:54:41 compute-1 nova_compute[230010]: 2025-11-24 09:54:41.729 230014 DEBUG nova.compute.manager [None req-4b928f4d-3ada-4121-85c5-3feee966f18f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 4313a8bf-5a2a-4de5-84e7-ead18a049c18] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
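Annotation: the lock lines above are the usual oslo.concurrency choreography around terminate_instance: nova serializes on the instance UUID, briefly takes a "<uuid>-events" lock to clear pending events, then begins teardown. A minimal sketch of the underlying primitive, using the generic lockutils API rather than nova's own wrappers:

    from oslo_concurrency import lockutils

    INSTANCE_UUID = '4313a8bf-5a2a-4de5-84e7-ead18a049c18'  # from the log above

    # lockutils.lock() is the named in-process lock whose acquire/release
    # pairs the manager logs as 'Acquiring lock ...' / '"released"'.
    with lockutils.lock(INSTANCE_UUID):
        with lockutils.lock(INSTANCE_UUID + '-events'):
            pass  # clear per-instance events
        # ... destroy the instance on the hypervisor ...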
Nov 24 09:54:41 compute-1 kernel: tap31962c69-e8 (unregistering): left promiscuous mode
Nov 24 09:54:41 compute-1 NetworkManager[48870]: <info>  [1763978081.7862] device (tap31962c69-e8): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 24 09:54:41 compute-1 ovn_controller[132966]: 2025-11-24T09:54:41Z|00033|binding|INFO|Releasing lport 31962c69-e86c-4431-b40a-e84cb6d9b71d from this chassis (sb_readonly=0)
Nov 24 09:54:41 compute-1 ovn_controller[132966]: 2025-11-24T09:54:41Z|00034|binding|INFO|Setting lport 31962c69-e86c-4431-b40a-e84cb6d9b71d down in Southbound
Nov 24 09:54:41 compute-1 nova_compute[230010]: 2025-11-24 09:54:41.787 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:54:41 compute-1 ovn_controller[132966]: 2025-11-24T09:54:41Z|00035|binding|INFO|Removing iface tap31962c69-e8 ovn-installed in OVS
Nov 24 09:54:41 compute-1 nova_compute[230010]: 2025-11-24 09:54:41.791 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:54:41 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:54:41.798 142336 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:69:46:4d 10.100.0.22'], port_security=['fa:16:3e:69:46:4d 10.100.0.22'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.22/28', 'neutron:device_id': '4313a8bf-5a2a-4de5-84e7-ead18a049c18', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8e927f01-795d-4fd1-bd00-bd898db487a3', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '94d069fc040647d5a6e54894eec915fe', 'neutron:revision_number': '4', 'neutron:security_group_ids': '841654bd-af9d-487b-9d46-e948edd0e4cf', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=12eb72db-6a1a-4bb9-9912-1e510973ae62, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5c78678ac0>], logical_port=31962c69-e86c-4431-b40a-e84cb6d9b71d) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5c78678ac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 24 09:54:41 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:54:41.799 142336 INFO neutron.agent.ovn.metadata.agent [-] Port 31962c69-e86c-4431-b40a-e84cb6d9b71d in datapath 8e927f01-795d-4fd1-bd00-bd898db487a3 unbound from our chassis
Nov 24 09:54:41 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:54:41.800 142336 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 8e927f01-795d-4fd1-bd00-bd898db487a3, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 24 09:54:41 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:54:41.801 234803 DEBUG oslo.privsep.daemon [-] privsep: reply[030ea456-7b77-4012-90fd-cd7fd8b13010]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 09:54:41 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:54:41.802 142336 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-8e927f01-795d-4fd1-bd00-bd898db487a3 namespace which is not needed anymore
Nov 24 09:54:41 compute-1 nova_compute[230010]: 2025-11-24 09:54:41.820 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:54:41 compute-1 systemd[1]: machine-qemu\x2d1\x2dinstance\x2d00000002.scope: Deactivated successfully.
Nov 24 09:54:41 compute-1 systemd[1]: machine-qemu\x2d1\x2dinstance\x2d00000002.scope: Consumed 14.456s CPU time.
Nov 24 09:54:41 compute-1 systemd-machined[193537]: Machine qemu-1-instance-00000002 terminated.
Nov 24 09:54:41 compute-1 neutron-haproxy-ovnmeta-8e927f01-795d-4fd1-bd00-bd898db487a3[234900]: [NOTICE]   (234904) : haproxy version is 2.8.14-c23fe91
Nov 24 09:54:41 compute-1 neutron-haproxy-ovnmeta-8e927f01-795d-4fd1-bd00-bd898db487a3[234900]: [NOTICE]   (234904) : path to executable is /usr/sbin/haproxy
Nov 24 09:54:41 compute-1 neutron-haproxy-ovnmeta-8e927f01-795d-4fd1-bd00-bd898db487a3[234900]: [WARNING]  (234904) : Exiting Master process...
Nov 24 09:54:41 compute-1 neutron-haproxy-ovnmeta-8e927f01-795d-4fd1-bd00-bd898db487a3[234900]: [ALERT]    (234904) : Current worker (234906) exited with code 143 (Terminated)
Nov 24 09:54:41 compute-1 neutron-haproxy-ovnmeta-8e927f01-795d-4fd1-bd00-bd898db487a3[234900]: [WARNING]  (234904) : All workers exited. Exiting... (0)
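Annotation: the haproxy ALERT above is benign during this teardown: exit code 143 is 128 + SIGTERM (signal 15), meaning the worker was killed by the master's shutdown signal rather than crashing. Decoding such codes:

    import signal

    code = 143  # from the haproxy ALERT above
    if code > 128:
        # 143 - 128 == 15 -> SIGTERM
        print('worker killed by', signal.Signals(code - 128).name)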
Nov 24 09:54:41 compute-1 systemd[1]: libpod-83dd7d5f66fd8162aac1fe7de57d0040058e915a91081a990d0fc68eb8eb06d1.scope: Deactivated successfully.
Nov 24 09:54:41 compute-1 conmon[234900]: conmon 83dd7d5f66fd8162aac1 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-83dd7d5f66fd8162aac1fe7de57d0040058e915a91081a990d0fc68eb8eb06d1.scope/container/memory.events
Nov 24 09:54:41 compute-1 podman[235043]: 2025-11-24 09:54:41.934045598 +0000 UTC m=+0.041129687 container died 83dd7d5f66fd8162aac1fe7de57d0040058e915a91081a990d0fc68eb8eb06d1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8e927f01-795d-4fd1-bd00-bd898db487a3, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 24 09:54:41 compute-1 nova_compute[230010]: 2025-11-24 09:54:41.951 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:54:41 compute-1 nova_compute[230010]: 2025-11-24 09:54:41.957 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:54:41 compute-1 nova_compute[230010]: 2025-11-24 09:54:41.961 230014 INFO nova.virt.libvirt.driver [-] [instance: 4313a8bf-5a2a-4de5-84e7-ead18a049c18] Instance destroyed successfully.
Nov 24 09:54:41 compute-1 nova_compute[230010]: 2025-11-24 09:54:41.963 230014 DEBUG nova.objects.instance [None req-4b928f4d-3ada-4121-85c5-3feee966f18f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lazy-loading 'resources' on Instance uuid 4313a8bf-5a2a-4de5-84e7-ead18a049c18 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 24 09:54:41 compute-1 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-83dd7d5f66fd8162aac1fe7de57d0040058e915a91081a990d0fc68eb8eb06d1-userdata-shm.mount: Deactivated successfully.
Nov 24 09:54:41 compute-1 systemd[1]: var-lib-containers-storage-overlay-2f4be7663a828fa7a12df69277b0f29f124c5f09c02f4aa150357d874ae378fd-merged.mount: Deactivated successfully.
Nov 24 09:54:41 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:54:41 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:54:41 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:54:41.968 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:54:41 compute-1 podman[235043]: 2025-11-24 09:54:41.977534332 +0000 UTC m=+0.084618451 container cleanup 83dd7d5f66fd8162aac1fe7de57d0040058e915a91081a990d0fc68eb8eb06d1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8e927f01-795d-4fd1-bd00-bd898db487a3, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Nov 24 09:54:41 compute-1 systemd[1]: libpod-conmon-83dd7d5f66fd8162aac1fe7de57d0040058e915a91081a990d0fc68eb8eb06d1.scope: Deactivated successfully.
Nov 24 09:54:42 compute-1 podman[235082]: 2025-11-24 09:54:42.046459276 +0000 UTC m=+0.038721668 container remove 83dd7d5f66fd8162aac1fe7de57d0040058e915a91081a990d0fc68eb8eb06d1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8e927f01-795d-4fd1-bd00-bd898db487a3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 09:54:42 compute-1 nova_compute[230010]: 2025-11-24 09:54:42.048 230014 DEBUG nova.compute.manager [req-9c5d89d1-2925-4249-a94b-c279b13219f2 req-7359e52f-689f-429d-993e-1bddde47ec61 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 4313a8bf-5a2a-4de5-84e7-ead18a049c18] Received event network-vif-unplugged-31962c69-e86c-4431-b40a-e84cb6d9b71d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 24 09:54:42 compute-1 nova_compute[230010]: 2025-11-24 09:54:42.048 230014 DEBUG oslo_concurrency.lockutils [req-9c5d89d1-2925-4249-a94b-c279b13219f2 req-7359e52f-689f-429d-993e-1bddde47ec61 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Acquiring lock "4313a8bf-5a2a-4de5-84e7-ead18a049c18-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 09:54:42 compute-1 nova_compute[230010]: 2025-11-24 09:54:42.049 230014 DEBUG oslo_concurrency.lockutils [req-9c5d89d1-2925-4249-a94b-c279b13219f2 req-7359e52f-689f-429d-993e-1bddde47ec61 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Lock "4313a8bf-5a2a-4de5-84e7-ead18a049c18-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 09:54:42 compute-1 nova_compute[230010]: 2025-11-24 09:54:42.050 230014 DEBUG oslo_concurrency.lockutils [req-9c5d89d1-2925-4249-a94b-c279b13219f2 req-7359e52f-689f-429d-993e-1bddde47ec61 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Lock "4313a8bf-5a2a-4de5-84e7-ead18a049c18-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 09:54:42 compute-1 nova_compute[230010]: 2025-11-24 09:54:42.050 230014 DEBUG nova.compute.manager [req-9c5d89d1-2925-4249-a94b-c279b13219f2 req-7359e52f-689f-429d-993e-1bddde47ec61 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 4313a8bf-5a2a-4de5-84e7-ead18a049c18] No waiting events found dispatching network-vif-unplugged-31962c69-e86c-4431-b40a-e84cb6d9b71d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 24 09:54:42 compute-1 nova_compute[230010]: 2025-11-24 09:54:42.050 230014 DEBUG nova.compute.manager [req-9c5d89d1-2925-4249-a94b-c279b13219f2 req-7359e52f-689f-429d-993e-1bddde47ec61 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 4313a8bf-5a2a-4de5-84e7-ead18a049c18] Received event network-vif-unplugged-31962c69-e86c-4431-b40a-e84cb6d9b71d for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 24 09:54:42 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:54:42.052 234803 DEBUG oslo.privsep.daemon [-] privsep: reply[14cbce50-5db7-4f3d-8438-18300116bcb9]: (4, ('Mon Nov 24 09:54:41 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-8e927f01-795d-4fd1-bd00-bd898db487a3 (83dd7d5f66fd8162aac1fe7de57d0040058e915a91081a990d0fc68eb8eb06d1)\n83dd7d5f66fd8162aac1fe7de57d0040058e915a91081a990d0fc68eb8eb06d1\nMon Nov 24 09:54:41 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-8e927f01-795d-4fd1-bd00-bd898db487a3 (83dd7d5f66fd8162aac1fe7de57d0040058e915a91081a990d0fc68eb8eb06d1)\n83dd7d5f66fd8162aac1fe7de57d0040058e915a91081a990d0fc68eb8eb06d1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 09:54:42 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:54:42.054 234803 DEBUG oslo.privsep.daemon [-] privsep: reply[11c4bef9-3c18-4b69-82a1-80a0d146cf5d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 09:54:42 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:54:42.055 142336 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8e927f01-70, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 24 09:54:42 compute-1 nova_compute[230010]: 2025-11-24 09:54:42.056 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:54:42 compute-1 kernel: tap8e927f01-70: left promiscuous mode
Nov 24 09:54:42 compute-1 nova_compute[230010]: 2025-11-24 09:54:42.074 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:54:42 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:54:42.076 234803 DEBUG oslo.privsep.daemon [-] privsep: reply[c3e444b7-cf65-45f9-89e9-164df4fd6e8a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 09:54:42 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:54:42.092 234803 DEBUG oslo.privsep.daemon [-] privsep: reply[de19768d-a505-4501-bdd3-feec01c8a68f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 09:54:42 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:54:42.094 234803 DEBUG oslo.privsep.daemon [-] privsep: reply[79220a0e-548b-448e-af54-714f08ad2aa2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 09:54:42 compute-1 nova_compute[230010]: 2025-11-24 09:54:42.099 230014 DEBUG nova.virt.libvirt.vif [None req-4b928f4d-3ada-4121-85c5-3feee966f18f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-24T09:54:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-904956127',display_name='tempest-TestNetworkBasicOps-server-904956127',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-904956127',id=2,image_ref='6ef14bdf-4f04-4400-8040-4409d9d5271e',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBk6JkAuduqittiDGA4pBhSCzmrjSnKU2daRXm5XDAaZpUlbHNfHVDmOyWJWR78b4GrvBoMlHYEMPqcBJQA/sKOhpsOzfkRRFgAuDlkN09WkiLcyZB4s6iUYsG2XLZZzXw==',key_name='tempest-TestNetworkBasicOps-831372657',keypairs=<?>,launch_index=0,launched_at=2025-11-24T09:54:21Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='94d069fc040647d5a6e54894eec915fe',ramdisk_id='',reservation_id='r-gukftcea',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6ef14bdf-4f04-4400-8040-4409d9d5271e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1844071378',owner_user_name='tempest-TestNetworkBasicOps-1844071378-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-24T09:54:21Z,user_data=None,user_id='43f79ff3105e4372a3c095e8057d4f1f',uuid=4313a8bf-5a2a-4de5-84e7-ead18a049c18,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "31962c69-e86c-4431-b40a-e84cb6d9b71d", "address": "fa:16:3e:69:46:4d", "network": {"id": "8e927f01-795d-4fd1-bd00-bd898db487a3", "bridge": "br-int", "label": "tempest-network-smoke--503305566", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.22", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94d069fc040647d5a6e54894eec915fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap31962c69-e8", "ovs_interfaceid": "31962c69-e86c-4431-b40a-e84cb6d9b71d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 24 09:54:42 compute-1 nova_compute[230010]: 2025-11-24 09:54:42.100 230014 DEBUG nova.network.os_vif_util [None req-4b928f4d-3ada-4121-85c5-3feee966f18f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Converting VIF {"id": "31962c69-e86c-4431-b40a-e84cb6d9b71d", "address": "fa:16:3e:69:46:4d", "network": {"id": "8e927f01-795d-4fd1-bd00-bd898db487a3", "bridge": "br-int", "label": "tempest-network-smoke--503305566", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.22", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94d069fc040647d5a6e54894eec915fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap31962c69-e8", "ovs_interfaceid": "31962c69-e86c-4431-b40a-e84cb6d9b71d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 24 09:54:42 compute-1 nova_compute[230010]: 2025-11-24 09:54:42.101 230014 DEBUG nova.network.os_vif_util [None req-4b928f4d-3ada-4121-85c5-3feee966f18f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:69:46:4d,bridge_name='br-int',has_traffic_filtering=True,id=31962c69-e86c-4431-b40a-e84cb6d9b71d,network=Network(8e927f01-795d-4fd1-bd00-bd898db487a3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap31962c69-e8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 24 09:54:42 compute-1 nova_compute[230010]: 2025-11-24 09:54:42.102 230014 DEBUG os_vif [None req-4b928f4d-3ada-4121-85c5-3feee966f18f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:69:46:4d,bridge_name='br-int',has_traffic_filtering=True,id=31962c69-e86c-4431-b40a-e84cb6d9b71d,network=Network(8e927f01-795d-4fd1-bd00-bd898db487a3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap31962c69-e8') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 24 09:54:42 compute-1 nova_compute[230010]: 2025-11-24 09:54:42.106 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:54:42 compute-1 nova_compute[230010]: 2025-11-24 09:54:42.106 230014 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap31962c69-e8, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 24 09:54:42 compute-1 nova_compute[230010]: 2025-11-24 09:54:42.108 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:54:42 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:54:42.109 234803 DEBUG oslo.privsep.daemon [-] privsep: reply[b08c03d6-e07f-4768-9002-2e2d91b684e6]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 400874, 'reachable_time': 43312, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 235100, 'error': None, 'target': 'ovnmeta-8e927f01-795d-4fd1-bd00-bd898db487a3', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 09:54:42 compute-1 nova_compute[230010]: 2025-11-24 09:54:42.111 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 24 09:54:42 compute-1 nova_compute[230010]: 2025-11-24 09:54:42.114 230014 INFO os_vif [None req-4b928f4d-3ada-4121-85c5-3feee966f18f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:69:46:4d,bridge_name='br-int',has_traffic_filtering=True,id=31962c69-e86c-4431-b40a-e84cb6d9b71d,network=Network(8e927f01-795d-4fd1-bd00-bd898db487a3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap31962c69-e8')
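Annotation: the DelPortCommand transactions logged above (port=tap31962c69-e8, bridge=br-int, if_exists=True) are the OVSDB-IDL form of an ovs-vsctl del-port. A shell-level equivalent, assuming ovs-vsctl is available on the host:

    import subprocess

    # --if-exists mirrors if_exists=True: deleting an already-removed
    # port is a no-op instead of an error.
    subprocess.run(
        ['ovs-vsctl', '--if-exists', 'del-port', 'br-int', 'tap31962c69-e8'],
        check=True,
    )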
Nov 24 09:54:42 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:54:42.120 142476 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-8e927f01-795d-4fd1-bd00-bd898db487a3 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 24 09:54:42 compute-1 systemd[1]: run-netns-ovnmeta\x2d8e927f01\x2d795d\x2d4fd1\x2dbd00\x2dbd898db487a3.mount: Deactivated successfully.
Nov 24 09:54:42 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:54:42.121 142476 DEBUG oslo.privsep.daemon [-] privsep: reply[df88bb95-887a-48ea-a440-9b60698c406e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 09:54:42 compute-1 nova_compute[230010]: 2025-11-24 09:54:42.518 230014 INFO nova.virt.libvirt.driver [None req-4b928f4d-3ada-4121-85c5-3feee966f18f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 4313a8bf-5a2a-4de5-84e7-ead18a049c18] Deleting instance files /var/lib/nova/instances/4313a8bf-5a2a-4de5-84e7-ead18a049c18_del
Nov 24 09:54:42 compute-1 nova_compute[230010]: 2025-11-24 09:54:42.519 230014 INFO nova.virt.libvirt.driver [None req-4b928f4d-3ada-4121-85c5-3feee966f18f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 4313a8bf-5a2a-4de5-84e7-ead18a049c18] Deletion of /var/lib/nova/instances/4313a8bf-5a2a-4de5-84e7-ead18a049c18_del complete
Nov 24 09:54:42 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:54:42 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:54:42 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:54:42.545 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:54:42 compute-1 nova_compute[230010]: 2025-11-24 09:54:42.606 230014 DEBUG nova.virt.libvirt.host [None req-4b928f4d-3ada-4121-85c5-3feee966f18f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Checking UEFI support for host arch (x86_64) supports_uefi /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1754
Nov 24 09:54:42 compute-1 nova_compute[230010]: 2025-11-24 09:54:42.606 230014 INFO nova.virt.libvirt.host [None req-4b928f4d-3ada-4121-85c5-3feee966f18f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] UEFI support detected
Nov 24 09:54:42 compute-1 nova_compute[230010]: 2025-11-24 09:54:42.608 230014 INFO nova.compute.manager [None req-4b928f4d-3ada-4121-85c5-3feee966f18f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 4313a8bf-5a2a-4de5-84e7-ead18a049c18] Took 0.88 seconds to destroy the instance on the hypervisor.
Nov 24 09:54:42 compute-1 nova_compute[230010]: 2025-11-24 09:54:42.608 230014 DEBUG oslo.service.loopingcall [None req-4b928f4d-3ada-4121-85c5-3feee966f18f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 24 09:54:42 compute-1 nova_compute[230010]: 2025-11-24 09:54:42.608 230014 DEBUG nova.compute.manager [-] [instance: 4313a8bf-5a2a-4de5-84e7-ead18a049c18] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 24 09:54:42 compute-1 nova_compute[230010]: 2025-11-24 09:54:42.608 230014 DEBUG nova.network.neutron [-] [instance: 4313a8bf-5a2a-4de5-84e7-ead18a049c18] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 24 09:54:43 compute-1 nova_compute[230010]: 2025-11-24 09:54:43.240 230014 DEBUG nova.network.neutron [-] [instance: 4313a8bf-5a2a-4de5-84e7-ead18a049c18] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 24 09:54:43 compute-1 nova_compute[230010]: 2025-11-24 09:54:43.251 230014 INFO nova.compute.manager [-] [instance: 4313a8bf-5a2a-4de5-84e7-ead18a049c18] Took 0.64 seconds to deallocate network for instance.
Nov 24 09:54:43 compute-1 nova_compute[230010]: 2025-11-24 09:54:43.292 230014 DEBUG oslo_concurrency.lockutils [None req-4b928f4d-3ada-4121-85c5-3feee966f18f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 09:54:43 compute-1 nova_compute[230010]: 2025-11-24 09:54:43.292 230014 DEBUG oslo_concurrency.lockutils [None req-4b928f4d-3ada-4121-85c5-3feee966f18f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 09:54:43 compute-1 nova_compute[230010]: 2025-11-24 09:54:43.335 230014 DEBUG oslo_concurrency.processutils [None req-4b928f4d-3ada-4121-85c5-3feee966f18f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 09:54:43 compute-1 ceph-mon[80009]: pgmap v834: 353 pgs: 353 active+clean; 200 MiB data, 349 MiB used, 60 GiB / 60 GiB avail; 305 KiB/s rd, 2.2 MiB/s wr, 66 op/s
Nov 24 09:54:43 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 24 09:54:43 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1476535774' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 09:54:43 compute-1 nova_compute[230010]: 2025-11-24 09:54:43.789 230014 DEBUG oslo_concurrency.processutils [None req-4b928f4d-3ada-4121-85c5-3feee966f18f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.454s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
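Annotation: for the disk-inventory refresh, the resource tracker shells out to the exact command logged above and parses its JSON output. A minimal reproduction, assuming the same cephx client and conf path as the log, and the 'stats' field names used by recent Ceph releases:

    import json
    import subprocess

    # Exact command from the log; needs the 'openstack' client keyring
    # referenced by /etc/ceph/ceph.conf.
    out = subprocess.check_output(
        ['ceph', 'df', '--format=json', '--id', 'openstack',
         '--conf', '/etc/ceph/ceph.conf'],
        text=True,
    )
    stats = json.loads(out)
    # Cluster-wide totals; the per-pool breakdown is under stats['pools'].
    print(stats['stats']['total_bytes'], stats['stats']['total_avail_bytes'])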
Nov 24 09:54:43 compute-1 nova_compute[230010]: 2025-11-24 09:54:43.795 230014 DEBUG nova.compute.provider_tree [None req-4b928f4d-3ada-4121-85c5-3feee966f18f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Inventory has not changed in ProviderTree for provider: 1b7b0f22-dba8-42a8-9de3-763c9152946e update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 24 09:54:43 compute-1 nova_compute[230010]: 2025-11-24 09:54:43.807 230014 DEBUG nova.scheduler.client.report [None req-4b928f4d-3ada-4121-85c5-3feee966f18f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Inventory has not changed for provider 1b7b0f22-dba8-42a8-9de3-763c9152946e based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 24 09:54:43 compute-1 nova_compute[230010]: 2025-11-24 09:54:43.824 230014 DEBUG oslo_concurrency.lockutils [None req-4b928f4d-3ada-4121-85c5-3feee966f18f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.532s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 09:54:43 compute-1 nova_compute[230010]: 2025-11-24 09:54:43.848 230014 INFO nova.scheduler.client.report [None req-4b928f4d-3ada-4121-85c5-3feee966f18f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Deleted allocations for instance 4313a8bf-5a2a-4de5-84e7-ead18a049c18
Nov 24 09:54:43 compute-1 nova_compute[230010]: 2025-11-24 09:54:43.903 230014 DEBUG oslo_concurrency.lockutils [None req-4b928f4d-3ada-4121-85c5-3feee966f18f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lock "4313a8bf-5a2a-4de5-84e7-ead18a049c18" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.176s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 09:54:43 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:54:43 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:54:43 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:54:43.970 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:54:44 compute-1 nova_compute[230010]: 2025-11-24 09:54:44.126 230014 DEBUG nova.compute.manager [req-06a12199-37fc-4dfc-99dd-0a4ebfca284f req-3c576571-d042-4490-8eed-c564b81d28d3 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 4313a8bf-5a2a-4de5-84e7-ead18a049c18] Received event network-vif-plugged-31962c69-e86c-4431-b40a-e84cb6d9b71d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 24 09:54:44 compute-1 nova_compute[230010]: 2025-11-24 09:54:44.127 230014 DEBUG oslo_concurrency.lockutils [req-06a12199-37fc-4dfc-99dd-0a4ebfca284f req-3c576571-d042-4490-8eed-c564b81d28d3 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Acquiring lock "4313a8bf-5a2a-4de5-84e7-ead18a049c18-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 09:54:44 compute-1 nova_compute[230010]: 2025-11-24 09:54:44.127 230014 DEBUG oslo_concurrency.lockutils [req-06a12199-37fc-4dfc-99dd-0a4ebfca284f req-3c576571-d042-4490-8eed-c564b81d28d3 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Lock "4313a8bf-5a2a-4de5-84e7-ead18a049c18-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 09:54:44 compute-1 nova_compute[230010]: 2025-11-24 09:54:44.127 230014 DEBUG oslo_concurrency.lockutils [req-06a12199-37fc-4dfc-99dd-0a4ebfca284f req-3c576571-d042-4490-8eed-c564b81d28d3 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Lock "4313a8bf-5a2a-4de5-84e7-ead18a049c18-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 09:54:44 compute-1 nova_compute[230010]: 2025-11-24 09:54:44.127 230014 DEBUG nova.compute.manager [req-06a12199-37fc-4dfc-99dd-0a4ebfca284f req-3c576571-d042-4490-8eed-c564b81d28d3 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 4313a8bf-5a2a-4de5-84e7-ead18a049c18] No waiting events found dispatching network-vif-plugged-31962c69-e86c-4431-b40a-e84cb6d9b71d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 24 09:54:44 compute-1 nova_compute[230010]: 2025-11-24 09:54:44.128 230014 WARNING nova.compute.manager [req-06a12199-37fc-4dfc-99dd-0a4ebfca284f req-3c576571-d042-4490-8eed-c564b81d28d3 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 4313a8bf-5a2a-4de5-84e7-ead18a049c18] Received unexpected event network-vif-plugged-31962c69-e86c-4431-b40a-e84cb6d9b71d for instance with vm_state deleted and task_state None.
Nov 24 09:54:44 compute-1 nova_compute[230010]: 2025-11-24 09:54:44.128 230014 DEBUG nova.compute.manager [req-06a12199-37fc-4dfc-99dd-0a4ebfca284f req-3c576571-d042-4490-8eed-c564b81d28d3 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 4313a8bf-5a2a-4de5-84e7-ead18a049c18] Received event network-vif-deleted-31962c69-e86c-4431-b40a-e84cb6d9b71d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 24 09:54:44 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:54:44 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:54:44 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:54:44 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:54:44.549 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:54:44 compute-1 ceph-mon[80009]: from='client.? 192.168.122.101:0/1476535774' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 09:54:45 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 09:54:45 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:54:45 compute-1 ceph-mon[80009]: pgmap v835: 353 pgs: 353 active+clean; 200 MiB data, 349 MiB used, 60 GiB / 60 GiB avail; 304 KiB/s rd, 2.2 MiB/s wr, 66 op/s
Nov 24 09:54:45 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:54:45 compute-1 nova_compute[230010]: 2025-11-24 09:54:45.712 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:54:45 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:54:45 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:54:45 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:54:45.973 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:54:46 compute-1 sudo[235145]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 09:54:46 compute-1 sudo[235145]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:54:46 compute-1 sudo[235145]: pam_unix(sudo:session): session closed for user root
Nov 24 09:54:46 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:54:46 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:54:46 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:54:46.552 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:54:46 compute-1 nova_compute[230010]: 2025-11-24 09:54:46.606 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:54:46 compute-1 nova_compute[230010]: 2025-11-24 09:54:46.704 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:54:47 compute-1 nova_compute[230010]: 2025-11-24 09:54:47.109 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:54:47 compute-1 ceph-mon[80009]: pgmap v836: 353 pgs: 353 active+clean; 200 MiB data, 349 MiB used, 60 GiB / 60 GiB avail; 304 KiB/s rd, 2.2 MiB/s wr, 66 op/s
Nov 24 09:54:47 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:54:47 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:54:47 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:54:47.975 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:54:48 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:54:48 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 09:54:48 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:54:48.554 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 09:54:49 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:54:49 compute-1 ceph-mon[80009]: pgmap v837: 353 pgs: 353 active+clean; 121 MiB data, 314 MiB used, 60 GiB / 60 GiB avail; 324 KiB/s rd, 2.2 MiB/s wr, 94 op/s
Nov 24 09:54:49 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:54:49 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:54:49 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:54:49.978 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:54:50 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 24 09:54:50 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2362967868' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
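[Annotation] The audit channel here records client.openstack dispatching the JSON mon command {"prefix": "df", "format": "json"} — the same payload python-rados sends. A minimal reproduction, assuming the client.openstack keyring is readable locally and the JSON field name matches current Ceph releases:

    import json
    import rados

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf',
                          name='client.openstack')
    cluster.connect()
    # This emits exactly the cmd=[{"prefix": "df", ...}] audit entry above.
    ret, out, errs = cluster.mon_command(
        json.dumps({'prefix': 'df', 'format': 'json'}), b'')
    print(json.loads(out)['stats']['total_avail_bytes'])
    cluster.shutdown()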
Nov 24 09:54:50 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:54:50 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:54:50 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:54:50.557 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:54:50 compute-1 nova_compute[230010]: 2025-11-24 09:54:50.713 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:54:50 compute-1 ceph-mon[80009]: from='client.? 192.168.122.100:0/2362967868' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 09:54:51 compute-1 ceph-mon[80009]: pgmap v838: 353 pgs: 353 active+clean; 121 MiB data, 314 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 22 KiB/s wr, 30 op/s
Nov 24 09:54:51 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:54:51 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:54:51 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:54:51.981 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:54:52 compute-1 nova_compute[230010]: 2025-11-24 09:54:52.110 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:54:52 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:54:52 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:54:52 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:54:52.560 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:54:53 compute-1 ceph-mon[80009]: pgmap v839: 353 pgs: 353 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 39 KiB/s rd, 24 KiB/s wr, 58 op/s
Nov 24 09:54:53 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:54:53 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:54:53 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:54:53.983 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:54:54 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:54:54 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:54:54 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:54:54 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:54:54.562 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:54:55 compute-1 nova_compute[230010]: 2025-11-24 09:54:55.716 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:54:55 compute-1 ceph-mon[80009]: pgmap v840: 353 pgs: 353 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 39 KiB/s rd, 3.3 KiB/s wr, 56 op/s
Nov 24 09:54:55 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:54:55 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:54:55 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:54:55.986 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:54:56 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:54:56 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:54:56 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:54:56.565 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:54:56 compute-1 nova_compute[230010]: 2025-11-24 09:54:56.960 230014 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763978081.958483, 4313a8bf-5a2a-4de5-84e7-ead18a049c18 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 24 09:54:56 compute-1 nova_compute[230010]: 2025-11-24 09:54:56.960 230014 INFO nova.compute.manager [-] [instance: 4313a8bf-5a2a-4de5-84e7-ead18a049c18] VM Stopped (Lifecycle Event)
Nov 24 09:54:56 compute-1 nova_compute[230010]: 2025-11-24 09:54:56.977 230014 DEBUG nova.compute.manager [None req-de1e3bcb-8ad7-4049-aa46-0de17f47301f - - - - - -] [instance: 4313a8bf-5a2a-4de5-84e7-ead18a049c18] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 24 09:54:57 compute-1 nova_compute[230010]: 2025-11-24 09:54:57.112 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:54:57 compute-1 sudo[235177]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 09:54:57 compute-1 sudo[235177]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:54:57 compute-1 sudo[235177]: pam_unix(sudo:session): session closed for user root
Nov 24 09:54:57 compute-1 sudo[235202]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Nov 24 09:54:57 compute-1 sudo[235202]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:54:57 compute-1 ceph-mon[80009]: pgmap v841: 353 pgs: 353 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 39 KiB/s rd, 3.3 KiB/s wr, 56 op/s
Nov 24 09:54:57 compute-1 sudo[235202]: pam_unix(sudo:session): session closed for user root
Nov 24 09:54:57 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:54:57 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:54:57 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:54:57.987 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:54:58 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 24 09:54:58 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 09:54:58 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Nov 24 09:54:58 compute-1 ceph-mon[80009]: log_channel(audit) log [INF] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 09:54:58 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Nov 24 09:54:58 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Nov 24 09:54:58 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Nov 24 09:54:58 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 09:54:58 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Nov 24 09:54:58 compute-1 ceph-mon[80009]: log_channel(audit) log [INF] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 09:54:58 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 24 09:54:58 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 09:54:58 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:54:58 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:54:58 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:54:58.567 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:54:58 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 09:54:58 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 09:54:58 compute-1 ceph-mon[80009]: pgmap v842: 353 pgs: 353 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 40 KiB/s rd, 3.4 KiB/s wr, 58 op/s
Nov 24 09:54:58 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:54:58 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:54:58 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 09:54:58 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 09:54:58 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 09:54:59 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:54:59 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:54:59 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:54:59 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:54:59.989 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:55:00 compute-1 podman[235260]: 2025-11-24 09:55:00.320163686 +0000 UTC m=+0.058013068 container health_status 16798e42088c21440960ac0ba4f86339f9edab5c078277a1dcf4fb01c220329c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=multipathd)
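[Annotation] This podman line is a periodic container health check: the healthcheck block in config_data mounts /var/lib/openstack/healthchecks/multipathd and runs /openstack/healthcheck, and health_status=healthy is the result. The same check can be triggered on demand; the container name is taken from the log line:

    # Runs the container's configured healthcheck once; exit status 0
    # corresponds to the health_status=healthy field logged above.
    import subprocess

    res = subprocess.run(['podman', 'healthcheck', 'run', 'multipathd'])
    print('healthy' if res.returncode == 0 else 'unhealthy')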
Nov 24 09:55:00 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 09:55:00 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:55:00 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:55:00 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:55:00 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:55:00.570 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:55:00 compute-1 nova_compute[230010]: 2025-11-24 09:55:00.717 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:55:01 compute-1 ceph-mon[80009]: pgmap v843: 353 pgs: 353 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.2 KiB/s wr, 29 op/s
Nov 24 09:55:01 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:55:01 compute-1 ceph-mon[80009]: from='client.? 192.168.122.10:0/1926278703' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 24 09:55:01 compute-1 ceph-mon[80009]: from='client.? 192.168.122.10:0/1926278703' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 24 09:55:01 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:55:01 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:55:01 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:55:01.991 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:55:02 compute-1 nova_compute[230010]: 2025-11-24 09:55:02.113 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:55:02 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 24 09:55:02 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 24 09:55:02 compute-1 sudo[235281]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 24 09:55:02 compute-1 sudo[235281]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:55:02 compute-1 sudo[235281]: pam_unix(sudo:session): session closed for user root
Nov 24 09:55:02 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:55:02 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:55:02 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:55:02.573 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:55:03 compute-1 ceph-mon[80009]: pgmap v844: 353 pgs: 353 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 1.2 KiB/s wr, 30 op/s
Nov 24 09:55:03 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:55:03 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:55:03 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:55:03 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:55:03 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:55:03.994 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:55:04 compute-1 podman[235307]: 2025-11-24 09:55:04.337209679 +0000 UTC m=+0.076178213 container health_status c4a7fece2a8abbd773f184380ebf0443df5298e1c3e7a580495db5c46749acd2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Nov 24 09:55:04 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:55:04 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:55:04 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:55:04 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:55:04.575 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:55:05 compute-1 ceph-mon[80009]: pgmap v845: 353 pgs: 353 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Nov 24 09:55:05 compute-1 nova_compute[230010]: 2025-11-24 09:55:05.720 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:55:05 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:55:05 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:55:05 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:55:05.996 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:55:06 compute-1 sudo[235335]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 09:55:06 compute-1 sudo[235335]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:55:06 compute-1 sudo[235335]: pam_unix(sudo:session): session closed for user root
Nov 24 09:55:06 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:55:06 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:55:06 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:55:06.579 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:55:07 compute-1 nova_compute[230010]: 2025-11-24 09:55:07.115 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:55:07 compute-1 ceph-mon[80009]: pgmap v846: 353 pgs: 353 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Nov 24 09:55:07 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:55:07.724 142336 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=5, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '86:13:51', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '4e:f0:a8:6f:5e:1b'}, ipsec=False) old=SB_Global(nb_cfg=4) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 24 09:55:07 compute-1 nova_compute[230010]: 2025-11-24 09:55:07.725 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:55:07 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:55:07.726 142336 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 24 09:55:08 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:55:08 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:55:08 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:55:07.998 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:55:08 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:55:08 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:55:08 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:55:08.583 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:55:09 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:55:09 compute-1 ceph-mon[80009]: pgmap v847: 353 pgs: 353 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 09:55:10 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:55:10 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 09:55:10 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:55:10.001 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 09:55:10 compute-1 ceph-mon[80009]: pgmap v848: 353 pgs: 353 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 09:55:10 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:55:10 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:55:10 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:55:10.585 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:55:10 compute-1 nova_compute[230010]: 2025-11-24 09:55:10.722 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:55:12 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:55:12 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.002000048s ======
Nov 24 09:55:12 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:55:12.004 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000048s
Nov 24 09:55:12 compute-1 podman[235363]: 2025-11-24 09:55:12.088230333 +0000 UTC m=+0.051536060 container health_status 6794d9d2f16f865643977633ea2bfec0506af6c4f15262c278b71a4c0754ea0b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Nov 24 09:55:12 compute-1 nova_compute[230010]: 2025-11-24 09:55:12.117 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:55:12 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:55:12 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:55:12 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:55:12.588 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:55:12 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:55:12.728 142336 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=803b139a-7fca-4549-8597-645cf677225d, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '5'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
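[Annotation] This transaction closes the loop opened at 09:55:07: after the logged 5-second delay, the metadata agent acknowledges SB_Global nb_cfg=5 by writing neutron:ovn-metadata-sb-cfg into its Chassis_Private row. A sketch of that write via ovsdbapp's db_set; `sb_api` stands for an already-connected southbound API object (building one needs the SB endpoint and schema, omitted here):

    # Sketch of the logged DbSetCommand; `sb_api` is assumed to be a
    # connected ovsdbapp southbound Idl wrapper, not constructed here.
    def ack_nb_cfg(sb_api, chassis_private_uuid, nb_cfg):
        sb_api.db_set(
            'Chassis_Private', chassis_private_uuid,
            ('external_ids', {'neutron:ovn-metadata-sb-cfg': str(nb_cfg)}),
        ).execute(check_error=True)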
Nov 24 09:55:13 compute-1 ceph-mon[80009]: pgmap v849: 353 pgs: 353 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 09:55:13 compute-1 nova_compute[230010]: 2025-11-24 09:55:13.764 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 09:55:13 compute-1 nova_compute[230010]: 2025-11-24 09:55:13.764 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 09:55:13 compute-1 nova_compute[230010]: 2025-11-24 09:55:13.765 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 09:55:13 compute-1 nova_compute[230010]: 2025-11-24 09:55:13.765 230014 DEBUG nova.compute.manager [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 24 09:55:13 compute-1 nova_compute[230010]: 2025-11-24 09:55:13.765 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
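[Annotation] The run of "Running periodic task ComputeManager._*" lines is oslo.service's periodic-task machinery walking every decorated method on the manager (note _reclaim_queued_deletes short-circuits because reclaim_instance_interval <= 0). A self-contained sketch of the pattern, with illustrative names:

    from oslo_config import cfg
    from oslo_service import periodic_task

    CONF = cfg.CONF

    class DemoManager(periodic_task.PeriodicTasks):
        def __init__(self):
            super().__init__(CONF)

        # run_immediately=True so a single run_periodic_tasks() call
        # fires it; Nova's tasks are invoked the same way on a timer.
        @periodic_task.periodic_task(spacing=60, run_immediately=True)
        def _poll_demo(self, context):
            print('running demo task')

    DemoManager().run_periodic_tasks(context=None)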
Nov 24 09:55:13 compute-1 nova_compute[230010]: 2025-11-24 09:55:13.791 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 09:55:13 compute-1 nova_compute[230010]: 2025-11-24 09:55:13.791 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 09:55:13 compute-1 nova_compute[230010]: 2025-11-24 09:55:13.791 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 09:55:13 compute-1 nova_compute[230010]: 2025-11-24 09:55:13.791 230014 DEBUG nova.compute.resource_tracker [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 24 09:55:13 compute-1 nova_compute[230010]: 2025-11-24 09:55:13.792 230014 DEBUG oslo_concurrency.processutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 09:55:14 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:55:14 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:55:14 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:55:14.008 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:55:14 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 24 09:55:14 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/69888656' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 09:55:14 compute-1 nova_compute[230010]: 2025-11-24 09:55:14.229 230014 DEBUG oslo_concurrency.processutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.438s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
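[Annotation] The resource audit shells out to the ceph CLI through oslo.concurrency's processutils, which is what produces the paired "Running cmd"/"returned: 0" lines above. Reproducing the probe with the same arguments as logged (JSON field name as reported by current Ceph releases):

    import json
    from oslo_concurrency import processutils

    out, _err = processutils.execute(
        'ceph', 'df', '--format=json',
        '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')
    # total_avail_bytes feeds the free-disk figure in the resource view.
    print(json.loads(out)['stats']['total_avail_bytes'])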
Nov 24 09:55:14 compute-1 nova_compute[230010]: 2025-11-24 09:55:14.373 230014 WARNING nova.virt.libvirt.driver [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 24 09:55:14 compute-1 nova_compute[230010]: 2025-11-24 09:55:14.375 230014 DEBUG nova.compute.resource_tracker [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=5047MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 24 09:55:14 compute-1 nova_compute[230010]: 2025-11-24 09:55:14.375 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 09:55:14 compute-1 nova_compute[230010]: 2025-11-24 09:55:14.375 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 09:55:14 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:55:14 compute-1 nova_compute[230010]: 2025-11-24 09:55:14.442 230014 DEBUG nova.compute.resource_tracker [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 24 09:55:14 compute-1 nova_compute[230010]: 2025-11-24 09:55:14.442 230014 DEBUG nova.compute.resource_tracker [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 24 09:55:14 compute-1 nova_compute[230010]: 2025-11-24 09:55:14.460 230014 DEBUG oslo_concurrency.processutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 09:55:14 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:55:14 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:55:14 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:55:14.590 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:55:14 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 24 09:55:14 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1018635266' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 09:55:14 compute-1 nova_compute[230010]: 2025-11-24 09:55:14.876 230014 DEBUG oslo_concurrency.processutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.416s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 09:55:14 compute-1 nova_compute[230010]: 2025-11-24 09:55:14.882 230014 DEBUG nova.compute.provider_tree [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Inventory has not changed in ProviderTree for provider: 1b7b0f22-dba8-42a8-9de3-763c9152946e update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 24 09:55:14 compute-1 nova_compute[230010]: 2025-11-24 09:55:14.897 230014 DEBUG nova.scheduler.client.report [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Inventory has not changed for provider 1b7b0f22-dba8-42a8-9de3-763c9152946e based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
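[Annotation] The inventory dict above is what placement uses to size this node; effective capacity per resource class is (total - reserved) * allocation_ratio. Checking the arithmetic against the logged values:

    # capacity = (total - reserved) * allocation_ratio, per resource class,
    # using the inventory values from the log line above.
    inventory = {
        'MEMORY_MB': {'total': 7680, 'reserved': 512, 'allocation_ratio': 1.0},
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'DISK_GB':   {'total': 59,   'reserved': 1,   'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        cap = (inv['total'] - inv['reserved']) * inv['allocation_ratio']
        print(rc, cap)   # MEMORY_MB 7168.0, VCPU 32.0, DISK_GB 52.2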
Nov 24 09:55:14 compute-1 nova_compute[230010]: 2025-11-24 09:55:14.936 230014 DEBUG nova.compute.resource_tracker [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 24 09:55:14 compute-1 nova_compute[230010]: 2025-11-24 09:55:14.936 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.561s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 09:55:15 compute-1 ceph-mon[80009]: pgmap v850: 353 pgs: 353 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 09:55:15 compute-1 ceph-mon[80009]: from='client.? 192.168.122.101:0/69888656' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 09:55:15 compute-1 ceph-mon[80009]: from='client.? 192.168.122.101:0/1018635266' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 09:55:15 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 09:55:15 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:55:15 compute-1 nova_compute[230010]: 2025-11-24 09:55:15.722 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:55:15 compute-1 nova_compute[230010]: 2025-11-24 09:55:15.937 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 09:55:16 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:55:16 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:55:16 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:55:16.011 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:55:16 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:55:16 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:55:16 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:55:16 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:55:16.592 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:55:16 compute-1 nova_compute[230010]: 2025-11-24 09:55:16.764 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 09:55:16 compute-1 nova_compute[230010]: 2025-11-24 09:55:16.765 230014 DEBUG nova.compute.manager [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 24 09:55:16 compute-1 nova_compute[230010]: 2025-11-24 09:55:16.765 230014 DEBUG nova.compute.manager [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 24 09:55:16 compute-1 nova_compute[230010]: 2025-11-24 09:55:16.780 230014 DEBUG nova.compute.manager [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 24 09:55:16 compute-1 nova_compute[230010]: 2025-11-24 09:55:16.781 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 09:55:17 compute-1 nova_compute[230010]: 2025-11-24 09:55:17.119 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:55:17 compute-1 ceph-mon[80009]: pgmap v851: 353 pgs: 353 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 09:55:17 compute-1 ceph-mon[80009]: from='client.? 192.168.122.100:0/2401010604' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 09:55:17 compute-1 ceph-mon[80009]: from='client.? 192.168.122.100:0/2373378575' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 09:55:17 compute-1 ceph-mon[80009]: from='client.? 192.168.122.102:0/2838487968' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 09:55:17 compute-1 nova_compute[230010]: 2025-11-24 09:55:17.764 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 09:55:18 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:55:18 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:55:18 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:55:18.014 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:55:18 compute-1 ceph-mon[80009]: from='client.? 192.168.122.102:0/4207547769' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 09:55:18 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:55:18 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:55:18 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:55:18.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:55:18 compute-1 nova_compute[230010]: 2025-11-24 09:55:18.759 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 09:55:18 compute-1 nova_compute[230010]: 2025-11-24 09:55:18.760 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 09:55:19 compute-1 ceph-mon[80009]: pgmap v852: 353 pgs: 353 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 09:55:19 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:55:20 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:55:20 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:55:20 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:55:20.018 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:55:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:55:20.056 142336 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 09:55:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:55:20.056 142336 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 09:55:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:55:20.056 142336 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 09:55:20 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:55:20 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:55:20 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:55:20.597 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:55:20 compute-1 nova_compute[230010]: 2025-11-24 09:55:20.724 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:55:21 compute-1 nova_compute[230010]: 2025-11-24 09:55:21.207 230014 DEBUG oslo_concurrency.lockutils [None req-86e3bf7b-0e0f-4d34-a098-2c7fd3877eac 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Acquiring lock "8e009e75-a97b-4c5d-a470-5db1137cb407" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 09:55:21 compute-1 nova_compute[230010]: 2025-11-24 09:55:21.208 230014 DEBUG oslo_concurrency.lockutils [None req-86e3bf7b-0e0f-4d34-a098-2c7fd3877eac 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lock "8e009e75-a97b-4c5d-a470-5db1137cb407" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 09:55:21 compute-1 nova_compute[230010]: 2025-11-24 09:55:21.225 230014 DEBUG nova.compute.manager [None req-86e3bf7b-0e0f-4d34-a098-2c7fd3877eac 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 8e009e75-a97b-4c5d-a470-5db1137cb407] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 24 09:55:21 compute-1 ceph-mon[80009]: pgmap v853: 353 pgs: 353 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 09:55:21 compute-1 nova_compute[230010]: 2025-11-24 09:55:21.292 230014 DEBUG oslo_concurrency.lockutils [None req-86e3bf7b-0e0f-4d34-a098-2c7fd3877eac 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 09:55:21 compute-1 nova_compute[230010]: 2025-11-24 09:55:21.292 230014 DEBUG oslo_concurrency.lockutils [None req-86e3bf7b-0e0f-4d34-a098-2c7fd3877eac 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 09:55:21 compute-1 nova_compute[230010]: 2025-11-24 09:55:21.299 230014 DEBUG nova.virt.hardware [None req-86e3bf7b-0e0f-4d34-a098-2c7fd3877eac 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 24 09:55:21 compute-1 nova_compute[230010]: 2025-11-24 09:55:21.299 230014 INFO nova.compute.claims [None req-86e3bf7b-0e0f-4d34-a098-2c7fd3877eac 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 8e009e75-a97b-4c5d-a470-5db1137cb407] Claim successful on node compute-1.ctlplane.example.com
Nov 24 09:55:21 compute-1 nova_compute[230010]: 2025-11-24 09:55:21.378 230014 DEBUG oslo_concurrency.processutils [None req-86e3bf7b-0e0f-4d34-a098-2c7fd3877eac 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 09:55:21 compute-1 nova_compute[230010]: 2025-11-24 09:55:21.798 230014 DEBUG oslo_concurrency.processutils [None req-86e3bf7b-0e0f-4d34-a098-2c7fd3877eac 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.420s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 09:55:21 compute-1 nova_compute[230010]: 2025-11-24 09:55:21.806 230014 DEBUG nova.compute.provider_tree [None req-86e3bf7b-0e0f-4d34-a098-2c7fd3877eac 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Inventory has not changed in ProviderTree for provider: 1b7b0f22-dba8-42a8-9de3-763c9152946e update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 24 09:55:21 compute-1 nova_compute[230010]: 2025-11-24 09:55:21.826 230014 DEBUG nova.scheduler.client.report [None req-86e3bf7b-0e0f-4d34-a098-2c7fd3877eac 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Inventory has not changed for provider 1b7b0f22-dba8-42a8-9de3-763c9152946e based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 24 09:55:21 compute-1 nova_compute[230010]: 2025-11-24 09:55:21.845 230014 DEBUG oslo_concurrency.lockutils [None req-86e3bf7b-0e0f-4d34-a098-2c7fd3877eac 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.553s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 09:55:21 compute-1 nova_compute[230010]: 2025-11-24 09:55:21.847 230014 DEBUG nova.compute.manager [None req-86e3bf7b-0e0f-4d34-a098-2c7fd3877eac 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 8e009e75-a97b-4c5d-a470-5db1137cb407] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 24 09:55:21 compute-1 nova_compute[230010]: 2025-11-24 09:55:21.892 230014 DEBUG nova.compute.manager [None req-86e3bf7b-0e0f-4d34-a098-2c7fd3877eac 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 8e009e75-a97b-4c5d-a470-5db1137cb407] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 24 09:55:21 compute-1 nova_compute[230010]: 2025-11-24 09:55:21.893 230014 DEBUG nova.network.neutron [None req-86e3bf7b-0e0f-4d34-a098-2c7fd3877eac 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 8e009e75-a97b-4c5d-a470-5db1137cb407] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 24 09:55:21 compute-1 nova_compute[230010]: 2025-11-24 09:55:21.911 230014 INFO nova.virt.libvirt.driver [None req-86e3bf7b-0e0f-4d34-a098-2c7fd3877eac 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 8e009e75-a97b-4c5d-a470-5db1137cb407] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 24 09:55:21 compute-1 nova_compute[230010]: 2025-11-24 09:55:21.926 230014 DEBUG nova.compute.manager [None req-86e3bf7b-0e0f-4d34-a098-2c7fd3877eac 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 8e009e75-a97b-4c5d-a470-5db1137cb407] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 24 09:55:22 compute-1 nova_compute[230010]: 2025-11-24 09:55:22.014 230014 DEBUG nova.compute.manager [None req-86e3bf7b-0e0f-4d34-a098-2c7fd3877eac 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 8e009e75-a97b-4c5d-a470-5db1137cb407] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 24 09:55:22 compute-1 nova_compute[230010]: 2025-11-24 09:55:22.016 230014 DEBUG nova.virt.libvirt.driver [None req-86e3bf7b-0e0f-4d34-a098-2c7fd3877eac 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 8e009e75-a97b-4c5d-a470-5db1137cb407] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 24 09:55:22 compute-1 nova_compute[230010]: 2025-11-24 09:55:22.018 230014 INFO nova.virt.libvirt.driver [None req-86e3bf7b-0e0f-4d34-a098-2c7fd3877eac 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 8e009e75-a97b-4c5d-a470-5db1137cb407] Creating image(s)
Nov 24 09:55:22 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:55:22 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.004000095s ======
Nov 24 09:55:22 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:55:22.022 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.004000095s
Nov 24 09:55:22 compute-1 nova_compute[230010]: 2025-11-24 09:55:22.044 230014 DEBUG nova.storage.rbd_utils [None req-86e3bf7b-0e0f-4d34-a098-2c7fd3877eac 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] rbd image 8e009e75-a97b-4c5d-a470-5db1137cb407_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 24 09:55:22 compute-1 nova_compute[230010]: 2025-11-24 09:55:22.067 230014 DEBUG nova.storage.rbd_utils [None req-86e3bf7b-0e0f-4d34-a098-2c7fd3877eac 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] rbd image 8e009e75-a97b-4c5d-a470-5db1137cb407_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 24 09:55:22 compute-1 nova_compute[230010]: 2025-11-24 09:55:22.087 230014 DEBUG nova.storage.rbd_utils [None req-86e3bf7b-0e0f-4d34-a098-2c7fd3877eac 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] rbd image 8e009e75-a97b-4c5d-a470-5db1137cb407_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 24 09:55:22 compute-1 nova_compute[230010]: 2025-11-24 09:55:22.090 230014 DEBUG oslo_concurrency.processutils [None req-86e3bf7b-0e0f-4d34-a098-2c7fd3877eac 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/2ed5c667523487159c4c4503c82babbc95dbae40 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 09:55:22 compute-1 nova_compute[230010]: 2025-11-24 09:55:22.121 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:55:22 compute-1 nova_compute[230010]: 2025-11-24 09:55:22.144 230014 DEBUG oslo_concurrency.processutils [None req-86e3bf7b-0e0f-4d34-a098-2c7fd3877eac 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/2ed5c667523487159c4c4503c82babbc95dbae40 --force-share --output=json" returned: 0 in 0.055s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 09:55:22 compute-1 nova_compute[230010]: 2025-11-24 09:55:22.145 230014 DEBUG oslo_concurrency.lockutils [None req-86e3bf7b-0e0f-4d34-a098-2c7fd3877eac 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Acquiring lock "2ed5c667523487159c4c4503c82babbc95dbae40" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 09:55:22 compute-1 nova_compute[230010]: 2025-11-24 09:55:22.146 230014 DEBUG oslo_concurrency.lockutils [None req-86e3bf7b-0e0f-4d34-a098-2c7fd3877eac 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lock "2ed5c667523487159c4c4503c82babbc95dbae40" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 09:55:22 compute-1 nova_compute[230010]: 2025-11-24 09:55:22.146 230014 DEBUG oslo_concurrency.lockutils [None req-86e3bf7b-0e0f-4d34-a098-2c7fd3877eac 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lock "2ed5c667523487159c4c4503c82babbc95dbae40" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 09:55:22 compute-1 nova_compute[230010]: 2025-11-24 09:55:22.172 230014 DEBUG nova.storage.rbd_utils [None req-86e3bf7b-0e0f-4d34-a098-2c7fd3877eac 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] rbd image 8e009e75-a97b-4c5d-a470-5db1137cb407_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 24 09:55:22 compute-1 nova_compute[230010]: 2025-11-24 09:55:22.176 230014 DEBUG oslo_concurrency.processutils [None req-86e3bf7b-0e0f-4d34-a098-2c7fd3877eac 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/2ed5c667523487159c4c4503c82babbc95dbae40 8e009e75-a97b-4c5d-a470-5db1137cb407_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 09:55:22 compute-1 ceph-mon[80009]: from='client.? 192.168.122.101:0/3417310250' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 09:55:22 compute-1 nova_compute[230010]: 2025-11-24 09:55:22.276 230014 DEBUG nova.policy [None req-86e3bf7b-0e0f-4d34-a098-2c7fd3877eac 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '43f79ff3105e4372a3c095e8057d4f1f', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '94d069fc040647d5a6e54894eec915fe', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 24 09:55:22 compute-1 nova_compute[230010]: 2025-11-24 09:55:22.416 230014 DEBUG oslo_concurrency.processutils [None req-86e3bf7b-0e0f-4d34-a098-2c7fd3877eac 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/2ed5c667523487159c4c4503c82babbc95dbae40 8e009e75-a97b-4c5d-a470-5db1137cb407_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.240s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 09:55:22 compute-1 nova_compute[230010]: 2025-11-24 09:55:22.487 230014 DEBUG nova.storage.rbd_utils [None req-86e3bf7b-0e0f-4d34-a098-2c7fd3877eac 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] resizing rbd image 8e009e75-a97b-4c5d-a470-5db1137cb407_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 24 09:55:22 compute-1 nova_compute[230010]: 2025-11-24 09:55:22.572 230014 DEBUG nova.objects.instance [None req-86e3bf7b-0e0f-4d34-a098-2c7fd3877eac 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lazy-loading 'migration_context' on Instance uuid 8e009e75-a97b-4c5d-a470-5db1137cb407 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 24 09:55:22 compute-1 nova_compute[230010]: 2025-11-24 09:55:22.584 230014 DEBUG nova.virt.libvirt.driver [None req-86e3bf7b-0e0f-4d34-a098-2c7fd3877eac 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 8e009e75-a97b-4c5d-a470-5db1137cb407] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 24 09:55:22 compute-1 nova_compute[230010]: 2025-11-24 09:55:22.584 230014 DEBUG nova.virt.libvirt.driver [None req-86e3bf7b-0e0f-4d34-a098-2c7fd3877eac 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 8e009e75-a97b-4c5d-a470-5db1137cb407] Ensure instance console log exists: /var/lib/nova/instances/8e009e75-a97b-4c5d-a470-5db1137cb407/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 24 09:55:22 compute-1 nova_compute[230010]: 2025-11-24 09:55:22.584 230014 DEBUG oslo_concurrency.lockutils [None req-86e3bf7b-0e0f-4d34-a098-2c7fd3877eac 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 09:55:22 compute-1 nova_compute[230010]: 2025-11-24 09:55:22.585 230014 DEBUG oslo_concurrency.lockutils [None req-86e3bf7b-0e0f-4d34-a098-2c7fd3877eac 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 09:55:22 compute-1 nova_compute[230010]: 2025-11-24 09:55:22.585 230014 DEBUG oslo_concurrency.lockutils [None req-86e3bf7b-0e0f-4d34-a098-2c7fd3877eac 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 09:55:22 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:55:22 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:55:22 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:55:22.601 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:55:22 compute-1 nova_compute[230010]: 2025-11-24 09:55:22.885 230014 DEBUG nova.network.neutron [None req-86e3bf7b-0e0f-4d34-a098-2c7fd3877eac 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 8e009e75-a97b-4c5d-a470-5db1137cb407] Successfully created port: e962e27f-80bf-4103-98ae-d8af84c6fc28 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 24 09:55:23 compute-1 ceph-mon[80009]: pgmap v854: 353 pgs: 353 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 09:55:23 compute-1 nova_compute[230010]: 2025-11-24 09:55:23.735 230014 DEBUG nova.network.neutron [None req-86e3bf7b-0e0f-4d34-a098-2c7fd3877eac 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 8e009e75-a97b-4c5d-a470-5db1137cb407] Successfully updated port: e962e27f-80bf-4103-98ae-d8af84c6fc28 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 24 09:55:23 compute-1 nova_compute[230010]: 2025-11-24 09:55:23.751 230014 DEBUG oslo_concurrency.lockutils [None req-86e3bf7b-0e0f-4d34-a098-2c7fd3877eac 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Acquiring lock "refresh_cache-8e009e75-a97b-4c5d-a470-5db1137cb407" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 24 09:55:23 compute-1 nova_compute[230010]: 2025-11-24 09:55:23.751 230014 DEBUG oslo_concurrency.lockutils [None req-86e3bf7b-0e0f-4d34-a098-2c7fd3877eac 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Acquired lock "refresh_cache-8e009e75-a97b-4c5d-a470-5db1137cb407" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 24 09:55:23 compute-1 nova_compute[230010]: 2025-11-24 09:55:23.751 230014 DEBUG nova.network.neutron [None req-86e3bf7b-0e0f-4d34-a098-2c7fd3877eac 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 8e009e75-a97b-4c5d-a470-5db1137cb407] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 24 09:55:23 compute-1 nova_compute[230010]: 2025-11-24 09:55:23.825 230014 DEBUG nova.compute.manager [req-d343bef7-8df3-4da0-bffb-87574abdc6e6 req-7f09d447-5261-48b3-99dd-53431fd84f40 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 8e009e75-a97b-4c5d-a470-5db1137cb407] Received event network-changed-e962e27f-80bf-4103-98ae-d8af84c6fc28 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 24 09:55:23 compute-1 nova_compute[230010]: 2025-11-24 09:55:23.825 230014 DEBUG nova.compute.manager [req-d343bef7-8df3-4da0-bffb-87574abdc6e6 req-7f09d447-5261-48b3-99dd-53431fd84f40 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 8e009e75-a97b-4c5d-a470-5db1137cb407] Refreshing instance network info cache due to event network-changed-e962e27f-80bf-4103-98ae-d8af84c6fc28. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 24 09:55:23 compute-1 nova_compute[230010]: 2025-11-24 09:55:23.826 230014 DEBUG oslo_concurrency.lockutils [req-d343bef7-8df3-4da0-bffb-87574abdc6e6 req-7f09d447-5261-48b3-99dd-53431fd84f40 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Acquiring lock "refresh_cache-8e009e75-a97b-4c5d-a470-5db1137cb407" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 24 09:55:23 compute-1 nova_compute[230010]: 2025-11-24 09:55:23.882 230014 DEBUG nova.network.neutron [None req-86e3bf7b-0e0f-4d34-a098-2c7fd3877eac 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 8e009e75-a97b-4c5d-a470-5db1137cb407] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 24 09:55:24 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:55:24 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:55:24 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:55:24.028 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:55:24 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:55:24 compute-1 nova_compute[230010]: 2025-11-24 09:55:24.603 230014 DEBUG nova.network.neutron [None req-86e3bf7b-0e0f-4d34-a098-2c7fd3877eac 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 8e009e75-a97b-4c5d-a470-5db1137cb407] Updating instance_info_cache with network_info: [{"id": "e962e27f-80bf-4103-98ae-d8af84c6fc28", "address": "fa:16:3e:a4:f1:71", "network": {"id": "636fec29-e18e-45f1-aabc-369f5fd0d593", "bridge": "br-int", "label": "tempest-network-smoke--778674541", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94d069fc040647d5a6e54894eec915fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape962e27f-80", "ovs_interfaceid": "e962e27f-80bf-4103-98ae-d8af84c6fc28", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 24 09:55:24 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:55:24 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:55:24 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:55:24.603 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:55:24 compute-1 nova_compute[230010]: 2025-11-24 09:55:24.621 230014 DEBUG oslo_concurrency.lockutils [None req-86e3bf7b-0e0f-4d34-a098-2c7fd3877eac 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Releasing lock "refresh_cache-8e009e75-a97b-4c5d-a470-5db1137cb407" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 24 09:55:24 compute-1 nova_compute[230010]: 2025-11-24 09:55:24.622 230014 DEBUG nova.compute.manager [None req-86e3bf7b-0e0f-4d34-a098-2c7fd3877eac 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 8e009e75-a97b-4c5d-a470-5db1137cb407] Instance network_info: |[{"id": "e962e27f-80bf-4103-98ae-d8af84c6fc28", "address": "fa:16:3e:a4:f1:71", "network": {"id": "636fec29-e18e-45f1-aabc-369f5fd0d593", "bridge": "br-int", "label": "tempest-network-smoke--778674541", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94d069fc040647d5a6e54894eec915fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape962e27f-80", "ovs_interfaceid": "e962e27f-80bf-4103-98ae-d8af84c6fc28", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 24 09:55:24 compute-1 nova_compute[230010]: 2025-11-24 09:55:24.622 230014 DEBUG oslo_concurrency.lockutils [req-d343bef7-8df3-4da0-bffb-87574abdc6e6 req-7f09d447-5261-48b3-99dd-53431fd84f40 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Acquired lock "refresh_cache-8e009e75-a97b-4c5d-a470-5db1137cb407" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 24 09:55:24 compute-1 nova_compute[230010]: 2025-11-24 09:55:24.622 230014 DEBUG nova.network.neutron [req-d343bef7-8df3-4da0-bffb-87574abdc6e6 req-7f09d447-5261-48b3-99dd-53431fd84f40 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 8e009e75-a97b-4c5d-a470-5db1137cb407] Refreshing network info cache for port e962e27f-80bf-4103-98ae-d8af84c6fc28 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 24 09:55:24 compute-1 nova_compute[230010]: 2025-11-24 09:55:24.625 230014 DEBUG nova.virt.libvirt.driver [None req-86e3bf7b-0e0f-4d34-a098-2c7fd3877eac 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 8e009e75-a97b-4c5d-a470-5db1137cb407] Start _get_guest_xml network_info=[{"id": "e962e27f-80bf-4103-98ae-d8af84c6fc28", "address": "fa:16:3e:a4:f1:71", "network": {"id": "636fec29-e18e-45f1-aabc-369f5fd0d593", "bridge": "br-int", "label": "tempest-network-smoke--778674541", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94d069fc040647d5a6e54894eec915fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape962e27f-80", "ovs_interfaceid": "e962e27f-80bf-4103-98ae-d8af84c6fc28", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-24T09:52:37Z,direct_url=<?>,disk_format='qcow2',id=6ef14bdf-4f04-4400-8040-4409d9d5271e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='cf636babb68a4ebe9bf137d3fe0e4c0c',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-24T09:52:41Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'encryption_options': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'boot_index': 0, 'size': 0, 'disk_bus': 'virtio', 'encrypted': False, 'encryption_secret_uuid': None, 'image_id': '6ef14bdf-4f04-4400-8040-4409d9d5271e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 24 09:55:24 compute-1 nova_compute[230010]: 2025-11-24 09:55:24.629 230014 WARNING nova.virt.libvirt.driver [None req-86e3bf7b-0e0f-4d34-a098-2c7fd3877eac 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 24 09:55:24 compute-1 nova_compute[230010]: 2025-11-24 09:55:24.634 230014 DEBUG nova.virt.libvirt.host [None req-86e3bf7b-0e0f-4d34-a098-2c7fd3877eac 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 24 09:55:24 compute-1 nova_compute[230010]: 2025-11-24 09:55:24.635 230014 DEBUG nova.virt.libvirt.host [None req-86e3bf7b-0e0f-4d34-a098-2c7fd3877eac 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 24 09:55:24 compute-1 nova_compute[230010]: 2025-11-24 09:55:24.643 230014 DEBUG nova.virt.libvirt.host [None req-86e3bf7b-0e0f-4d34-a098-2c7fd3877eac 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 24 09:55:24 compute-1 nova_compute[230010]: 2025-11-24 09:55:24.644 230014 DEBUG nova.virt.libvirt.host [None req-86e3bf7b-0e0f-4d34-a098-2c7fd3877eac 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 24 09:55:24 compute-1 nova_compute[230010]: 2025-11-24 09:55:24.644 230014 DEBUG nova.virt.libvirt.driver [None req-86e3bf7b-0e0f-4d34-a098-2c7fd3877eac 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 24 09:55:24 compute-1 nova_compute[230010]: 2025-11-24 09:55:24.645 230014 DEBUG nova.virt.hardware [None req-86e3bf7b-0e0f-4d34-a098-2c7fd3877eac 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-24T09:52:36Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='4a5d03ad-925b-45f1-89bd-f1325f9f3292',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-24T09:52:37Z,direct_url=<?>,disk_format='qcow2',id=6ef14bdf-4f04-4400-8040-4409d9d5271e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='cf636babb68a4ebe9bf137d3fe0e4c0c',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-24T09:52:41Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 24 09:55:24 compute-1 nova_compute[230010]: 2025-11-24 09:55:24.645 230014 DEBUG nova.virt.hardware [None req-86e3bf7b-0e0f-4d34-a098-2c7fd3877eac 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 24 09:55:24 compute-1 nova_compute[230010]: 2025-11-24 09:55:24.646 230014 DEBUG nova.virt.hardware [None req-86e3bf7b-0e0f-4d34-a098-2c7fd3877eac 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 24 09:55:24 compute-1 nova_compute[230010]: 2025-11-24 09:55:24.646 230014 DEBUG nova.virt.hardware [None req-86e3bf7b-0e0f-4d34-a098-2c7fd3877eac 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 24 09:55:24 compute-1 nova_compute[230010]: 2025-11-24 09:55:24.646 230014 DEBUG nova.virt.hardware [None req-86e3bf7b-0e0f-4d34-a098-2c7fd3877eac 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 24 09:55:24 compute-1 nova_compute[230010]: 2025-11-24 09:55:24.647 230014 DEBUG nova.virt.hardware [None req-86e3bf7b-0e0f-4d34-a098-2c7fd3877eac 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 24 09:55:24 compute-1 nova_compute[230010]: 2025-11-24 09:55:24.647 230014 DEBUG nova.virt.hardware [None req-86e3bf7b-0e0f-4d34-a098-2c7fd3877eac 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 24 09:55:24 compute-1 nova_compute[230010]: 2025-11-24 09:55:24.647 230014 DEBUG nova.virt.hardware [None req-86e3bf7b-0e0f-4d34-a098-2c7fd3877eac 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 24 09:55:24 compute-1 nova_compute[230010]: 2025-11-24 09:55:24.648 230014 DEBUG nova.virt.hardware [None req-86e3bf7b-0e0f-4d34-a098-2c7fd3877eac 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 24 09:55:24 compute-1 nova_compute[230010]: 2025-11-24 09:55:24.648 230014 DEBUG nova.virt.hardware [None req-86e3bf7b-0e0f-4d34-a098-2c7fd3877eac 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 24 09:55:24 compute-1 nova_compute[230010]: 2025-11-24 09:55:24.648 230014 DEBUG nova.virt.hardware [None req-86e3bf7b-0e0f-4d34-a098-2c7fd3877eac 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 24 09:55:24 compute-1 nova_compute[230010]: 2025-11-24 09:55:24.652 230014 DEBUG oslo_concurrency.processutils [None req-86e3bf7b-0e0f-4d34-a098-2c7fd3877eac 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 09:55:25 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Nov 24 09:55:25 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3838070251' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 24 09:55:25 compute-1 nova_compute[230010]: 2025-11-24 09:55:25.174 230014 DEBUG oslo_concurrency.processutils [None req-86e3bf7b-0e0f-4d34-a098-2c7fd3877eac 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.521s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 09:55:25 compute-1 nova_compute[230010]: 2025-11-24 09:55:25.204 230014 DEBUG nova.storage.rbd_utils [None req-86e3bf7b-0e0f-4d34-a098-2c7fd3877eac 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] rbd image 8e009e75-a97b-4c5d-a470-5db1137cb407_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 24 09:55:25 compute-1 nova_compute[230010]: 2025-11-24 09:55:25.208 230014 DEBUG oslo_concurrency.processutils [None req-86e3bf7b-0e0f-4d34-a098-2c7fd3877eac 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 09:55:25 compute-1 ceph-mon[80009]: pgmap v855: 353 pgs: 353 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 09:55:25 compute-1 ceph-mon[80009]: from='client.? 192.168.122.101:0/3838070251' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 24 09:55:25 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Nov 24 09:55:25 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/861026441' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 24 09:55:25 compute-1 nova_compute[230010]: 2025-11-24 09:55:25.659 230014 DEBUG oslo_concurrency.processutils [None req-86e3bf7b-0e0f-4d34-a098-2c7fd3877eac 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.452s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 09:55:25 compute-1 nova_compute[230010]: 2025-11-24 09:55:25.661 230014 DEBUG nova.virt.libvirt.vif [None req-86e3bf7b-0e0f-4d34-a098-2c7fd3877eac 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-24T09:55:20Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1469091475',display_name='tempest-TestNetworkBasicOps-server-1469091475',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1469091475',id=3,image_ref='6ef14bdf-4f04-4400-8040-4409d9d5271e',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJZZQMjyTSwgkfidZfPhBPgBcV62YzWExHHXsl1BnsLfJjAX1c531QA8puLkgpD93eEa7lPae/Gh1kFnVkWZAW6FTPgZg7BzeD7RovkQcC7HReAVJUg962qa1kvY0rkgvg==',key_name='tempest-TestNetworkBasicOps-853741544',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='94d069fc040647d5a6e54894eec915fe',ramdisk_id='',reservation_id='r-mfq37y5p',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6ef14bdf-4f04-4400-8040-4409d9d5271e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1844071378',owner_user_name='tempest-TestNetworkBasicOps-1844071378-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-24T09:55:21Z,user_data=None,user_id='43f79ff3105e4372a3c095e8057d4f1f',uuid=8e009e75-a97b-4c5d-a470-5db1137cb407,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e962e27f-80bf-4103-98ae-d8af84c6fc28", "address": "fa:16:3e:a4:f1:71", "network": {"id": "636fec29-e18e-45f1-aabc-369f5fd0d593", "bridge": "br-int", "label": "tempest-network-smoke--778674541", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94d069fc040647d5a6e54894eec915fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape962e27f-80", "ovs_interfaceid": "e962e27f-80bf-4103-98ae-d8af84c6fc28", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 24 09:55:25 compute-1 nova_compute[230010]: 2025-11-24 09:55:25.661 230014 DEBUG nova.network.os_vif_util [None req-86e3bf7b-0e0f-4d34-a098-2c7fd3877eac 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Converting VIF {"id": "e962e27f-80bf-4103-98ae-d8af84c6fc28", "address": "fa:16:3e:a4:f1:71", "network": {"id": "636fec29-e18e-45f1-aabc-369f5fd0d593", "bridge": "br-int", "label": "tempest-network-smoke--778674541", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94d069fc040647d5a6e54894eec915fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape962e27f-80", "ovs_interfaceid": "e962e27f-80bf-4103-98ae-d8af84c6fc28", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 24 09:55:25 compute-1 nova_compute[230010]: 2025-11-24 09:55:25.662 230014 DEBUG nova.network.os_vif_util [None req-86e3bf7b-0e0f-4d34-a098-2c7fd3877eac 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a4:f1:71,bridge_name='br-int',has_traffic_filtering=True,id=e962e27f-80bf-4103-98ae-d8af84c6fc28,network=Network(636fec29-e18e-45f1-aabc-369f5fd0d593),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape962e27f-80') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 24 09:55:25 compute-1 nova_compute[230010]: 2025-11-24 09:55:25.663 230014 DEBUG nova.objects.instance [None req-86e3bf7b-0e0f-4d34-a098-2c7fd3877eac 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lazy-loading 'pci_devices' on Instance uuid 8e009e75-a97b-4c5d-a470-5db1137cb407 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 24 09:55:25 compute-1 nova_compute[230010]: 2025-11-24 09:55:25.679 230014 DEBUG nova.virt.libvirt.driver [None req-86e3bf7b-0e0f-4d34-a098-2c7fd3877eac 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 8e009e75-a97b-4c5d-a470-5db1137cb407] End _get_guest_xml xml=<domain type="kvm">
Nov 24 09:55:25 compute-1 nova_compute[230010]:   <uuid>8e009e75-a97b-4c5d-a470-5db1137cb407</uuid>
Nov 24 09:55:25 compute-1 nova_compute[230010]:   <name>instance-00000003</name>
Nov 24 09:55:25 compute-1 nova_compute[230010]:   <memory>131072</memory>
Nov 24 09:55:25 compute-1 nova_compute[230010]:   <vcpu>1</vcpu>
Nov 24 09:55:25 compute-1 nova_compute[230010]:   <metadata>
Nov 24 09:55:25 compute-1 nova_compute[230010]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 24 09:55:25 compute-1 nova_compute[230010]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 24 09:55:25 compute-1 nova_compute[230010]:       <nova:name>tempest-TestNetworkBasicOps-server-1469091475</nova:name>
Nov 24 09:55:25 compute-1 nova_compute[230010]:       <nova:creationTime>2025-11-24 09:55:24</nova:creationTime>
Nov 24 09:55:25 compute-1 nova_compute[230010]:       <nova:flavor name="m1.nano">
Nov 24 09:55:25 compute-1 nova_compute[230010]:         <nova:memory>128</nova:memory>
Nov 24 09:55:25 compute-1 nova_compute[230010]:         <nova:disk>1</nova:disk>
Nov 24 09:55:25 compute-1 nova_compute[230010]:         <nova:swap>0</nova:swap>
Nov 24 09:55:25 compute-1 nova_compute[230010]:         <nova:ephemeral>0</nova:ephemeral>
Nov 24 09:55:25 compute-1 nova_compute[230010]:         <nova:vcpus>1</nova:vcpus>
Nov 24 09:55:25 compute-1 nova_compute[230010]:       </nova:flavor>
Nov 24 09:55:25 compute-1 nova_compute[230010]:       <nova:owner>
Nov 24 09:55:25 compute-1 nova_compute[230010]:         <nova:user uuid="43f79ff3105e4372a3c095e8057d4f1f">tempest-TestNetworkBasicOps-1844071378-project-member</nova:user>
Nov 24 09:55:25 compute-1 nova_compute[230010]:         <nova:project uuid="94d069fc040647d5a6e54894eec915fe">tempest-TestNetworkBasicOps-1844071378</nova:project>
Nov 24 09:55:25 compute-1 nova_compute[230010]:       </nova:owner>
Nov 24 09:55:25 compute-1 nova_compute[230010]:       <nova:root type="image" uuid="6ef14bdf-4f04-4400-8040-4409d9d5271e"/>
Nov 24 09:55:25 compute-1 nova_compute[230010]:       <nova:ports>
Nov 24 09:55:25 compute-1 nova_compute[230010]:         <nova:port uuid="e962e27f-80bf-4103-98ae-d8af84c6fc28">
Nov 24 09:55:25 compute-1 nova_compute[230010]:           <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Nov 24 09:55:25 compute-1 nova_compute[230010]:         </nova:port>
Nov 24 09:55:25 compute-1 nova_compute[230010]:       </nova:ports>
Nov 24 09:55:25 compute-1 nova_compute[230010]:     </nova:instance>
Nov 24 09:55:25 compute-1 nova_compute[230010]:   </metadata>
Nov 24 09:55:25 compute-1 nova_compute[230010]:   <sysinfo type="smbios">
Nov 24 09:55:25 compute-1 nova_compute[230010]:     <system>
Nov 24 09:55:25 compute-1 nova_compute[230010]:       <entry name="manufacturer">RDO</entry>
Nov 24 09:55:25 compute-1 nova_compute[230010]:       <entry name="product">OpenStack Compute</entry>
Nov 24 09:55:25 compute-1 nova_compute[230010]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 24 09:55:25 compute-1 nova_compute[230010]:       <entry name="serial">8e009e75-a97b-4c5d-a470-5db1137cb407</entry>
Nov 24 09:55:25 compute-1 nova_compute[230010]:       <entry name="uuid">8e009e75-a97b-4c5d-a470-5db1137cb407</entry>
Nov 24 09:55:25 compute-1 nova_compute[230010]:       <entry name="family">Virtual Machine</entry>
Nov 24 09:55:25 compute-1 nova_compute[230010]:     </system>
Nov 24 09:55:25 compute-1 nova_compute[230010]:   </sysinfo>
Nov 24 09:55:25 compute-1 nova_compute[230010]:   <os>
Nov 24 09:55:25 compute-1 nova_compute[230010]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 24 09:55:25 compute-1 nova_compute[230010]:     <boot dev="hd"/>
Nov 24 09:55:25 compute-1 nova_compute[230010]:     <smbios mode="sysinfo"/>
Nov 24 09:55:25 compute-1 nova_compute[230010]:   </os>
Nov 24 09:55:25 compute-1 nova_compute[230010]:   <features>
Nov 24 09:55:25 compute-1 nova_compute[230010]:     <acpi/>
Nov 24 09:55:25 compute-1 nova_compute[230010]:     <apic/>
Nov 24 09:55:25 compute-1 nova_compute[230010]:     <vmcoreinfo/>
Nov 24 09:55:25 compute-1 nova_compute[230010]:   </features>
Nov 24 09:55:25 compute-1 nova_compute[230010]:   <clock offset="utc">
Nov 24 09:55:25 compute-1 nova_compute[230010]:     <timer name="pit" tickpolicy="delay"/>
Nov 24 09:55:25 compute-1 nova_compute[230010]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 24 09:55:25 compute-1 nova_compute[230010]:     <timer name="hpet" present="no"/>
Nov 24 09:55:25 compute-1 nova_compute[230010]:   </clock>
Nov 24 09:55:25 compute-1 nova_compute[230010]:   <cpu mode="host-model" match="exact">
Nov 24 09:55:25 compute-1 nova_compute[230010]:     <topology sockets="1" cores="1" threads="1"/>
Nov 24 09:55:25 compute-1 nova_compute[230010]:   </cpu>
Nov 24 09:55:25 compute-1 nova_compute[230010]:   <devices>
Nov 24 09:55:25 compute-1 nova_compute[230010]:     <disk type="network" device="disk">
Nov 24 09:55:25 compute-1 nova_compute[230010]:       <driver type="raw" cache="none"/>
Nov 24 09:55:25 compute-1 nova_compute[230010]:       <source protocol="rbd" name="vms/8e009e75-a97b-4c5d-a470-5db1137cb407_disk">
Nov 24 09:55:25 compute-1 nova_compute[230010]:         <host name="192.168.122.100" port="6789"/>
Nov 24 09:55:25 compute-1 nova_compute[230010]:         <host name="192.168.122.102" port="6789"/>
Nov 24 09:55:25 compute-1 nova_compute[230010]:         <host name="192.168.122.101" port="6789"/>
Nov 24 09:55:25 compute-1 nova_compute[230010]:       </source>
Nov 24 09:55:25 compute-1 nova_compute[230010]:       <auth username="openstack">
Nov 24 09:55:25 compute-1 nova_compute[230010]:         <secret type="ceph" uuid="84a084c3-61a7-5de7-8207-1f88efa59a64"/>
Nov 24 09:55:25 compute-1 nova_compute[230010]:       </auth>
Nov 24 09:55:25 compute-1 nova_compute[230010]:       <target dev="vda" bus="virtio"/>
Nov 24 09:55:25 compute-1 nova_compute[230010]:     </disk>
Nov 24 09:55:25 compute-1 nova_compute[230010]:     <disk type="network" device="cdrom">
Nov 24 09:55:25 compute-1 nova_compute[230010]:       <driver type="raw" cache="none"/>
Nov 24 09:55:25 compute-1 nova_compute[230010]:       <source protocol="rbd" name="vms/8e009e75-a97b-4c5d-a470-5db1137cb407_disk.config">
Nov 24 09:55:25 compute-1 nova_compute[230010]:         <host name="192.168.122.100" port="6789"/>
Nov 24 09:55:25 compute-1 nova_compute[230010]:         <host name="192.168.122.102" port="6789"/>
Nov 24 09:55:25 compute-1 nova_compute[230010]:         <host name="192.168.122.101" port="6789"/>
Nov 24 09:55:25 compute-1 nova_compute[230010]:       </source>
Nov 24 09:55:25 compute-1 nova_compute[230010]:       <auth username="openstack">
Nov 24 09:55:25 compute-1 nova_compute[230010]:         <secret type="ceph" uuid="84a084c3-61a7-5de7-8207-1f88efa59a64"/>
Nov 24 09:55:25 compute-1 nova_compute[230010]:       </auth>
Nov 24 09:55:25 compute-1 nova_compute[230010]:       <target dev="sda" bus="sata"/>
Nov 24 09:55:25 compute-1 nova_compute[230010]:     </disk>
Nov 24 09:55:25 compute-1 nova_compute[230010]:     <interface type="ethernet">
Nov 24 09:55:25 compute-1 nova_compute[230010]:       <mac address="fa:16:3e:a4:f1:71"/>
Nov 24 09:55:25 compute-1 nova_compute[230010]:       <model type="virtio"/>
Nov 24 09:55:25 compute-1 nova_compute[230010]:       <driver name="vhost" rx_queue_size="512"/>
Nov 24 09:55:25 compute-1 nova_compute[230010]:       <mtu size="1442"/>
Nov 24 09:55:25 compute-1 nova_compute[230010]:       <target dev="tape962e27f-80"/>
Nov 24 09:55:25 compute-1 nova_compute[230010]:     </interface>
Nov 24 09:55:25 compute-1 nova_compute[230010]:     <serial type="pty">
Nov 24 09:55:25 compute-1 nova_compute[230010]:       <log file="/var/lib/nova/instances/8e009e75-a97b-4c5d-a470-5db1137cb407/console.log" append="off"/>
Nov 24 09:55:25 compute-1 nova_compute[230010]:     </serial>
Nov 24 09:55:25 compute-1 nova_compute[230010]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 24 09:55:25 compute-1 nova_compute[230010]:     <video>
Nov 24 09:55:25 compute-1 nova_compute[230010]:       <model type="virtio"/>
Nov 24 09:55:25 compute-1 nova_compute[230010]:     </video>
Nov 24 09:55:25 compute-1 nova_compute[230010]:     <input type="tablet" bus="usb"/>
Nov 24 09:55:25 compute-1 nova_compute[230010]:     <rng model="virtio">
Nov 24 09:55:25 compute-1 nova_compute[230010]:       <backend model="random">/dev/urandom</backend>
Nov 24 09:55:25 compute-1 nova_compute[230010]:     </rng>
Nov 24 09:55:25 compute-1 nova_compute[230010]:     <controller type="pci" model="pcie-root"/>
Nov 24 09:55:25 compute-1 nova_compute[230010]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 09:55:25 compute-1 nova_compute[230010]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 09:55:25 compute-1 nova_compute[230010]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 09:55:25 compute-1 nova_compute[230010]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 09:55:25 compute-1 nova_compute[230010]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 09:55:25 compute-1 nova_compute[230010]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 09:55:25 compute-1 nova_compute[230010]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 09:55:25 compute-1 nova_compute[230010]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 09:55:25 compute-1 nova_compute[230010]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 09:55:25 compute-1 nova_compute[230010]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 09:55:25 compute-1 nova_compute[230010]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 09:55:25 compute-1 nova_compute[230010]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 09:55:25 compute-1 nova_compute[230010]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 09:55:25 compute-1 nova_compute[230010]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 09:55:25 compute-1 nova_compute[230010]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 09:55:25 compute-1 nova_compute[230010]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 09:55:25 compute-1 nova_compute[230010]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 09:55:25 compute-1 nova_compute[230010]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 09:55:25 compute-1 nova_compute[230010]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 09:55:25 compute-1 nova_compute[230010]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 09:55:25 compute-1 nova_compute[230010]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 09:55:25 compute-1 nova_compute[230010]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 09:55:25 compute-1 nova_compute[230010]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 09:55:25 compute-1 nova_compute[230010]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 09:55:25 compute-1 nova_compute[230010]:     <controller type="usb" index="0"/>
Nov 24 09:55:25 compute-1 nova_compute[230010]:     <memballoon model="virtio">
Nov 24 09:55:25 compute-1 nova_compute[230010]:       <stats period="10"/>
Nov 24 09:55:25 compute-1 nova_compute[230010]:     </memballoon>
Nov 24 09:55:25 compute-1 nova_compute[230010]:   </devices>
Nov 24 09:55:25 compute-1 nova_compute[230010]: </domain>
Nov 24 09:55:25 compute-1 nova_compute[230010]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 24 09:55:25 compute-1 nova_compute[230010]: 2025-11-24 09:55:25.680 230014 DEBUG nova.compute.manager [None req-86e3bf7b-0e0f-4d34-a098-2c7fd3877eac 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 8e009e75-a97b-4c5d-a470-5db1137cb407] Preparing to wait for external event network-vif-plugged-e962e27f-80bf-4103-98ae-d8af84c6fc28 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 24 09:55:25 compute-1 nova_compute[230010]: 2025-11-24 09:55:25.680 230014 DEBUG oslo_concurrency.lockutils [None req-86e3bf7b-0e0f-4d34-a098-2c7fd3877eac 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Acquiring lock "8e009e75-a97b-4c5d-a470-5db1137cb407-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 09:55:25 compute-1 nova_compute[230010]: 2025-11-24 09:55:25.681 230014 DEBUG oslo_concurrency.lockutils [None req-86e3bf7b-0e0f-4d34-a098-2c7fd3877eac 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lock "8e009e75-a97b-4c5d-a470-5db1137cb407-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 09:55:25 compute-1 nova_compute[230010]: 2025-11-24 09:55:25.681 230014 DEBUG oslo_concurrency.lockutils [None req-86e3bf7b-0e0f-4d34-a098-2c7fd3877eac 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lock "8e009e75-a97b-4c5d-a470-5db1137cb407-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 09:55:25 compute-1 nova_compute[230010]: 2025-11-24 09:55:25.681 230014 DEBUG nova.virt.libvirt.vif [None req-86e3bf7b-0e0f-4d34-a098-2c7fd3877eac 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-24T09:55:20Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1469091475',display_name='tempest-TestNetworkBasicOps-server-1469091475',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1469091475',id=3,image_ref='6ef14bdf-4f04-4400-8040-4409d9d5271e',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJZZQMjyTSwgkfidZfPhBPgBcV62YzWExHHXsl1BnsLfJjAX1c531QA8puLkgpD93eEa7lPae/Gh1kFnVkWZAW6FTPgZg7BzeD7RovkQcC7HReAVJUg962qa1kvY0rkgvg==',key_name='tempest-TestNetworkBasicOps-853741544',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='94d069fc040647d5a6e54894eec915fe',ramdisk_id='',reservation_id='r-mfq37y5p',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6ef14bdf-4f04-4400-8040-4409d9d5271e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1844071378',owner_user_name='tempest-TestNetworkBasicOps-1844071378-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-24T09:55:21Z,user_data=None,user_id='43f79ff3105e4372a3c095e8057d4f1f',uuid=8e009e75-a97b-4c5d-a470-5db1137cb407,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e962e27f-80bf-4103-98ae-d8af84c6fc28", "address": "fa:16:3e:a4:f1:71", "network": {"id": "636fec29-e18e-45f1-aabc-369f5fd0d593", "bridge": "br-int", "label": "tempest-network-smoke--778674541", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94d069fc040647d5a6e54894eec915fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape962e27f-80", "ovs_interfaceid": "e962e27f-80bf-4103-98ae-d8af84c6fc28", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 24 09:55:25 compute-1 nova_compute[230010]: 2025-11-24 09:55:25.682 230014 DEBUG nova.network.os_vif_util [None req-86e3bf7b-0e0f-4d34-a098-2c7fd3877eac 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Converting VIF {"id": "e962e27f-80bf-4103-98ae-d8af84c6fc28", "address": "fa:16:3e:a4:f1:71", "network": {"id": "636fec29-e18e-45f1-aabc-369f5fd0d593", "bridge": "br-int", "label": "tempest-network-smoke--778674541", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94d069fc040647d5a6e54894eec915fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape962e27f-80", "ovs_interfaceid": "e962e27f-80bf-4103-98ae-d8af84c6fc28", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 24 09:55:25 compute-1 nova_compute[230010]: 2025-11-24 09:55:25.682 230014 DEBUG nova.network.os_vif_util [None req-86e3bf7b-0e0f-4d34-a098-2c7fd3877eac 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a4:f1:71,bridge_name='br-int',has_traffic_filtering=True,id=e962e27f-80bf-4103-98ae-d8af84c6fc28,network=Network(636fec29-e18e-45f1-aabc-369f5fd0d593),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape962e27f-80') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 24 09:55:25 compute-1 nova_compute[230010]: 2025-11-24 09:55:25.683 230014 DEBUG os_vif [None req-86e3bf7b-0e0f-4d34-a098-2c7fd3877eac 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a4:f1:71,bridge_name='br-int',has_traffic_filtering=True,id=e962e27f-80bf-4103-98ae-d8af84c6fc28,network=Network(636fec29-e18e-45f1-aabc-369f5fd0d593),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape962e27f-80') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 24 09:55:25 compute-1 nova_compute[230010]: 2025-11-24 09:55:25.683 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:55:25 compute-1 nova_compute[230010]: 2025-11-24 09:55:25.683 230014 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 24 09:55:25 compute-1 nova_compute[230010]: 2025-11-24 09:55:25.684 230014 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 24 09:55:25 compute-1 nova_compute[230010]: 2025-11-24 09:55:25.687 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:55:25 compute-1 nova_compute[230010]: 2025-11-24 09:55:25.687 230014 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape962e27f-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 24 09:55:25 compute-1 nova_compute[230010]: 2025-11-24 09:55:25.688 230014 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tape962e27f-80, col_values=(('external_ids', {'iface-id': 'e962e27f-80bf-4103-98ae-d8af84c6fc28', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:a4:f1:71', 'vm-uuid': '8e009e75-a97b-4c5d-a470-5db1137cb407'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
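[annotation] The plug itself is a pair of idempotent OVSDB transactions, visible above as AddBridgeCommand (ensure br-int exists; it already did, hence "Transaction caused no change"), AddPortCommand (attach the tap device), and DbSetCommand (stamp the Interface row's external_ids so ovn-controller can match iface-id against the Neutron port and claim it). A standalone sketch of the same sequence with ovsdbapp; the unix socket path is the usual OVS default but an assumption here, and error handling is omitted:

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    # Assumption: local ovsdb-server listening on the default socket.
    idl = connection.OvsdbIdl.from_server(
        "unix:/var/run/openvswitch/db.sock", "Open_vSwitch")
    api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

    with api.transaction(check_error=True) as txn:
        txn.add(api.add_br("br-int", may_exist=True, datapath_type="system"))
        txn.add(api.add_port("br-int", "tape962e27f-80", may_exist=True))
        txn.add(api.db_set(
            "Interface", "tape962e27f-80",
            ("external_ids", {
                "iface-id": "e962e27f-80bf-4103-98ae-d8af84c6fc28",
                "iface-status": "active",
                "attached-mac": "fa:16:3e:a4:f1:71",
                "vm-uuid": "8e009e75-a97b-4c5d-a470-5db1137cb407"})))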
Nov 24 09:55:25 compute-1 NetworkManager[48870]: <info>  [1763978125.7128] manager: (tape962e27f-80): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/31)
Nov 24 09:55:25 compute-1 nova_compute[230010]: 2025-11-24 09:55:25.712 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:55:25 compute-1 nova_compute[230010]: 2025-11-24 09:55:25.716 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 24 09:55:25 compute-1 nova_compute[230010]: 2025-11-24 09:55:25.721 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:55:25 compute-1 nova_compute[230010]: 2025-11-24 09:55:25.722 230014 INFO os_vif [None req-86e3bf7b-0e0f-4d34-a098-2c7fd3877eac 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a4:f1:71,bridge_name='br-int',has_traffic_filtering=True,id=e962e27f-80bf-4103-98ae-d8af84c6fc28,network=Network(636fec29-e18e-45f1-aabc-369f5fd0d593),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape962e27f-80')
Nov 24 09:55:25 compute-1 nova_compute[230010]: 2025-11-24 09:55:25.724 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:55:25 compute-1 nova_compute[230010]: 2025-11-24 09:55:25.760 230014 DEBUG nova.virt.libvirt.driver [None req-86e3bf7b-0e0f-4d34-a098-2c7fd3877eac 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 24 09:55:25 compute-1 nova_compute[230010]: 2025-11-24 09:55:25.761 230014 DEBUG nova.virt.libvirt.driver [None req-86e3bf7b-0e0f-4d34-a098-2c7fd3877eac 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 24 09:55:25 compute-1 nova_compute[230010]: 2025-11-24 09:55:25.761 230014 DEBUG nova.virt.libvirt.driver [None req-86e3bf7b-0e0f-4d34-a098-2c7fd3877eac 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] No VIF found with MAC fa:16:3e:a4:f1:71, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 24 09:55:25 compute-1 nova_compute[230010]: 2025-11-24 09:55:25.761 230014 INFO nova.virt.libvirt.driver [None req-86e3bf7b-0e0f-4d34-a098-2c7fd3877eac 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 8e009e75-a97b-4c5d-a470-5db1137cb407] Using config drive
Nov 24 09:55:25 compute-1 nova_compute[230010]: 2025-11-24 09:55:25.788 230014 DEBUG nova.storage.rbd_utils [None req-86e3bf7b-0e0f-4d34-a098-2c7fd3877eac 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] rbd image 8e009e75-a97b-4c5d-a470-5db1137cb407_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 24 09:55:26 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:55:26 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:55:26 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:55:26.031 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:55:26 compute-1 nova_compute[230010]: 2025-11-24 09:55:26.081 230014 DEBUG nova.network.neutron [req-d343bef7-8df3-4da0-bffb-87574abdc6e6 req-7f09d447-5261-48b3-99dd-53431fd84f40 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 8e009e75-a97b-4c5d-a470-5db1137cb407] Updated VIF entry in instance network info cache for port e962e27f-80bf-4103-98ae-d8af84c6fc28. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 24 09:55:26 compute-1 nova_compute[230010]: 2025-11-24 09:55:26.082 230014 DEBUG nova.network.neutron [req-d343bef7-8df3-4da0-bffb-87574abdc6e6 req-7f09d447-5261-48b3-99dd-53431fd84f40 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 8e009e75-a97b-4c5d-a470-5db1137cb407] Updating instance_info_cache with network_info: [{"id": "e962e27f-80bf-4103-98ae-d8af84c6fc28", "address": "fa:16:3e:a4:f1:71", "network": {"id": "636fec29-e18e-45f1-aabc-369f5fd0d593", "bridge": "br-int", "label": "tempest-network-smoke--778674541", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94d069fc040647d5a6e54894eec915fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape962e27f-80", "ovs_interfaceid": "e962e27f-80bf-4103-98ae-d8af84c6fc28", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 24 09:55:26 compute-1 nova_compute[230010]: 2025-11-24 09:55:26.098 230014 DEBUG oslo_concurrency.lockutils [req-d343bef7-8df3-4da0-bffb-87574abdc6e6 req-7f09d447-5261-48b3-99dd-53431fd84f40 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Releasing lock "refresh_cache-8e009e75-a97b-4c5d-a470-5db1137cb407" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 24 09:55:26 compute-1 nova_compute[230010]: 2025-11-24 09:55:26.262 230014 INFO nova.virt.libvirt.driver [None req-86e3bf7b-0e0f-4d34-a098-2c7fd3877eac 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 8e009e75-a97b-4c5d-a470-5db1137cb407] Creating config drive at /var/lib/nova/instances/8e009e75-a97b-4c5d-a470-5db1137cb407/disk.config
Nov 24 09:55:26 compute-1 nova_compute[230010]: 2025-11-24 09:55:26.267 230014 DEBUG oslo_concurrency.processutils [None req-86e3bf7b-0e0f-4d34-a098-2c7fd3877eac 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/8e009e75-a97b-4c5d-a470-5db1137cb407/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp8c178wmp execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 09:55:26 compute-1 sudo[235704]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 09:55:26 compute-1 sudo[235704]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:55:26 compute-1 sudo[235704]: pam_unix(sudo:session): session closed for user root
Nov 24 09:55:26 compute-1 ceph-mon[80009]: from='client.? 192.168.122.101:0/861026441' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 24 09:55:26 compute-1 nova_compute[230010]: 2025-11-24 09:55:26.395 230014 DEBUG oslo_concurrency.processutils [None req-86e3bf7b-0e0f-4d34-a098-2c7fd3877eac 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/8e009e75-a97b-4c5d-a470-5db1137cb407/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp8c178wmp" returned: 0 in 0.128s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 09:55:26 compute-1 nova_compute[230010]: 2025-11-24 09:55:26.429 230014 DEBUG nova.storage.rbd_utils [None req-86e3bf7b-0e0f-4d34-a098-2c7fd3877eac 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] rbd image 8e009e75-a97b-4c5d-a470-5db1137cb407_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 24 09:55:26 compute-1 nova_compute[230010]: 2025-11-24 09:55:26.433 230014 DEBUG oslo_concurrency.processutils [None req-86e3bf7b-0e0f-4d34-a098-2c7fd3877eac 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/8e009e75-a97b-4c5d-a470-5db1137cb407/disk.config 8e009e75-a97b-4c5d-a470-5db1137cb407_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 09:55:26 compute-1 nova_compute[230010]: 2025-11-24 09:55:26.597 230014 DEBUG oslo_concurrency.processutils [None req-86e3bf7b-0e0f-4d34-a098-2c7fd3877eac 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/8e009e75-a97b-4c5d-a470-5db1137cb407/disk.config 8e009e75-a97b-4c5d-a470-5db1137cb407_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.163s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 09:55:26 compute-1 nova_compute[230010]: 2025-11-24 09:55:26.598 230014 INFO nova.virt.libvirt.driver [None req-86e3bf7b-0e0f-4d34-a098-2c7fd3877eac 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 8e009e75-a97b-4c5d-a470-5db1137cb407] Deleting local config drive /var/lib/nova/instances/8e009e75-a97b-4c5d-a470-5db1137cb407/disk.config because it was imported into RBD.
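[annotation] With the RBD image backend, the config drive is built locally with mkisofs, imported into the Ceph vms pool, and the local ISO then deleted, which is exactly the sequence in the three entries above. Roughly the same two steps as plain subprocess calls; paths, pool, and client id are copied from the log, the -publisher string is left out, and the temporary metadata directory would normally be populated first:

    import os
    import subprocess

    uuid = "8e009e75-a97b-4c5d-a470-5db1137cb407"
    iso = f"/var/lib/nova/instances/{uuid}/disk.config"

    # Build the ISO9660 config drive; volume label config-2 is what
    # cloud-init looks for when it probes for a config drive.
    subprocess.run(
        ["mkisofs", "-o", iso, "-ldots", "-allow-lowercase",
         "-allow-multidot", "-l", "-quiet", "-J", "-r",
         "-V", "config-2", "/tmp/tmp8c178wmp"],  # temp dir from the log
        check=True)

    # Import it into the 'vms' pool as a format-2 RBD image, then
    # drop the local copy, as nova does once the import succeeds.
    subprocess.run(
        ["rbd", "import", "--pool", "vms", iso,
         f"{uuid}_disk.config", "--image-format=2",
         "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"],
        check=True)
    os.remove(iso)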
Nov 24 09:55:26 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:55:26 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:55:26 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:55:26.607 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:55:26 compute-1 kernel: tape962e27f-80: entered promiscuous mode
Nov 24 09:55:26 compute-1 NetworkManager[48870]: <info>  [1763978126.6636] manager: (tape962e27f-80): new Tun device (/org/freedesktop/NetworkManager/Devices/32)
Nov 24 09:55:26 compute-1 nova_compute[230010]: 2025-11-24 09:55:26.666 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:55:26 compute-1 ovn_controller[132966]: 2025-11-24T09:55:26Z|00036|binding|INFO|Claiming lport e962e27f-80bf-4103-98ae-d8af84c6fc28 for this chassis.
Nov 24 09:55:26 compute-1 ovn_controller[132966]: 2025-11-24T09:55:26Z|00037|binding|INFO|e962e27f-80bf-4103-98ae-d8af84c6fc28: Claiming fa:16:3e:a4:f1:71 10.100.0.6
Nov 24 09:55:26 compute-1 nova_compute[230010]: 2025-11-24 09:55:26.672 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:55:26 compute-1 nova_compute[230010]: 2025-11-24 09:55:26.674 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:55:26 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:55:26.684 142336 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a4:f1:71 10.100.0.6'], port_security=['fa:16:3e:a4:f1:71 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '8e009e75-a97b-4c5d-a470-5db1137cb407', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-636fec29-e18e-45f1-aabc-369f5fd0d593', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '94d069fc040647d5a6e54894eec915fe', 'neutron:revision_number': '2', 'neutron:security_group_ids': '08677d44-dac1-4cc6-ac2a-f951a8415b1a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=cab95497-29d8-4481-acd1-a71d08bb0310, chassis=[<ovs.db.idl.Row object at 0x7f5c78678ac0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5c78678ac0>], logical_port=e962e27f-80bf-4103-98ae-d8af84c6fc28) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 24 09:55:26 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:55:26.685 142336 INFO neutron.agent.ovn.metadata.agent [-] Port e962e27f-80bf-4103-98ae-d8af84c6fc28 in datapath 636fec29-e18e-45f1-aabc-369f5fd0d593 bound to our chassis
Nov 24 09:55:26 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:55:26.686 142336 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 636fec29-e18e-45f1-aabc-369f5fd0d593
Nov 24 09:55:26 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:55:26.695 234803 DEBUG oslo.privsep.daemon [-] privsep: reply[57f931bd-64f4-425e-a6bf-4c020a995d12]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 09:55:26 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:55:26.696 142336 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap636fec29-e1 in ovnmeta-636fec29-e18e-45f1-aabc-369f5fd0d593 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 24 09:55:26 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:55:26.697 234803 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap636fec29-e0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 24 09:55:26 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:55:26.697 234803 DEBUG oslo.privsep.daemon [-] privsep: reply[0f11a6da-0ed3-41a2-aaab-2f270d30c40a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 09:55:26 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:55:26.699 234803 DEBUG oslo.privsep.daemon [-] privsep: reply[3e322adc-7e8a-447a-9cf7-651ace0d462a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
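[annotation] Provisioning metadata for the network means wiring a veth pair: one end (tap636fec29-e0) stays in the root namespace to be plugged into br-int, and the peer (tap636fec29-e1) is moved into the ovnmeta-<network> namespace where haproxy will listen. Neutron drives this through privsep-wrapped ip_lib calls, which is what the reply[...] entries above are. A bare pyroute2 sketch of the same wiring, with the privsep layer left out and names copied from the log:

    from pyroute2 import IPRoute, netns

    ns = "ovnmeta-636fec29-e18e-45f1-aabc-369f5fd0d593"
    netns.create(ns)  # raises if it already exists; guard omitted

    ip = IPRoute()
    # Create the pair, then push the -e1 end into the namespace.
    ip.link("add", ifname="tap636fec29-e0", kind="veth",
            peer="tap636fec29-e1")
    peer = ip.link_lookup(ifname="tap636fec29-e1")[0]
    ip.link("set", index=peer, net_ns_fd=ns)
    ip.link("set", index=ip.link_lookup(ifname="tap636fec29-e0")[0],
            state="up")
    ip.close()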
Nov 24 09:55:26 compute-1 systemd-udevd[235782]: Network interface NamePolicy= disabled on kernel command line.
Nov 24 09:55:26 compute-1 systemd-machined[193537]: New machine qemu-2-instance-00000003.
Nov 24 09:55:26 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:55:26.712 142476 DEBUG oslo.privsep.daemon [-] privsep: reply[fb273dea-7856-4ca7-9692-a649ba066d84]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 09:55:26 compute-1 NetworkManager[48870]: <info>  [1763978126.7197] device (tape962e27f-80): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 24 09:55:26 compute-1 NetworkManager[48870]: <info>  [1763978126.7218] device (tape962e27f-80): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 24 09:55:26 compute-1 systemd[1]: Started Virtual Machine qemu-2-instance-00000003.
Nov 24 09:55:26 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:55:26.740 234803 DEBUG oslo.privsep.daemon [-] privsep: reply[df66db83-f7e4-4d13-ac5e-bb3a248b8b7d]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 09:55:26 compute-1 nova_compute[230010]: 2025-11-24 09:55:26.749 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:55:26 compute-1 ovn_controller[132966]: 2025-11-24T09:55:26Z|00038|binding|INFO|Setting lport e962e27f-80bf-4103-98ae-d8af84c6fc28 ovn-installed in OVS
Nov 24 09:55:26 compute-1 ovn_controller[132966]: 2025-11-24T09:55:26Z|00039|binding|INFO|Setting lport e962e27f-80bf-4103-98ae-d8af84c6fc28 up in Southbound
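[annotation] At this point ovn-controller has claimed the logical port for this chassis and marked it up in the Southbound database; that transition is what lets Neutron report the port as ACTIVE and send network-vif-plugged back to Nova (received a few entries below). One way to confirm the binding from the chassis, assuming ovn-sbctl on this host can reach the SB DB:

    import json
    import subprocess

    out = subprocess.run(
        ["ovn-sbctl", "--format=json",
         "--columns=logical_port,chassis,up",
         "find", "Port_Binding",
         "logical_port=e962e27f-80bf-4103-98ae-d8af84c6fc28"],
        capture_output=True, text=True, check=True)
    print(json.loads(out.stdout))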
Nov 24 09:55:26 compute-1 nova_compute[230010]: 2025-11-24 09:55:26.755 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:55:26 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:55:26.772 234819 DEBUG oslo.privsep.daemon [-] privsep: reply[fa61ddb0-cdfd-48f7-9ced-6b575cae3205]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 09:55:26 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:55:26.778 234803 DEBUG oslo.privsep.daemon [-] privsep: reply[f62133d9-09cf-4b80-b950-5d6b009d4d20]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 09:55:26 compute-1 NetworkManager[48870]: <info>  [1763978126.7804] manager: (tap636fec29-e0): new Veth device (/org/freedesktop/NetworkManager/Devices/33)
Nov 24 09:55:26 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:55:26.815 234819 DEBUG oslo.privsep.daemon [-] privsep: reply[40e8d140-65d8-4afe-94ce-9b20e831f44f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 09:55:26 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:55:26.820 234819 DEBUG oslo.privsep.daemon [-] privsep: reply[e8ecc4f7-3a11-4c3c-9336-41995b7353a0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 09:55:26 compute-1 NetworkManager[48870]: <info>  [1763978126.8461] device (tap636fec29-e0): carrier: link connected
Nov 24 09:55:26 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:55:26.852 234819 DEBUG oslo.privsep.daemon [-] privsep: reply[690bad71-3be6-4ffa-8d0e-6e6990ab896c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 09:55:26 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:55:26.875 234803 DEBUG oslo.privsep.daemon [-] privsep: reply[c8ec0a3b-0b74-4623-8be4-0fb838e6a84d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap636fec29-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:3d:23:e6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 16], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 407039, 'reachable_time': 23832, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 235814, 'error': None, 'target': 'ovnmeta-636fec29-e18e-45f1-aabc-369f5fd0d593', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 09:55:26 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:55:26.893 234803 DEBUG oslo.privsep.daemon [-] privsep: reply[a20af0ad-4e22-4afb-9fb6-077693dc244c]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe3d:23e6'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 407039, 'tstamp': 407039}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 235815, 'error': None, 'target': 'ovnmeta-636fec29-e18e-45f1-aabc-369f5fd0d593', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 09:55:26 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:55:26.914 234803 DEBUG oslo.privsep.daemon [-] privsep: reply[4695b96c-d2ea-4664-a63d-7ebd0f854f3a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap636fec29-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:3d:23:e6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 16], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 407039, 'reachable_time': 23832, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 235816, 'error': None, 'target': 'ovnmeta-636fec29-e18e-45f1-aabc-369f5fd0d593', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 09:55:26 compute-1 nova_compute[230010]: 2025-11-24 09:55:26.937 230014 DEBUG nova.compute.manager [req-90a6330a-ef3a-45e6-9a9a-00673397ee49 req-0d86d481-d2c9-4543-a643-e981647f39c6 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 8e009e75-a97b-4c5d-a470-5db1137cb407] Received event network-vif-plugged-e962e27f-80bf-4103-98ae-d8af84c6fc28 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 24 09:55:26 compute-1 nova_compute[230010]: 2025-11-24 09:55:26.938 230014 DEBUG oslo_concurrency.lockutils [req-90a6330a-ef3a-45e6-9a9a-00673397ee49 req-0d86d481-d2c9-4543-a643-e981647f39c6 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Acquiring lock "8e009e75-a97b-4c5d-a470-5db1137cb407-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 09:55:26 compute-1 nova_compute[230010]: 2025-11-24 09:55:26.938 230014 DEBUG oslo_concurrency.lockutils [req-90a6330a-ef3a-45e6-9a9a-00673397ee49 req-0d86d481-d2c9-4543-a643-e981647f39c6 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Lock "8e009e75-a97b-4c5d-a470-5db1137cb407-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 09:55:26 compute-1 nova_compute[230010]: 2025-11-24 09:55:26.939 230014 DEBUG oslo_concurrency.lockutils [req-90a6330a-ef3a-45e6-9a9a-00673397ee49 req-0d86d481-d2c9-4543-a643-e981647f39c6 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Lock "8e009e75-a97b-4c5d-a470-5db1137cb407-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 09:55:26 compute-1 nova_compute[230010]: 2025-11-24 09:55:26.939 230014 DEBUG nova.compute.manager [req-90a6330a-ef3a-45e6-9a9a-00673397ee49 req-0d86d481-d2c9-4543-a643-e981647f39c6 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 8e009e75-a97b-4c5d-a470-5db1137cb407] Processing event network-vif-plugged-e962e27f-80bf-4103-98ae-d8af84c6fc28 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
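[annotation] That closes Nova's external-event handshake for the VIF: prepare_for_instance_event registered a waiter before os-vif plugged the port, Neutron delivered network-vif-plugged through the API (the req-90a6330a... request context), and _pop_event wakes the waiter so spawn continues instead of hitting vif_plugging_timeout. The shape of the pattern is an event registered before the action and popped by the callback; a minimal sketch of that shape, not Nova's actual code:

    import threading

    _events = {}

    def prepare(tag):
        # Register the waiter *before* kicking off the action that
        # will eventually trigger the callback (avoids a lost wakeup).
        _events[tag] = threading.Event()
        return _events[tag]

    def deliver(tag):
        # Runs on the notification path (Neutron -> API -> RPC).
        ev = _events.pop(tag, None)
        if ev:
            ev.set()

    waiter = prepare("network-vif-plugged-e962e27f")
    # ... plug the VIF, define and start the guest ...
    deliver("network-vif-plugged-e962e27f")  # normally from the callback
    if not waiter.wait(timeout=300):
        raise TimeoutError("vif plugging timed out")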
Nov 24 09:55:26 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:55:26.953 234803 DEBUG oslo.privsep.daemon [-] privsep: reply[9bcfb7e8-fa9b-4069-98ae-0cbf9d16a6a3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 09:55:27 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:55:27.020 234803 DEBUG oslo.privsep.daemon [-] privsep: reply[f4207494-6ea8-4f69-bdf8-561abbf87a80]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 09:55:27 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:55:27.022 142336 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap636fec29-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 24 09:55:27 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:55:27.022 142336 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 24 09:55:27 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:55:27.023 142336 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap636fec29-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 24 09:55:27 compute-1 nova_compute[230010]: 2025-11-24 09:55:27.025 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:55:27 compute-1 NetworkManager[48870]: <info>  [1763978127.0269] manager: (tap636fec29-e0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/34)
Nov 24 09:55:27 compute-1 kernel: tap636fec29-e0: entered promiscuous mode
Nov 24 09:55:27 compute-1 nova_compute[230010]: 2025-11-24 09:55:27.027 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:55:27 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:55:27.032 142336 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap636fec29-e0, col_values=(('external_ids', {'iface-id': '9bf2a93f-cf2b-4180-87cd-4fecaf4abe0b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 24 09:55:27 compute-1 nova_compute[230010]: 2025-11-24 09:55:27.045 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:55:27 compute-1 nova_compute[230010]: 2025-11-24 09:55:27.047 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:55:27 compute-1 ovn_controller[132966]: 2025-11-24T09:55:27Z|00040|binding|INFO|Releasing lport 9bf2a93f-cf2b-4180-87cd-4fecaf4abe0b from this chassis (sb_readonly=0)
Nov 24 09:55:27 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:55:27.049 142336 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/636fec29-e18e-45f1-aabc-369f5fd0d593.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/636fec29-e18e-45f1-aabc-369f5fd0d593.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 24 09:55:27 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:55:27.052 234803 DEBUG oslo.privsep.daemon [-] privsep: reply[1feda7d0-8c95-475f-a7ae-e6f3f0c42e2c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 09:55:27 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:55:27.053 142336 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 24 09:55:27 compute-1 ovn_metadata_agent[142331]: global
Nov 24 09:55:27 compute-1 ovn_metadata_agent[142331]:     log         /dev/log local0 debug
Nov 24 09:55:27 compute-1 ovn_metadata_agent[142331]:     log-tag     haproxy-metadata-proxy-636fec29-e18e-45f1-aabc-369f5fd0d593
Nov 24 09:55:27 compute-1 ovn_metadata_agent[142331]:     user        root
Nov 24 09:55:27 compute-1 ovn_metadata_agent[142331]:     group       root
Nov 24 09:55:27 compute-1 ovn_metadata_agent[142331]:     maxconn     1024
Nov 24 09:55:27 compute-1 ovn_metadata_agent[142331]:     pidfile     /var/lib/neutron/external/pids/636fec29-e18e-45f1-aabc-369f5fd0d593.pid.haproxy
Nov 24 09:55:27 compute-1 ovn_metadata_agent[142331]:     daemon
Nov 24 09:55:27 compute-1 ovn_metadata_agent[142331]: 
Nov 24 09:55:27 compute-1 ovn_metadata_agent[142331]: defaults
Nov 24 09:55:27 compute-1 ovn_metadata_agent[142331]:     log global
Nov 24 09:55:27 compute-1 ovn_metadata_agent[142331]:     mode http
Nov 24 09:55:27 compute-1 ovn_metadata_agent[142331]:     option httplog
Nov 24 09:55:27 compute-1 ovn_metadata_agent[142331]:     option dontlognull
Nov 24 09:55:27 compute-1 ovn_metadata_agent[142331]:     option http-server-close
Nov 24 09:55:27 compute-1 ovn_metadata_agent[142331]:     option forwardfor
Nov 24 09:55:27 compute-1 ovn_metadata_agent[142331]:     retries                 3
Nov 24 09:55:27 compute-1 ovn_metadata_agent[142331]:     timeout http-request    30s
Nov 24 09:55:27 compute-1 ovn_metadata_agent[142331]:     timeout connect         30s
Nov 24 09:55:27 compute-1 ovn_metadata_agent[142331]:     timeout client          32s
Nov 24 09:55:27 compute-1 ovn_metadata_agent[142331]:     timeout server          32s
Nov 24 09:55:27 compute-1 ovn_metadata_agent[142331]:     timeout http-keep-alive 30s
Nov 24 09:55:27 compute-1 ovn_metadata_agent[142331]: 
Nov 24 09:55:27 compute-1 ovn_metadata_agent[142331]: 
Nov 24 09:55:27 compute-1 ovn_metadata_agent[142331]: listen listener
Nov 24 09:55:27 compute-1 ovn_metadata_agent[142331]:     bind 169.254.169.254:80
Nov 24 09:55:27 compute-1 ovn_metadata_agent[142331]:     server metadata /var/lib/neutron/metadata_proxy
Nov 24 09:55:27 compute-1 ovn_metadata_agent[142331]:     http-request add-header X-OVN-Network-ID 636fec29-e18e-45f1-aabc-369f5fd0d593
Nov 24 09:55:27 compute-1 ovn_metadata_agent[142331]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 24 09:55:27 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:55:27.054 142336 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-636fec29-e18e-45f1-aabc-369f5fd0d593', 'env', 'PROCESS_TAG=haproxy-636fec29-e18e-45f1-aabc-369f5fd0d593', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/636fec29-e18e-45f1-aabc-369f5fd0d593.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
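[annotation] The rendered configuration above binds 169.254.169.254:80 inside the namespace and proxies every request to the /var/lib/neutron/metadata_proxy unix socket, adding the X-OVN-Network-ID header so the metadata agent can resolve which instance is asking. Stripped of rootwrap and of the podman wrapper that actually runs it below, the launch reduces to roughly the following; run as root, with paths taken from the log:

    import subprocess

    net = "636fec29-e18e-45f1-aabc-369f5fd0d593"
    # haproxy daemonizes itself per the 'daemon' directive in the config.
    subprocess.run(
        ["ip", "netns", "exec", f"ovnmeta-{net}",
         "haproxy", "-f",
         f"/var/lib/neutron/ovn-metadata-proxy/{net}.conf"],
        check=True)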
Nov 24 09:55:27 compute-1 nova_compute[230010]: 2025-11-24 09:55:27.059 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:55:27 compute-1 ceph-mon[80009]: pgmap v856: 353 pgs: 353 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 09:55:27 compute-1 podman[235855]: 2025-11-24 09:55:27.495466713 +0000 UTC m=+0.054483114 container create 2102cc4a70d7571ed66844281c81a17c454c21383f4d3318219469d58eddbcbf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-636fec29-e18e-45f1-aabc-369f5fd0d593, tcib_managed=true, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 24 09:55:27 compute-1 systemd[1]: Started libpod-conmon-2102cc4a70d7571ed66844281c81a17c454c21383f4d3318219469d58eddbcbf.scope.
Nov 24 09:55:27 compute-1 systemd[1]: Started libcrun container.
Nov 24 09:55:27 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d411d6d830a7518401f278920271906d8077c3d66c4ff88eb3a65c883bda73c0/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 24 09:55:27 compute-1 podman[235855]: 2025-11-24 09:55:27.468123435 +0000 UTC m=+0.027139856 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 24 09:55:27 compute-1 podman[235855]: 2025-11-24 09:55:27.566686034 +0000 UTC m=+0.125702445 container init 2102cc4a70d7571ed66844281c81a17c454c21383f4d3318219469d58eddbcbf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-636fec29-e18e-45f1-aabc-369f5fd0d593, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251118)
Nov 24 09:55:27 compute-1 podman[235855]: 2025-11-24 09:55:27.575382526 +0000 UTC m=+0.134398927 container start 2102cc4a70d7571ed66844281c81a17c454c21383f4d3318219469d58eddbcbf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-636fec29-e18e-45f1-aabc-369f5fd0d593, tcib_managed=true, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 09:55:27 compute-1 neutron-haproxy-ovnmeta-636fec29-e18e-45f1-aabc-369f5fd0d593[235902]: [NOTICE]   (235909) : New worker (235912) forked
Nov 24 09:55:27 compute-1 neutron-haproxy-ovnmeta-636fec29-e18e-45f1-aabc-369f5fd0d593[235902]: [NOTICE]   (235909) : Loading success.
Nov 24 09:55:27 compute-1 nova_compute[230010]: 2025-11-24 09:55:27.621 230014 DEBUG nova.compute.manager [None req-86e3bf7b-0e0f-4d34-a098-2c7fd3877eac 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 8e009e75-a97b-4c5d-a470-5db1137cb407] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 24 09:55:27 compute-1 nova_compute[230010]: 2025-11-24 09:55:27.623 230014 DEBUG nova.virt.driver [None req-7781f489-90a3-45f2-98f2-bf7ccb5c1239 - - - - - -] Emitting event <LifecycleEvent: 1763978127.620338, 8e009e75-a97b-4c5d-a470-5db1137cb407 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 24 09:55:27 compute-1 nova_compute[230010]: 2025-11-24 09:55:27.623 230014 INFO nova.compute.manager [None req-7781f489-90a3-45f2-98f2-bf7ccb5c1239 - - - - - -] [instance: 8e009e75-a97b-4c5d-a470-5db1137cb407] VM Started (Lifecycle Event)
Nov 24 09:55:27 compute-1 nova_compute[230010]: 2025-11-24 09:55:27.627 230014 DEBUG nova.virt.libvirt.driver [None req-86e3bf7b-0e0f-4d34-a098-2c7fd3877eac 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 8e009e75-a97b-4c5d-a470-5db1137cb407] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 24 09:55:27 compute-1 nova_compute[230010]: 2025-11-24 09:55:27.632 230014 INFO nova.virt.libvirt.driver [-] [instance: 8e009e75-a97b-4c5d-a470-5db1137cb407] Instance spawned successfully.
Nov 24 09:55:27 compute-1 nova_compute[230010]: 2025-11-24 09:55:27.632 230014 DEBUG nova.virt.libvirt.driver [None req-86e3bf7b-0e0f-4d34-a098-2c7fd3877eac 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 8e009e75-a97b-4c5d-a470-5db1137cb407] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 24 09:55:27 compute-1 nova_compute[230010]: 2025-11-24 09:55:27.641 230014 DEBUG nova.compute.manager [None req-7781f489-90a3-45f2-98f2-bf7ccb5c1239 - - - - - -] [instance: 8e009e75-a97b-4c5d-a470-5db1137cb407] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 24 09:55:27 compute-1 nova_compute[230010]: 2025-11-24 09:55:27.648 230014 DEBUG nova.compute.manager [None req-7781f489-90a3-45f2-98f2-bf7ccb5c1239 - - - - - -] [instance: 8e009e75-a97b-4c5d-a470-5db1137cb407] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 24 09:55:27 compute-1 nova_compute[230010]: 2025-11-24 09:55:27.652 230014 DEBUG nova.virt.libvirt.driver [None req-86e3bf7b-0e0f-4d34-a098-2c7fd3877eac 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 8e009e75-a97b-4c5d-a470-5db1137cb407] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 24 09:55:27 compute-1 nova_compute[230010]: 2025-11-24 09:55:27.653 230014 DEBUG nova.virt.libvirt.driver [None req-86e3bf7b-0e0f-4d34-a098-2c7fd3877eac 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 8e009e75-a97b-4c5d-a470-5db1137cb407] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 24 09:55:27 compute-1 nova_compute[230010]: 2025-11-24 09:55:27.654 230014 DEBUG nova.virt.libvirt.driver [None req-86e3bf7b-0e0f-4d34-a098-2c7fd3877eac 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 8e009e75-a97b-4c5d-a470-5db1137cb407] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 24 09:55:27 compute-1 nova_compute[230010]: 2025-11-24 09:55:27.654 230014 DEBUG nova.virt.libvirt.driver [None req-86e3bf7b-0e0f-4d34-a098-2c7fd3877eac 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 8e009e75-a97b-4c5d-a470-5db1137cb407] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 24 09:55:27 compute-1 nova_compute[230010]: 2025-11-24 09:55:27.654 230014 DEBUG nova.virt.libvirt.driver [None req-86e3bf7b-0e0f-4d34-a098-2c7fd3877eac 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 8e009e75-a97b-4c5d-a470-5db1137cb407] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 24 09:55:27 compute-1 nova_compute[230010]: 2025-11-24 09:55:27.655 230014 DEBUG nova.virt.libvirt.driver [None req-86e3bf7b-0e0f-4d34-a098-2c7fd3877eac 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 8e009e75-a97b-4c5d-a470-5db1137cb407] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 24 09:55:27 compute-1 nova_compute[230010]: 2025-11-24 09:55:27.678 230014 INFO nova.compute.manager [None req-7781f489-90a3-45f2-98f2-bf7ccb5c1239 - - - - - -] [instance: 8e009e75-a97b-4c5d-a470-5db1137cb407] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 24 09:55:27 compute-1 nova_compute[230010]: 2025-11-24 09:55:27.679 230014 DEBUG nova.virt.driver [None req-7781f489-90a3-45f2-98f2-bf7ccb5c1239 - - - - - -] Emitting event <LifecycleEvent: 1763978127.6217556, 8e009e75-a97b-4c5d-a470-5db1137cb407 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 24 09:55:27 compute-1 nova_compute[230010]: 2025-11-24 09:55:27.679 230014 INFO nova.compute.manager [None req-7781f489-90a3-45f2-98f2-bf7ccb5c1239 - - - - - -] [instance: 8e009e75-a97b-4c5d-a470-5db1137cb407] VM Paused (Lifecycle Event)
Nov 24 09:55:27 compute-1 nova_compute[230010]: 2025-11-24 09:55:27.701 230014 DEBUG nova.compute.manager [None req-7781f489-90a3-45f2-98f2-bf7ccb5c1239 - - - - - -] [instance: 8e009e75-a97b-4c5d-a470-5db1137cb407] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 24 09:55:27 compute-1 nova_compute[230010]: 2025-11-24 09:55:27.705 230014 DEBUG nova.virt.driver [None req-7781f489-90a3-45f2-98f2-bf7ccb5c1239 - - - - - -] Emitting event <LifecycleEvent: 1763978127.6270826, 8e009e75-a97b-4c5d-a470-5db1137cb407 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 24 09:55:27 compute-1 nova_compute[230010]: 2025-11-24 09:55:27.705 230014 INFO nova.compute.manager [None req-7781f489-90a3-45f2-98f2-bf7ccb5c1239 - - - - - -] [instance: 8e009e75-a97b-4c5d-a470-5db1137cb407] VM Resumed (Lifecycle Event)
Nov 24 09:55:27 compute-1 nova_compute[230010]: 2025-11-24 09:55:27.722 230014 INFO nova.compute.manager [None req-86e3bf7b-0e0f-4d34-a098-2c7fd3877eac 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 8e009e75-a97b-4c5d-a470-5db1137cb407] Took 5.71 seconds to spawn the instance on the hypervisor.
Nov 24 09:55:27 compute-1 nova_compute[230010]: 2025-11-24 09:55:27.723 230014 DEBUG nova.compute.manager [None req-86e3bf7b-0e0f-4d34-a098-2c7fd3877eac 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 8e009e75-a97b-4c5d-a470-5db1137cb407] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 24 09:55:27 compute-1 nova_compute[230010]: 2025-11-24 09:55:27.724 230014 DEBUG nova.compute.manager [None req-7781f489-90a3-45f2-98f2-bf7ccb5c1239 - - - - - -] [instance: 8e009e75-a97b-4c5d-a470-5db1137cb407] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 24 09:55:27 compute-1 nova_compute[230010]: 2025-11-24 09:55:27.731 230014 DEBUG nova.compute.manager [None req-7781f489-90a3-45f2-98f2-bf7ccb5c1239 - - - - - -] [instance: 8e009e75-a97b-4c5d-a470-5db1137cb407] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 24 09:55:27 compute-1 nova_compute[230010]: 2025-11-24 09:55:27.759 230014 INFO nova.compute.manager [None req-7781f489-90a3-45f2-98f2-bf7ccb5c1239 - - - - - -] [instance: 8e009e75-a97b-4c5d-a470-5db1137cb407] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 24 09:55:27 compute-1 nova_compute[230010]: 2025-11-24 09:55:27.782 230014 INFO nova.compute.manager [None req-86e3bf7b-0e0f-4d34-a098-2c7fd3877eac 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 8e009e75-a97b-4c5d-a470-5db1137cb407] Took 6.51 seconds to build instance.
Nov 24 09:55:27 compute-1 nova_compute[230010]: 2025-11-24 09:55:27.794 230014 DEBUG oslo_concurrency.lockutils [None req-86e3bf7b-0e0f-4d34-a098-2c7fd3877eac 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lock "8e009e75-a97b-4c5d-a470-5db1137cb407" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 6.587s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 09:55:28 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:55:28 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:55:28 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:55:28.034 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:55:28 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:55:28 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:55:28 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:55:28.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:55:29 compute-1 nova_compute[230010]: 2025-11-24 09:55:29.016 230014 DEBUG nova.compute.manager [req-bd7a9114-0c1b-4057-80fa-07b20afad1c9 req-1a6981c0-cd3a-406a-ba12-6393ed89d3ab 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 8e009e75-a97b-4c5d-a470-5db1137cb407] Received event network-vif-plugged-e962e27f-80bf-4103-98ae-d8af84c6fc28 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 24 09:55:29 compute-1 nova_compute[230010]: 2025-11-24 09:55:29.017 230014 DEBUG oslo_concurrency.lockutils [req-bd7a9114-0c1b-4057-80fa-07b20afad1c9 req-1a6981c0-cd3a-406a-ba12-6393ed89d3ab 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Acquiring lock "8e009e75-a97b-4c5d-a470-5db1137cb407-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 09:55:29 compute-1 nova_compute[230010]: 2025-11-24 09:55:29.017 230014 DEBUG oslo_concurrency.lockutils [req-bd7a9114-0c1b-4057-80fa-07b20afad1c9 req-1a6981c0-cd3a-406a-ba12-6393ed89d3ab 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Lock "8e009e75-a97b-4c5d-a470-5db1137cb407-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 09:55:29 compute-1 nova_compute[230010]: 2025-11-24 09:55:29.018 230014 DEBUG oslo_concurrency.lockutils [req-bd7a9114-0c1b-4057-80fa-07b20afad1c9 req-1a6981c0-cd3a-406a-ba12-6393ed89d3ab 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Lock "8e009e75-a97b-4c5d-a470-5db1137cb407-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 09:55:29 compute-1 nova_compute[230010]: 2025-11-24 09:55:29.018 230014 DEBUG nova.compute.manager [req-bd7a9114-0c1b-4057-80fa-07b20afad1c9 req-1a6981c0-cd3a-406a-ba12-6393ed89d3ab 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 8e009e75-a97b-4c5d-a470-5db1137cb407] No waiting events found dispatching network-vif-plugged-e962e27f-80bf-4103-98ae-d8af84c6fc28 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 24 09:55:29 compute-1 nova_compute[230010]: 2025-11-24 09:55:29.018 230014 WARNING nova.compute.manager [req-bd7a9114-0c1b-4057-80fa-07b20afad1c9 req-1a6981c0-cd3a-406a-ba12-6393ed89d3ab 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 8e009e75-a97b-4c5d-a470-5db1137cb407] Received unexpected event network-vif-plugged-e962e27f-80bf-4103-98ae-d8af84c6fc28 for instance with vm_state active and task_state None.
Nov 24 09:55:29 compute-1 ceph-mon[80009]: pgmap v857: 353 pgs: 353 active+clean; 88 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 1.8 MiB/s wr, 33 op/s
Nov 24 09:55:29 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:55:30 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:55:30 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:55:30 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:55:30.038 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:55:30 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 09:55:30 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:55:30 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:55:30 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:55:30 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:55:30.612 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:55:30 compute-1 nova_compute[230010]: 2025-11-24 09:55:30.715 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:55:30 compute-1 nova_compute[230010]: 2025-11-24 09:55:30.727 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:55:31 compute-1 podman[235924]: 2025-11-24 09:55:31.329096741 +0000 UTC m=+0.066945028 container health_status 16798e42088c21440960ac0ba4f86339f9edab5c078277a1dcf4fb01c220329c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 09:55:31 compute-1 ceph-mon[80009]: pgmap v858: 353 pgs: 353 active+clean; 88 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 1.8 MiB/s wr, 33 op/s
Nov 24 09:55:31 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:55:32 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:55:32 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:55:32 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:55:32.042 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:55:32 compute-1 ovn_controller[132966]: 2025-11-24T09:55:32Z|00041|binding|INFO|Releasing lport 9bf2a93f-cf2b-4180-87cd-4fecaf4abe0b from this chassis (sb_readonly=0)
Nov 24 09:55:32 compute-1 NetworkManager[48870]: <info>  [1763978132.4696] manager: (patch-br-int-to-provnet-aec09a4d-39ae-42d2-80ba-0cd5b53fed5d): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/35)
Nov 24 09:55:32 compute-1 NetworkManager[48870]: <info>  [1763978132.4708] manager: (patch-provnet-aec09a4d-39ae-42d2-80ba-0cd5b53fed5d-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/36)
Nov 24 09:55:32 compute-1 nova_compute[230010]: 2025-11-24 09:55:32.468 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:55:32 compute-1 ovn_controller[132966]: 2025-11-24T09:55:32Z|00042|binding|INFO|Releasing lport 9bf2a93f-cf2b-4180-87cd-4fecaf4abe0b from this chassis (sb_readonly=0)
Nov 24 09:55:32 compute-1 nova_compute[230010]: 2025-11-24 09:55:32.508 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:55:32 compute-1 nova_compute[230010]: 2025-11-24 09:55:32.514 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:55:32 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:55:32 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:55:32 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:55:32.615 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:55:32 compute-1 nova_compute[230010]: 2025-11-24 09:55:32.690 230014 DEBUG nova.compute.manager [req-028bf575-eecc-4078-ae42-00d76a1f067e req-edf32178-6b99-4ce9-a552-7820a65dfb00 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 8e009e75-a97b-4c5d-a470-5db1137cb407] Received event network-changed-e962e27f-80bf-4103-98ae-d8af84c6fc28 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 24 09:55:32 compute-1 nova_compute[230010]: 2025-11-24 09:55:32.691 230014 DEBUG nova.compute.manager [req-028bf575-eecc-4078-ae42-00d76a1f067e req-edf32178-6b99-4ce9-a552-7820a65dfb00 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 8e009e75-a97b-4c5d-a470-5db1137cb407] Refreshing instance network info cache due to event network-changed-e962e27f-80bf-4103-98ae-d8af84c6fc28. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 24 09:55:32 compute-1 nova_compute[230010]: 2025-11-24 09:55:32.691 230014 DEBUG oslo_concurrency.lockutils [req-028bf575-eecc-4078-ae42-00d76a1f067e req-edf32178-6b99-4ce9-a552-7820a65dfb00 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Acquiring lock "refresh_cache-8e009e75-a97b-4c5d-a470-5db1137cb407" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 24 09:55:32 compute-1 nova_compute[230010]: 2025-11-24 09:55:32.691 230014 DEBUG oslo_concurrency.lockutils [req-028bf575-eecc-4078-ae42-00d76a1f067e req-edf32178-6b99-4ce9-a552-7820a65dfb00 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Acquired lock "refresh_cache-8e009e75-a97b-4c5d-a470-5db1137cb407" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 24 09:55:32 compute-1 nova_compute[230010]: 2025-11-24 09:55:32.691 230014 DEBUG nova.network.neutron [req-028bf575-eecc-4078-ae42-00d76a1f067e req-edf32178-6b99-4ce9-a552-7820a65dfb00 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 8e009e75-a97b-4c5d-a470-5db1137cb407] Refreshing network info cache for port e962e27f-80bf-4103-98ae-d8af84c6fc28 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 24 09:55:33 compute-1 ceph-mon[80009]: pgmap v859: 353 pgs: 353 active+clean; 88 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 102 op/s
Nov 24 09:55:34 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:55:34 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:55:34 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:55:34.045 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:55:34 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:55:34 compute-1 nova_compute[230010]: 2025-11-24 09:55:34.438 230014 DEBUG nova.network.neutron [req-028bf575-eecc-4078-ae42-00d76a1f067e req-edf32178-6b99-4ce9-a552-7820a65dfb00 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 8e009e75-a97b-4c5d-a470-5db1137cb407] Updated VIF entry in instance network info cache for port e962e27f-80bf-4103-98ae-d8af84c6fc28. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 24 09:55:34 compute-1 nova_compute[230010]: 2025-11-24 09:55:34.439 230014 DEBUG nova.network.neutron [req-028bf575-eecc-4078-ae42-00d76a1f067e req-edf32178-6b99-4ce9-a552-7820a65dfb00 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 8e009e75-a97b-4c5d-a470-5db1137cb407] Updating instance_info_cache with network_info: [{"id": "e962e27f-80bf-4103-98ae-d8af84c6fc28", "address": "fa:16:3e:a4:f1:71", "network": {"id": "636fec29-e18e-45f1-aabc-369f5fd0d593", "bridge": "br-int", "label": "tempest-network-smoke--778674541", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.182", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94d069fc040647d5a6e54894eec915fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape962e27f-80", "ovs_interfaceid": "e962e27f-80bf-4103-98ae-d8af84c6fc28", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 24 09:55:34 compute-1 nova_compute[230010]: 2025-11-24 09:55:34.462 230014 DEBUG oslo_concurrency.lockutils [req-028bf575-eecc-4078-ae42-00d76a1f067e req-edf32178-6b99-4ce9-a552-7820a65dfb00 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Releasing lock "refresh_cache-8e009e75-a97b-4c5d-a470-5db1137cb407" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 24 09:55:34 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:55:34 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:55:34 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:55:34.618 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:55:35 compute-1 podman[235948]: 2025-11-24 09:55:35.337958163 +0000 UTC m=+0.077483885 container health_status c4a7fece2a8abbd773f184380ebf0443df5298e1c3e7a580495db5c46749acd2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 09:55:35 compute-1 ceph-mon[80009]: pgmap v860: 353 pgs: 353 active+clean; 88 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Nov 24 09:55:35 compute-1 nova_compute[230010]: 2025-11-24 09:55:35.757 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:55:36 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:55:36 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:55:36 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:55:36.048 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:55:36 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:55:36 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:55:36 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:55:36.620 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:55:37 compute-1 ceph-mon[80009]: pgmap v861: 353 pgs: 353 active+clean; 88 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Nov 24 09:55:38 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:55:38 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:55:38 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:55:38.051 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:55:38 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:55:38 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:55:38 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:55:38.623 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:55:39 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:55:39 compute-1 ceph-mon[80009]: pgmap v862: 353 pgs: 353 active+clean; 88 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 102 op/s
Nov 24 09:55:40 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:55:40 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:55:40 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:55:40.053 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:55:40 compute-1 ceph-mon[80009]: pgmap v863: 353 pgs: 353 active+clean; 88 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 69 op/s
Nov 24 09:55:40 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:55:40 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:55:40 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:55:40.625 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:55:40 compute-1 nova_compute[230010]: 2025-11-24 09:55:40.758 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:55:40 compute-1 ovn_controller[132966]: 2025-11-24T09:55:40Z|00006|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:a4:f1:71 10.100.0.6
Nov 24 09:55:40 compute-1 ovn_controller[132966]: 2025-11-24T09:55:40Z|00007|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:a4:f1:71 10.100.0.6
Nov 24 09:55:42 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:55:42 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 09:55:42 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:55:42.056 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 09:55:42 compute-1 podman[235978]: 2025-11-24 09:55:42.335465424 +0000 UTC m=+0.070319760 container health_status 6794d9d2f16f865643977633ea2bfec0506af6c4f15262c278b71a4c0754ea0b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Nov 24 09:55:42 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:55:42 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:55:42 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:55:42.627 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:55:43 compute-1 ceph-mon[80009]: pgmap v864: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 136 op/s
Nov 24 09:55:44 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:55:44 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:55:44 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:55:44.059 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:55:44 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:55:44 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:55:44 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:55:44 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:55:44.629 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:55:45 compute-1 ceph-mon[80009]: pgmap v865: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 391 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Nov 24 09:55:45 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 09:55:45 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:55:45 compute-1 nova_compute[230010]: 2025-11-24 09:55:45.761 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 24 09:55:46 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:55:46 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:55:46 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:55:46.062 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:55:46 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:55:46 compute-1 sudo[235999]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 09:55:46 compute-1 sudo[235999]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:55:46 compute-1 sudo[235999]: pam_unix(sudo:session): session closed for user root
Nov 24 09:55:46 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:55:46 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 09:55:46 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:55:46.631 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 09:55:47 compute-1 nova_compute[230010]: 2025-11-24 09:55:47.124 230014 INFO nova.compute.manager [None req-5d1b84b3-86be-48aa-8fdf-0c6f72c489ed 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 8e009e75-a97b-4c5d-a470-5db1137cb407] Get console output
Nov 24 09:55:47 compute-1 nova_compute[230010]: 2025-11-24 09:55:47.130 230014 INFO oslo.privsep.daemon [None req-5d1b84b3-86be-48aa-8fdf-0c6f72c489ed 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/nova/nova.conf', '--config-file', '/etc/nova/nova-compute.conf', '--config-dir', '/etc/nova/nova.conf.d', '--privsep_context', 'nova.privsep.sys_admin_pctxt', '--privsep_sock_path', '/tmp/tmp9rh8lyep/privsep.sock']
Nov 24 09:55:47 compute-1 ceph-mon[80009]: pgmap v866: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 391 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Nov 24 09:55:47 compute-1 nova_compute[230010]: 2025-11-24 09:55:47.853 230014 INFO oslo.privsep.daemon [None req-5d1b84b3-86be-48aa-8fdf-0c6f72c489ed 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Spawned new privsep daemon via rootwrap
Nov 24 09:55:47 compute-1 nova_compute[230010]: 2025-11-24 09:55:47.699 236028 INFO oslo.privsep.daemon [-] privsep daemon starting
Nov 24 09:55:47 compute-1 nova_compute[230010]: 2025-11-24 09:55:47.703 236028 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Nov 24 09:55:47 compute-1 nova_compute[230010]: 2025-11-24 09:55:47.705 236028 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/none
Nov 24 09:55:47 compute-1 nova_compute[230010]: 2025-11-24 09:55:47.706 236028 INFO oslo.privsep.daemon [-] privsep daemon running as pid 236028
Nov 24 09:55:47 compute-1 nova_compute[230010]: 2025-11-24 09:55:47.940 236028 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Nov 24 09:55:48 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:55:48 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:55:48 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:55:48.065 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:55:48 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:55:48 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:55:48 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:55:48.634 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:55:49 compute-1 ceph-mon[80009]: pgmap v867: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 391 KiB/s rd, 2.1 MiB/s wr, 67 op/s
Nov 24 09:55:49 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:55:50 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:55:50 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:55:50 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:55:50.068 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:55:50 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:55:50 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:55:50 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:55:50.638 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:55:50 compute-1 nova_compute[230010]: 2025-11-24 09:55:50.764 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 24 09:55:51 compute-1 ceph-mon[80009]: pgmap v868: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 391 KiB/s rd, 2.1 MiB/s wr, 67 op/s
Nov 24 09:55:52 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:55:52 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:55:52 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:55:52.071 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:55:52 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:55:52 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:55:52 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:55:52.641 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:55:53 compute-1 ceph-mon[80009]: pgmap v869: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 392 KiB/s rd, 2.1 MiB/s wr, 68 op/s
Nov 24 09:55:54 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:55:54 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:55:54 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:55:54.074 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:55:54 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:55:54 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:55:54 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:55:54 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:55:54.644 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:55:55 compute-1 ceph-mon[80009]: pgmap v870: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 12 KiB/s wr, 1 op/s
Nov 24 09:55:55 compute-1 nova_compute[230010]: 2025-11-24 09:55:55.766 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 24 09:55:55 compute-1 nova_compute[230010]: 2025-11-24 09:55:55.768 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:55:55 compute-1 nova_compute[230010]: 2025-11-24 09:55:55.768 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Nov 24 09:55:55 compute-1 nova_compute[230010]: 2025-11-24 09:55:55.768 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Nov 24 09:55:55 compute-1 nova_compute[230010]: 2025-11-24 09:55:55.769 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Nov 24 09:55:55 compute-1 nova_compute[230010]: 2025-11-24 09:55:55.771 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:55:56 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:55:56 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:55:56 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:55:56.077 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:55:56 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:55:56 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:55:56 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:55:56.646 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:55:56 compute-1 ceph-mon[80009]: pgmap v871: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 12 KiB/s wr, 1 op/s
Nov 24 09:55:56 compute-1 nova_compute[230010]: 2025-11-24 09:55:56.861 230014 DEBUG oslo_concurrency.lockutils [None req-edf79fa3-8b3d-40ef-872f-238a87ce07d9 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Acquiring lock "interface-8e009e75-a97b-4c5d-a470-5db1137cb407-None" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 09:55:56 compute-1 nova_compute[230010]: 2025-11-24 09:55:56.862 230014 DEBUG oslo_concurrency.lockutils [None req-edf79fa3-8b3d-40ef-872f-238a87ce07d9 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lock "interface-8e009e75-a97b-4c5d-a470-5db1137cb407-None" acquired by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 09:55:56 compute-1 nova_compute[230010]: 2025-11-24 09:55:56.862 230014 DEBUG nova.objects.instance [None req-edf79fa3-8b3d-40ef-872f-238a87ce07d9 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lazy-loading 'flavor' on Instance uuid 8e009e75-a97b-4c5d-a470-5db1137cb407 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 24 09:55:57 compute-1 nova_compute[230010]: 2025-11-24 09:55:57.434 230014 DEBUG nova.objects.instance [None req-edf79fa3-8b3d-40ef-872f-238a87ce07d9 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lazy-loading 'pci_requests' on Instance uuid 8e009e75-a97b-4c5d-a470-5db1137cb407 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 24 09:55:57 compute-1 nova_compute[230010]: 2025-11-24 09:55:57.449 230014 DEBUG nova.network.neutron [None req-edf79fa3-8b3d-40ef-872f-238a87ce07d9 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 8e009e75-a97b-4c5d-a470-5db1137cb407] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 24 09:55:57 compute-1 nova_compute[230010]: 2025-11-24 09:55:57.595 230014 DEBUG nova.policy [None req-edf79fa3-8b3d-40ef-872f-238a87ce07d9 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '43f79ff3105e4372a3c095e8057d4f1f', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '94d069fc040647d5a6e54894eec915fe', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 24 09:55:58 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:55:58 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:55:58 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:55:58.080 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:55:58 compute-1 nova_compute[230010]: 2025-11-24 09:55:58.338 230014 DEBUG nova.network.neutron [None req-edf79fa3-8b3d-40ef-872f-238a87ce07d9 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 8e009e75-a97b-4c5d-a470-5db1137cb407] Successfully created port: faa80fbe-f017-47cd-96c8-ca0747a39410 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 24 09:55:58 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:55:58 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:55:58 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:55:58.649 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:55:58 compute-1 nova_compute[230010]: 2025-11-24 09:55:58.993 230014 DEBUG nova.network.neutron [None req-edf79fa3-8b3d-40ef-872f-238a87ce07d9 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 8e009e75-a97b-4c5d-a470-5db1137cb407] Successfully updated port: faa80fbe-f017-47cd-96c8-ca0747a39410 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 24 09:55:59 compute-1 nova_compute[230010]: 2025-11-24 09:55:59.005 230014 DEBUG oslo_concurrency.lockutils [None req-edf79fa3-8b3d-40ef-872f-238a87ce07d9 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Acquiring lock "refresh_cache-8e009e75-a97b-4c5d-a470-5db1137cb407" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 24 09:55:59 compute-1 nova_compute[230010]: 2025-11-24 09:55:59.005 230014 DEBUG oslo_concurrency.lockutils [None req-edf79fa3-8b3d-40ef-872f-238a87ce07d9 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Acquired lock "refresh_cache-8e009e75-a97b-4c5d-a470-5db1137cb407" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 24 09:55:59 compute-1 nova_compute[230010]: 2025-11-24 09:55:59.005 230014 DEBUG nova.network.neutron [None req-edf79fa3-8b3d-40ef-872f-238a87ce07d9 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 8e009e75-a97b-4c5d-a470-5db1137cb407] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 24 09:55:59 compute-1 nova_compute[230010]: 2025-11-24 09:55:59.092 230014 DEBUG nova.compute.manager [req-53c9d336-2387-470f-bea6-87d4b3cc617a req-e43fcafc-9e08-4c4c-b52a-c499591a77a3 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 8e009e75-a97b-4c5d-a470-5db1137cb407] Received event network-changed-faa80fbe-f017-47cd-96c8-ca0747a39410 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 24 09:55:59 compute-1 nova_compute[230010]: 2025-11-24 09:55:59.093 230014 DEBUG nova.compute.manager [req-53c9d336-2387-470f-bea6-87d4b3cc617a req-e43fcafc-9e08-4c4c-b52a-c499591a77a3 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 8e009e75-a97b-4c5d-a470-5db1137cb407] Refreshing instance network info cache due to event network-changed-faa80fbe-f017-47cd-96c8-ca0747a39410. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 24 09:55:59 compute-1 nova_compute[230010]: 2025-11-24 09:55:59.093 230014 DEBUG oslo_concurrency.lockutils [req-53c9d336-2387-470f-bea6-87d4b3cc617a req-e43fcafc-9e08-4c4c-b52a-c499591a77a3 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Acquiring lock "refresh_cache-8e009e75-a97b-4c5d-a470-5db1137cb407" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 24 09:55:59 compute-1 ceph-mon[80009]: pgmap v872: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 14 KiB/s wr, 2 op/s
Nov 24 09:55:59 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:56:00 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:56:00 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:56:00 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:56:00.083 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:56:00 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 09:56:00 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:56:00 compute-1 nova_compute[230010]: 2025-11-24 09:56:00.627 230014 DEBUG nova.network.neutron [None req-edf79fa3-8b3d-40ef-872f-238a87ce07d9 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 8e009e75-a97b-4c5d-a470-5db1137cb407] Updating instance_info_cache with network_info: [{"id": "e962e27f-80bf-4103-98ae-d8af84c6fc28", "address": "fa:16:3e:a4:f1:71", "network": {"id": "636fec29-e18e-45f1-aabc-369f5fd0d593", "bridge": "br-int", "label": "tempest-network-smoke--778674541", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.182", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94d069fc040647d5a6e54894eec915fe", "mtu": null, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape962e27f-80", "ovs_interfaceid": "e962e27f-80bf-4103-98ae-d8af84c6fc28", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "faa80fbe-f017-47cd-96c8-ca0747a39410", "address": "fa:16:3e:23:de:6c", "network": {"id": "2dfea9d1-73f4-435f-ade1-dce53efe0c39", "bridge": "br-int", "label": "tempest-network-smoke--1540330946", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.29", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94d069fc040647d5a6e54894eec915fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfaa80fbe-f0", "ovs_interfaceid": "faa80fbe-f017-47cd-96c8-ca0747a39410", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 24 09:56:00 compute-1 nova_compute[230010]: 2025-11-24 09:56:00.643 230014 DEBUG oslo_concurrency.lockutils [None req-edf79fa3-8b3d-40ef-872f-238a87ce07d9 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Releasing lock "refresh_cache-8e009e75-a97b-4c5d-a470-5db1137cb407" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 24 09:56:00 compute-1 nova_compute[230010]: 2025-11-24 09:56:00.644 230014 DEBUG oslo_concurrency.lockutils [req-53c9d336-2387-470f-bea6-87d4b3cc617a req-e43fcafc-9e08-4c4c-b52a-c499591a77a3 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Acquired lock "refresh_cache-8e009e75-a97b-4c5d-a470-5db1137cb407" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 24 09:56:00 compute-1 nova_compute[230010]: 2025-11-24 09:56:00.644 230014 DEBUG nova.network.neutron [req-53c9d336-2387-470f-bea6-87d4b3cc617a req-e43fcafc-9e08-4c4c-b52a-c499591a77a3 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 8e009e75-a97b-4c5d-a470-5db1137cb407] Refreshing network info cache for port faa80fbe-f017-47cd-96c8-ca0747a39410 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 24 09:56:00 compute-1 nova_compute[230010]: 2025-11-24 09:56:00.647 230014 DEBUG nova.virt.libvirt.vif [None req-edf79fa3-8b3d-40ef-872f-238a87ce07d9 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-24T09:55:20Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1469091475',display_name='tempest-TestNetworkBasicOps-server-1469091475',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1469091475',id=3,image_ref='6ef14bdf-4f04-4400-8040-4409d9d5271e',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJZZQMjyTSwgkfidZfPhBPgBcV62YzWExHHXsl1BnsLfJjAX1c531QA8puLkgpD93eEa7lPae/Gh1kFnVkWZAW6FTPgZg7BzeD7RovkQcC7HReAVJUg962qa1kvY0rkgvg==',key_name='tempest-TestNetworkBasicOps-853741544',keypairs=<?>,launch_index=0,launched_at=2025-11-24T09:55:27Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='94d069fc040647d5a6e54894eec915fe',ramdisk_id='',reservation_id='r-mfq37y5p',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6ef14bdf-4f04-4400-8040-4409d9d5271e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1844071378',owner_user_name='tempest-TestNetworkBasicOps-1844071378-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-24T09:55:27Z,user_data=None,user_id='43f79ff3105e4372a3c095e8057d4f1f',uuid=8e009e75-a97b-4c5d-a470-5db1137cb407,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "faa80fbe-f017-47cd-96c8-ca0747a39410", "address": "fa:16:3e:23:de:6c", "network": {"id": "2dfea9d1-73f4-435f-ade1-dce53efe0c39", "bridge": "br-int", "label": "tempest-network-smoke--1540330946", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.29", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94d069fc040647d5a6e54894eec915fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfaa80fbe-f0", "ovs_interfaceid": "faa80fbe-f017-47cd-96c8-ca0747a39410", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 24 09:56:00 compute-1 nova_compute[230010]: 2025-11-24 09:56:00.648 230014 DEBUG nova.network.os_vif_util [None req-edf79fa3-8b3d-40ef-872f-238a87ce07d9 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Converting VIF {"id": "faa80fbe-f017-47cd-96c8-ca0747a39410", "address": "fa:16:3e:23:de:6c", "network": {"id": "2dfea9d1-73f4-435f-ade1-dce53efe0c39", "bridge": "br-int", "label": "tempest-network-smoke--1540330946", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.29", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94d069fc040647d5a6e54894eec915fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfaa80fbe-f0", "ovs_interfaceid": "faa80fbe-f017-47cd-96c8-ca0747a39410", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 24 09:56:00 compute-1 nova_compute[230010]: 2025-11-24 09:56:00.648 230014 DEBUG nova.network.os_vif_util [None req-edf79fa3-8b3d-40ef-872f-238a87ce07d9 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:23:de:6c,bridge_name='br-int',has_traffic_filtering=True,id=faa80fbe-f017-47cd-96c8-ca0747a39410,network=Network(2dfea9d1-73f4-435f-ade1-dce53efe0c39),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfaa80fbe-f0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 24 09:56:00 compute-1 nova_compute[230010]: 2025-11-24 09:56:00.649 230014 DEBUG os_vif [None req-edf79fa3-8b3d-40ef-872f-238a87ce07d9 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:23:de:6c,bridge_name='br-int',has_traffic_filtering=True,id=faa80fbe-f017-47cd-96c8-ca0747a39410,network=Network(2dfea9d1-73f4-435f-ade1-dce53efe0c39),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfaa80fbe-f0') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
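
The three lines above show the conversion boundary: nova's JSON VIF dict becomes an os-vif VIFOpenVSwitch object, which is then handed to the public os_vif.plug() entry point (os_vif/__init__.py:76). A minimal sketch of driving os-vif directly with the values from this log; the exact set of object fields is taken from the logged repr, and nova itself builds these in nova/network/os_vif_util.py rather than by hand like this:

    import os_vif
    from os_vif.objects.instance_info import InstanceInfo
    from os_vif.objects.network import Network
    from os_vif.objects.vif import VIFOpenVSwitch, VIFPortProfileOpenVSwitch

    os_vif.initialize()  # loads the os-vif plugins once per process

    vif = VIFOpenVSwitch(
        id='faa80fbe-f017-47cd-96c8-ca0747a39410',
        address='fa:16:3e:23:de:6c',
        bridge_name='br-int',
        vif_name='tapfaa80fbe-f0',
        has_traffic_filtering=True,
        network=Network(id='2dfea9d1-73f4-435f-ade1-dce53efe0c39', bridge='br-int'),
        port_profile=VIFPortProfileOpenVSwitch(
            interface_id='faa80fbe-f017-47cd-96c8-ca0747a39410'),
    )
    instance = InstanceInfo(uuid='8e009e75-a97b-4c5d-a470-5db1137cb407',
                            name='tempest-TestNetworkBasicOps-server-1469091475')

    os_vif.plug(vif, instance)  # dispatches to the 'ovs' plugin, as logged above
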
Nov 24 09:56:00 compute-1 nova_compute[230010]: 2025-11-24 09:56:00.649 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:56:00 compute-1 nova_compute[230010]: 2025-11-24 09:56:00.650 230014 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 24 09:56:00 compute-1 nova_compute[230010]: 2025-11-24 09:56:00.650 230014 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 24 09:56:00 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:56:00 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:56:00 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:56:00.653 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:56:00 compute-1 nova_compute[230010]: 2025-11-24 09:56:00.658 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:56:00 compute-1 nova_compute[230010]: 2025-11-24 09:56:00.658 230014 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfaa80fbe-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 24 09:56:00 compute-1 nova_compute[230010]: 2025-11-24 09:56:00.658 230014 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapfaa80fbe-f0, col_values=(('external_ids', {'iface-id': 'faa80fbe-f017-47cd-96c8-ca0747a39410', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:23:de:6c', 'vm-uuid': '8e009e75-a97b-4c5d-a470-5db1137cb407'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
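
The AddBridgeCommand, AddPortCommand and DbSetCommand entries above are ovsdbapp commands batched into OVSDB transactions. Every command runs with may_exist=True, so replaying one against existing state commits nothing; that is what "Transaction caused no change" means for the br-int bridge. A sketch of the same sequence against the local OVSDB (the unix socket path is a common default and an assumption here):

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server('unix:/run/openvswitch/db.sock',
                                          'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl=idl, timeout=10))

    with api.transaction(check_error=True) as txn:
        txn.add(api.add_br('br-int', may_exist=True, datapath_type='system'))
        txn.add(api.add_port('br-int', 'tapfaa80fbe-f0', may_exist=True))
        # external_ids carry the neutron port UUID and MAC; ovn-controller
        # matches iface-id against Port_Binding rows to claim the port.
        txn.add(api.db_set('Interface', 'tapfaa80fbe-f0',
                           ('external_ids',
                            {'iface-id': 'faa80fbe-f017-47cd-96c8-ca0747a39410',
                             'iface-status': 'active',
                             'attached-mac': 'fa:16:3e:23:de:6c',
                             'vm-uuid': '8e009e75-a97b-4c5d-a470-5db1137cb407'})))
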
Nov 24 09:56:00 compute-1 nova_compute[230010]: 2025-11-24 09:56:00.660 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:56:00 compute-1 NetworkManager[48870]: <info>  [1763978160.6615] manager: (tapfaa80fbe-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/37)
Nov 24 09:56:00 compute-1 nova_compute[230010]: 2025-11-24 09:56:00.663 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 24 09:56:00 compute-1 nova_compute[230010]: 2025-11-24 09:56:00.666 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:56:00 compute-1 nova_compute[230010]: 2025-11-24 09:56:00.667 230014 INFO os_vif [None req-edf79fa3-8b3d-40ef-872f-238a87ce07d9 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:23:de:6c,bridge_name='br-int',has_traffic_filtering=True,id=faa80fbe-f017-47cd-96c8-ca0747a39410,network=Network(2dfea9d1-73f4-435f-ade1-dce53efe0c39),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfaa80fbe-f0')
Nov 24 09:56:00 compute-1 nova_compute[230010]: 2025-11-24 09:56:00.668 230014 DEBUG nova.virt.libvirt.vif [None req-edf79fa3-8b3d-40ef-872f-238a87ce07d9 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-24T09:55:20Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1469091475',display_name='tempest-TestNetworkBasicOps-server-1469091475',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1469091475',id=3,image_ref='6ef14bdf-4f04-4400-8040-4409d9d5271e',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJZZQMjyTSwgkfidZfPhBPgBcV62YzWExHHXsl1BnsLfJjAX1c531QA8puLkgpD93eEa7lPae/Gh1kFnVkWZAW6FTPgZg7BzeD7RovkQcC7HReAVJUg962qa1kvY0rkgvg==',key_name='tempest-TestNetworkBasicOps-853741544',keypairs=<?>,launch_index=0,launched_at=2025-11-24T09:55:27Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='94d069fc040647d5a6e54894eec915fe',ramdisk_id='',reservation_id='r-mfq37y5p',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6ef14bdf-4f04-4400-8040-4409d9d5271e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1844071378',owner_user_name='tempest-TestNetworkBasicOps-1844071378-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-24T09:55:27Z,user_data=None,user_id='43f79ff3105e4372a3c095e8057d4f1f',uuid=8e009e75-a97b-4c5d-a470-5db1137cb407,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "faa80fbe-f017-47cd-96c8-ca0747a39410", "address": "fa:16:3e:23:de:6c", "network": {"id": "2dfea9d1-73f4-435f-ade1-dce53efe0c39", "bridge": "br-int", "label": "tempest-network-smoke--1540330946", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.29", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94d069fc040647d5a6e54894eec915fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfaa80fbe-f0", "ovs_interfaceid": "faa80fbe-f017-47cd-96c8-ca0747a39410", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 24 09:56:00 compute-1 nova_compute[230010]: 2025-11-24 09:56:00.668 230014 DEBUG nova.network.os_vif_util [None req-edf79fa3-8b3d-40ef-872f-238a87ce07d9 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Converting VIF {"id": "faa80fbe-f017-47cd-96c8-ca0747a39410", "address": "fa:16:3e:23:de:6c", "network": {"id": "2dfea9d1-73f4-435f-ade1-dce53efe0c39", "bridge": "br-int", "label": "tempest-network-smoke--1540330946", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.29", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94d069fc040647d5a6e54894eec915fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfaa80fbe-f0", "ovs_interfaceid": "faa80fbe-f017-47cd-96c8-ca0747a39410", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 24 09:56:00 compute-1 nova_compute[230010]: 2025-11-24 09:56:00.669 230014 DEBUG nova.network.os_vif_util [None req-edf79fa3-8b3d-40ef-872f-238a87ce07d9 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:23:de:6c,bridge_name='br-int',has_traffic_filtering=True,id=faa80fbe-f017-47cd-96c8-ca0747a39410,network=Network(2dfea9d1-73f4-435f-ade1-dce53efe0c39),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfaa80fbe-f0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 24 09:56:00 compute-1 nova_compute[230010]: 2025-11-24 09:56:00.672 230014 DEBUG nova.virt.libvirt.guest [None req-edf79fa3-8b3d-40ef-872f-238a87ce07d9 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] attach device xml: <interface type="ethernet">
Nov 24 09:56:00 compute-1 nova_compute[230010]:   <mac address="fa:16:3e:23:de:6c"/>
Nov 24 09:56:00 compute-1 nova_compute[230010]:   <model type="virtio"/>
Nov 24 09:56:00 compute-1 nova_compute[230010]:   <driver name="vhost" rx_queue_size="512"/>
Nov 24 09:56:00 compute-1 nova_compute[230010]:   <mtu size="1442"/>
Nov 24 09:56:00 compute-1 nova_compute[230010]:   <target dev="tapfaa80fbe-f0"/>
Nov 24 09:56:00 compute-1 nova_compute[230010]: </interface>
Nov 24 09:56:00 compute-1 nova_compute[230010]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
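
nova's guest.attach_device() (logged at guest.py:339) is a thin wrapper around a plain libvirt call. With the interface XML printed above, the raw equivalent is roughly the following libvirt-python sketch, using the instance UUID from this log:

    import libvirt

    IFACE_XML = """<interface type="ethernet">
      <mac address="fa:16:3e:23:de:6c"/>
      <model type="virtio"/>
      <driver name="vhost" rx_queue_size="512"/>
      <mtu size="1442"/>
      <target dev="tapfaa80fbe-f0"/>
    </interface>"""

    conn = libvirt.open('qemu:///system')
    dom = conn.lookupByUUIDString('8e009e75-a97b-4c5d-a470-5db1137cb407')
    # Attach to the running guest and persist in the domain definition,
    # matching the live+config semantics used here.
    dom.attachDeviceFlags(IFACE_XML,
                          libvirt.VIR_DOMAIN_AFFECT_LIVE |
                          libvirt.VIR_DOMAIN_AFFECT_CONFIG)
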
Nov 24 09:56:00 compute-1 kernel: tapfaa80fbe-f0: entered promiscuous mode
Nov 24 09:56:00 compute-1 NetworkManager[48870]: <info>  [1763978160.6842] manager: (tapfaa80fbe-f0): new Tun device (/org/freedesktop/NetworkManager/Devices/38)
Nov 24 09:56:00 compute-1 ovn_controller[132966]: 2025-11-24T09:56:00Z|00043|binding|INFO|Claiming lport faa80fbe-f017-47cd-96c8-ca0747a39410 for this chassis.
Nov 24 09:56:00 compute-1 ovn_controller[132966]: 2025-11-24T09:56:00Z|00044|binding|INFO|faa80fbe-f017-47cd-96c8-ca0747a39410: Claiming fa:16:3e:23:de:6c 10.100.0.29
Nov 24 09:56:00 compute-1 nova_compute[230010]: 2025-11-24 09:56:00.687 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:56:00 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:56:00.694 142336 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:23:de:6c 10.100.0.29'], port_security=['fa:16:3e:23:de:6c 10.100.0.29'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.29/28', 'neutron:device_id': '8e009e75-a97b-4c5d-a470-5db1137cb407', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2dfea9d1-73f4-435f-ade1-dce53efe0c39', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '94d069fc040647d5a6e54894eec915fe', 'neutron:revision_number': '2', 'neutron:security_group_ids': '33c3a403-57a0-4b88-8817-f12f4bfc92ae', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d3401f0a-5661-425e-b817-1a9ea0eafa9c, chassis=[<ovs.db.idl.Row object at 0x7f5c78678ac0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5c78678ac0>], logical_port=faa80fbe-f017-47cd-96c8-ca0747a39410) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 24 09:56:00 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:56:00.696 142336 INFO neutron.agent.ovn.metadata.agent [-] Port faa80fbe-f017-47cd-96c8-ca0747a39410 in datapath 2dfea9d1-73f4-435f-ade1-dce53efe0c39 bound to our chassis
Nov 24 09:56:00 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:56:00.697 142336 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 2dfea9d1-73f4-435f-ade1-dce53efe0c39
Nov 24 09:56:00 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:56:00.710 234803 DEBUG oslo.privsep.daemon [-] privsep: reply[1d2705d1-68d0-4f02-87ae-6fc157d7f5ff]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 09:56:00 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:56:00.710 142336 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap2dfea9d1-71 in ovnmeta-2dfea9d1-73f4-435f-ade1-dce53efe0c39 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
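
The veth pair carrying metadata traffic is created through oslo.privsep, which is what the surrounding "privsep: reply" lines are; the privileged helper drives netlink underneath. A rough equivalent of this provisioning step, sketched with pyroute2 (a library neutron uses for this; idempotence checks and error handling omitted):

    from pyroute2 import IPRoute, netns

    NS = 'ovnmeta-2dfea9d1-73f4-435f-ade1-dce53efe0c39'
    netns.create(NS)

    ipr = IPRoute()
    # tap2dfea9d1-70 stays in the root namespace and is plugged into
    # br-int below; its peer tap2dfea9d1-71 lands inside the namespace.
    ipr.link('add', ifname='tap2dfea9d1-70', kind='veth',
             peer={'ifname': 'tap2dfea9d1-71', 'net_ns_fd': NS})
    ipr.close()
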
Nov 24 09:56:00 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:56:00.711 234803 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap2dfea9d1-70 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 24 09:56:00 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:56:00.712 234803 DEBUG oslo.privsep.daemon [-] privsep: reply[ce431c8f-ffdc-4e5f-a2b7-937d0f8d3ef2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 09:56:00 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:56:00.712 234803 DEBUG oslo.privsep.daemon [-] privsep: reply[df5f86fa-9775-4460-afc5-6a2617a7fe02]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 09:56:00 compute-1 systemd-udevd[236045]: Network interface NamePolicy= disabled on kernel command line.
Nov 24 09:56:00 compute-1 nova_compute[230010]: 2025-11-24 09:56:00.724 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:56:00 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:56:00.725 142476 DEBUG oslo.privsep.daemon [-] privsep: reply[d7cf6f4a-246e-415b-8949-a6484fd388da]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 09:56:00 compute-1 ovn_controller[132966]: 2025-11-24T09:56:00Z|00045|binding|INFO|Setting lport faa80fbe-f017-47cd-96c8-ca0747a39410 ovn-installed in OVS
Nov 24 09:56:00 compute-1 ovn_controller[132966]: 2025-11-24T09:56:00Z|00046|binding|INFO|Setting lport faa80fbe-f017-47cd-96c8-ca0747a39410 up in Southbound
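
Claiming (00043/00044) and marking the port up (00045/00046) are state changes in the OVN Southbound database. One way to confirm the binding from Python, sketched with ovsdbapp's generic commands; the SB socket path is a typical default and an assumption here:

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.ovn_southbound import impl_idl

    idl = connection.OvsdbIdl.from_server('unix:/run/ovn/ovnsb_db.sock',
                                          'OVN_Southbound')
    sb = impl_idl.OvnSbApiIdlImpl(connection.Connection(idl=idl, timeout=10))

    rows = sb.db_find(
        'Port_Binding',
        ('logical_port', '=', 'faa80fbe-f017-47cd-96c8-ca0747a39410'),
        columns=['logical_port', 'chassis', 'up']).execute(check_error=True)
    print(rows)  # chassis should point at compute-1 and up should read [True]
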
Nov 24 09:56:00 compute-1 nova_compute[230010]: 2025-11-24 09:56:00.732 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:56:00 compute-1 NetworkManager[48870]: <info>  [1763978160.7361] device (tapfaa80fbe-f0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 24 09:56:00 compute-1 NetworkManager[48870]: <info>  [1763978160.7380] device (tapfaa80fbe-f0): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 24 09:56:00 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:56:00.750 234803 DEBUG oslo.privsep.daemon [-] privsep: reply[65b2c5d0-08f2-4be9-bd14-5ae0ce6aa90f]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 09:56:00 compute-1 nova_compute[230010]: 2025-11-24 09:56:00.770 230014 DEBUG nova.virt.libvirt.driver [None req-edf79fa3-8b3d-40ef-872f-238a87ce07d9 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 24 09:56:00 compute-1 nova_compute[230010]: 2025-11-24 09:56:00.770 230014 DEBUG nova.virt.libvirt.driver [None req-edf79fa3-8b3d-40ef-872f-238a87ce07d9 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 24 09:56:00 compute-1 nova_compute[230010]: 2025-11-24 09:56:00.770 230014 DEBUG nova.virt.libvirt.driver [None req-edf79fa3-8b3d-40ef-872f-238a87ce07d9 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] No VIF found with MAC fa:16:3e:a4:f1:71, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 24 09:56:00 compute-1 nova_compute[230010]: 2025-11-24 09:56:00.770 230014 DEBUG nova.virt.libvirt.driver [None req-edf79fa3-8b3d-40ef-872f-238a87ce07d9 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] No VIF found with MAC fa:16:3e:23:de:6c, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 24 09:56:00 compute-1 nova_compute[230010]: 2025-11-24 09:56:00.772 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:56:00 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:56:00.776 234819 DEBUG oslo.privsep.daemon [-] privsep: reply[bc02ad44-d6d0-4084-a4df-ac9b8dad3735]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 09:56:00 compute-1 systemd-udevd[236048]: Network interface NamePolicy= disabled on kernel command line.
Nov 24 09:56:00 compute-1 NetworkManager[48870]: <info>  [1763978160.7820] manager: (tap2dfea9d1-70): new Veth device (/org/freedesktop/NetworkManager/Devices/39)
Nov 24 09:56:00 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:56:00.782 234803 DEBUG oslo.privsep.daemon [-] privsep: reply[935387a5-024b-44ba-b3a8-c39d0c98cea4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 09:56:00 compute-1 nova_compute[230010]: 2025-11-24 09:56:00.803 230014 DEBUG nova.virt.libvirt.guest [None req-edf79fa3-8b3d-40ef-872f-238a87ce07d9 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 24 09:56:00 compute-1 nova_compute[230010]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 24 09:56:00 compute-1 nova_compute[230010]:   <nova:name>tempest-TestNetworkBasicOps-server-1469091475</nova:name>
Nov 24 09:56:00 compute-1 nova_compute[230010]:   <nova:creationTime>2025-11-24 09:56:00</nova:creationTime>
Nov 24 09:56:00 compute-1 nova_compute[230010]:   <nova:flavor name="m1.nano">
Nov 24 09:56:00 compute-1 nova_compute[230010]:     <nova:memory>128</nova:memory>
Nov 24 09:56:00 compute-1 nova_compute[230010]:     <nova:disk>1</nova:disk>
Nov 24 09:56:00 compute-1 nova_compute[230010]:     <nova:swap>0</nova:swap>
Nov 24 09:56:00 compute-1 nova_compute[230010]:     <nova:ephemeral>0</nova:ephemeral>
Nov 24 09:56:00 compute-1 nova_compute[230010]:     <nova:vcpus>1</nova:vcpus>
Nov 24 09:56:00 compute-1 nova_compute[230010]:   </nova:flavor>
Nov 24 09:56:00 compute-1 nova_compute[230010]:   <nova:owner>
Nov 24 09:56:00 compute-1 nova_compute[230010]:     <nova:user uuid="43f79ff3105e4372a3c095e8057d4f1f">tempest-TestNetworkBasicOps-1844071378-project-member</nova:user>
Nov 24 09:56:00 compute-1 nova_compute[230010]:     <nova:project uuid="94d069fc040647d5a6e54894eec915fe">tempest-TestNetworkBasicOps-1844071378</nova:project>
Nov 24 09:56:00 compute-1 nova_compute[230010]:   </nova:owner>
Nov 24 09:56:00 compute-1 nova_compute[230010]:   <nova:root type="image" uuid="6ef14bdf-4f04-4400-8040-4409d9d5271e"/>
Nov 24 09:56:00 compute-1 nova_compute[230010]:   <nova:ports>
Nov 24 09:56:00 compute-1 nova_compute[230010]:     <nova:port uuid="e962e27f-80bf-4103-98ae-d8af84c6fc28">
Nov 24 09:56:00 compute-1 nova_compute[230010]:       <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Nov 24 09:56:00 compute-1 nova_compute[230010]:     </nova:port>
Nov 24 09:56:00 compute-1 nova_compute[230010]:     <nova:port uuid="faa80fbe-f017-47cd-96c8-ca0747a39410">
Nov 24 09:56:00 compute-1 nova_compute[230010]:       <nova:ip type="fixed" address="10.100.0.29" ipVersion="4"/>
Nov 24 09:56:00 compute-1 nova_compute[230010]:     </nova:port>
Nov 24 09:56:00 compute-1 nova_compute[230010]:   </nova:ports>
Nov 24 09:56:00 compute-1 nova_compute[230010]: </nova:instance>
Nov 24 09:56:00 compute-1 nova_compute[230010]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
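
set_metadata() (guest.py:359) stores that <nova:instance> document in the domain's <metadata> element via virDomainSetMetadata. A raw-libvirt sketch; the namespace URI comes from the XML above, the key string is illustrative, and the payload is shortened here:

    import libvirt

    NOVA_NS = 'http://openstack.org/xmlns/libvirt/nova/1.1'
    metadata_xml = (
        '<nova:instance xmlns:nova="%s">'
        '<nova:name>tempest-TestNetworkBasicOps-server-1469091475</nova:name>'
        '</nova:instance>' % NOVA_NS)  # the real payload is the full document above

    conn = libvirt.open('qemu:///system')
    dom = conn.lookupByUUIDString('8e009e75-a97b-4c5d-a470-5db1137cb407')
    dom.setMetadata(libvirt.VIR_DOMAIN_METADATA_ELEMENT, metadata_xml,
                    'nova', NOVA_NS,
                    libvirt.VIR_DOMAIN_AFFECT_LIVE |
                    libvirt.VIR_DOMAIN_AFFECT_CONFIG)
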
Nov 24 09:56:00 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:56:00.809 234819 DEBUG oslo.privsep.daemon [-] privsep: reply[5dcf3d51-7575-4b72-badf-524d0526dbe7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 09:56:00 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:56:00.811 234819 DEBUG oslo.privsep.daemon [-] privsep: reply[5813e6e7-97ee-4f33-a80b-782cd6c1c5d1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 09:56:00 compute-1 NetworkManager[48870]: <info>  [1763978160.8352] device (tap2dfea9d1-70): carrier: link connected
Nov 24 09:56:00 compute-1 nova_compute[230010]: 2025-11-24 09:56:00.835 230014 DEBUG oslo_concurrency.lockutils [None req-edf79fa3-8b3d-40ef-872f-238a87ce07d9 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lock "interface-8e009e75-a97b-4c5d-a470-5db1137cb407-None" "released" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: held 3.974s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 09:56:00 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:56:00.839 234819 DEBUG oslo.privsep.daemon [-] privsep: reply[d173bdad-96e9-4cd5-8c4a-b8a98f91a319]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 09:56:00 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:56:00.854 234803 DEBUG oslo.privsep.daemon [-] privsep: reply[be311bdc-8d86-4184-a1d7-6e953874c332]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2dfea9d1-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:94:b1:57'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 18], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 410438, 'reachable_time': 32674, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 236071, 'error': None, 'target': 'ovnmeta-2dfea9d1-73f4-435f-ade1-dce53efe0c39', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 09:56:00 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:56:00.868 234803 DEBUG oslo.privsep.daemon [-] privsep: reply[f0742820-0528-4261-8a9e-feb2bd09c64f]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe94:b157'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 410438, 'tstamp': 410438}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 236072, 'error': None, 'target': 'ovnmeta-2dfea9d1-73f4-435f-ade1-dce53efe0c39', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 09:56:00 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:56:00.887 234803 DEBUG oslo.privsep.daemon [-] privsep: reply[37ee95bc-b813-4864-80cb-408a1394c8cd]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2dfea9d1-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:94:b1:57'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 18], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 410438, 'reachable_time': 32674, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 236073, 'error': None, 'target': 'ovnmeta-2dfea9d1-73f4-435f-ade1-dce53efe0c39', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 09:56:00 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:56:00.918 234803 DEBUG oslo.privsep.daemon [-] privsep: reply[13450c87-8524-4553-9185-de5094ada784]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 09:56:00 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:56:00.971 234803 DEBUG oslo.privsep.daemon [-] privsep: reply[94cf838d-0408-4291-b03a-46eb45ac5bb7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 09:56:00 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:56:00.972 142336 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2dfea9d1-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 24 09:56:00 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:56:00.972 142336 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 24 09:56:00 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:56:00.973 142336 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2dfea9d1-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 24 09:56:00 compute-1 nova_compute[230010]: 2025-11-24 09:56:00.974 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:56:00 compute-1 NetworkManager[48870]: <info>  [1763978160.9751] manager: (tap2dfea9d1-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/40)
Nov 24 09:56:00 compute-1 kernel: tap2dfea9d1-70: entered promiscuous mode
Nov 24 09:56:00 compute-1 nova_compute[230010]: 2025-11-24 09:56:00.977 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:56:00 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:56:00.977 142336 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap2dfea9d1-70, col_values=(('external_ids', {'iface-id': '1c47826d-7d98-41f9-bde5-d6e4ced7b639'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 24 09:56:00 compute-1 nova_compute[230010]: 2025-11-24 09:56:00.978 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:56:00 compute-1 ovn_controller[132966]: 2025-11-24T09:56:00Z|00047|binding|INFO|Releasing lport 1c47826d-7d98-41f9-bde5-d6e4ced7b639 from this chassis (sb_readonly=0)
Nov 24 09:56:00 compute-1 nova_compute[230010]: 2025-11-24 09:56:00.991 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:56:00 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:56:00.992 142336 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/2dfea9d1-73f4-435f-ade1-dce53efe0c39.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/2dfea9d1-73f4-435f-ade1-dce53efe0c39.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 24 09:56:00 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:56:00.993 234803 DEBUG oslo.privsep.daemon [-] privsep: reply[5861299b-a240-4c29-bfad-f6508d974585]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 09:56:00 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:56:00.994 142336 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 24 09:56:00 compute-1 ovn_metadata_agent[142331]: global
Nov 24 09:56:00 compute-1 ovn_metadata_agent[142331]:     log         /dev/log local0 debug
Nov 24 09:56:00 compute-1 ovn_metadata_agent[142331]:     log-tag     haproxy-metadata-proxy-2dfea9d1-73f4-435f-ade1-dce53efe0c39
Nov 24 09:56:00 compute-1 ovn_metadata_agent[142331]:     user        root
Nov 24 09:56:00 compute-1 ovn_metadata_agent[142331]:     group       root
Nov 24 09:56:00 compute-1 ovn_metadata_agent[142331]:     maxconn     1024
Nov 24 09:56:00 compute-1 ovn_metadata_agent[142331]:     pidfile     /var/lib/neutron/external/pids/2dfea9d1-73f4-435f-ade1-dce53efe0c39.pid.haproxy
Nov 24 09:56:00 compute-1 ovn_metadata_agent[142331]:     daemon
Nov 24 09:56:00 compute-1 ovn_metadata_agent[142331]: 
Nov 24 09:56:00 compute-1 ovn_metadata_agent[142331]: defaults
Nov 24 09:56:00 compute-1 ovn_metadata_agent[142331]:     log global
Nov 24 09:56:00 compute-1 ovn_metadata_agent[142331]:     mode http
Nov 24 09:56:00 compute-1 ovn_metadata_agent[142331]:     option httplog
Nov 24 09:56:00 compute-1 ovn_metadata_agent[142331]:     option dontlognull
Nov 24 09:56:00 compute-1 ovn_metadata_agent[142331]:     option http-server-close
Nov 24 09:56:00 compute-1 ovn_metadata_agent[142331]:     option forwardfor
Nov 24 09:56:00 compute-1 ovn_metadata_agent[142331]:     retries                 3
Nov 24 09:56:00 compute-1 ovn_metadata_agent[142331]:     timeout http-request    30s
Nov 24 09:56:00 compute-1 ovn_metadata_agent[142331]:     timeout connect         30s
Nov 24 09:56:00 compute-1 ovn_metadata_agent[142331]:     timeout client          32s
Nov 24 09:56:00 compute-1 ovn_metadata_agent[142331]:     timeout server          32s
Nov 24 09:56:00 compute-1 ovn_metadata_agent[142331]:     timeout http-keep-alive 30s
Nov 24 09:56:00 compute-1 ovn_metadata_agent[142331]: 
Nov 24 09:56:00 compute-1 ovn_metadata_agent[142331]: 
Nov 24 09:56:00 compute-1 ovn_metadata_agent[142331]: listen listener
Nov 24 09:56:00 compute-1 ovn_metadata_agent[142331]:     bind 169.254.169.254:80
Nov 24 09:56:00 compute-1 ovn_metadata_agent[142331]:     server metadata /var/lib/neutron/metadata_proxy
Nov 24 09:56:00 compute-1 ovn_metadata_agent[142331]:     http-request add-header X-OVN-Network-ID 2dfea9d1-73f4-435f-ade1-dce53efe0c39
Nov 24 09:56:00 compute-1 ovn_metadata_agent[142331]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 24 09:56:00 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:56:00.994 142336 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-2dfea9d1-73f4-435f-ade1-dce53efe0c39', 'env', 'PROCESS_TAG=haproxy-2dfea9d1-73f4-435f-ade1-dce53efe0c39', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/2dfea9d1-73f4-435f-ade1-dce53efe0c39.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
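
The rendered config binds 169.254.169.254:80 inside the ovnmeta- namespace and proxies requests to the /var/lib/neutron/metadata_proxy UNIX socket, adding X-OVN-Network-ID so the metadata service can identify the caller's network. Stripped of the sudo and neutron-rootwrap wrapping visible in the command above, the launch reduces to roughly:

    import subprocess

    NET = '2dfea9d1-73f4-435f-ade1-dce53efe0c39'
    subprocess.check_call([
        'ip', 'netns', 'exec', 'ovnmeta-%s' % NET,
        'env', 'PROCESS_TAG=haproxy-%s' % NET,
        'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/%s.conf' % NET,
    ])  # neutron runs this through rootwrap for privilege separation
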
Nov 24 09:56:01 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Nov 24 09:56:01 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2361167424' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 24 09:56:01 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Nov 24 09:56:01 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2361167424' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
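
The df and osd pool get-quota monitor commands from client.openstack are capacity polls against the volumes pool, the pattern an RBD-backed volume service uses to report free space. Issued directly through the rados Python bindings they look roughly like this; the conffile path is assumed, the client name and command JSON come from the audit lines above:

    import json
    import rados

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf', name='client.openstack')
    cluster.connect()
    for cmd in ({"prefix": "df", "format": "json"},
                {"prefix": "osd pool get-quota", "pool": "volumes", "format": "json"}):
        ret, out, errs = cluster.mon_command(json.dumps(cmd), b'')
        print(ret, out[:120])  # 0 on success, JSON payload in out
    cluster.shutdown()
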
Nov 24 09:56:01 compute-1 nova_compute[230010]: 2025-11-24 09:56:01.170 230014 DEBUG nova.compute.manager [req-9312f730-fd44-40ee-89dd-87e6cd5b55b6 req-060b8118-81e3-4c58-9b51-1883427682a2 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 8e009e75-a97b-4c5d-a470-5db1137cb407] Received event network-vif-plugged-faa80fbe-f017-47cd-96c8-ca0747a39410 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 24 09:56:01 compute-1 nova_compute[230010]: 2025-11-24 09:56:01.171 230014 DEBUG oslo_concurrency.lockutils [req-9312f730-fd44-40ee-89dd-87e6cd5b55b6 req-060b8118-81e3-4c58-9b51-1883427682a2 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Acquiring lock "8e009e75-a97b-4c5d-a470-5db1137cb407-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 09:56:01 compute-1 nova_compute[230010]: 2025-11-24 09:56:01.172 230014 DEBUG oslo_concurrency.lockutils [req-9312f730-fd44-40ee-89dd-87e6cd5b55b6 req-060b8118-81e3-4c58-9b51-1883427682a2 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Lock "8e009e75-a97b-4c5d-a470-5db1137cb407-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 09:56:01 compute-1 nova_compute[230010]: 2025-11-24 09:56:01.172 230014 DEBUG oslo_concurrency.lockutils [req-9312f730-fd44-40ee-89dd-87e6cd5b55b6 req-060b8118-81e3-4c58-9b51-1883427682a2 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Lock "8e009e75-a97b-4c5d-a470-5db1137cb407-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 09:56:01 compute-1 nova_compute[230010]: 2025-11-24 09:56:01.173 230014 DEBUG nova.compute.manager [req-9312f730-fd44-40ee-89dd-87e6cd5b55b6 req-060b8118-81e3-4c58-9b51-1883427682a2 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 8e009e75-a97b-4c5d-a470-5db1137cb407] No waiting events found dispatching network-vif-plugged-faa80fbe-f017-47cd-96c8-ca0747a39410 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 24 09:56:01 compute-1 nova_compute[230010]: 2025-11-24 09:56:01.174 230014 WARNING nova.compute.manager [req-9312f730-fd44-40ee-89dd-87e6cd5b55b6 req-060b8118-81e3-4c58-9b51-1883427682a2 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 8e009e75-a97b-4c5d-a470-5db1137cb407] Received unexpected event network-vif-plugged-faa80fbe-f017-47cd-96c8-ca0747a39410 for instance with vm_state active and task_state None.
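
The WARNING is benign in this trace: neutron delivered network-vif-plugged, but nothing had registered a waiter for it, since the instance was already active and this attach path does not block on the event, so pop_instance_event came up empty. A conceptual sketch of that prepare/pop pattern follows; it illustrates the mechanism only and is not nova's actual code:

    import threading
    from collections import defaultdict

    _lock = threading.Lock()
    _waiters = defaultdict(dict)  # instance uuid -> {event name: Event}

    def prepare_for_event(uuid, name):
        with _lock:
            ev = threading.Event()
            _waiters[uuid][name] = ev
        return ev  # a boot or migration task would ev.wait() on this

    def dispatch_external_event(uuid, name):
        with _lock:
            ev = _waiters[uuid].pop(name, None)
        if ev is None:
            print('unexpected event %s for %s' % (name, uuid))  # the WARNING case
        else:
            ev.set()
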
Nov 24 09:56:01 compute-1 ceph-mon[80009]: pgmap v873: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 2.0 KiB/s wr, 1 op/s
Nov 24 09:56:01 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:56:01 compute-1 ceph-mon[80009]: from='client.? 192.168.122.10:0/2361167424' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 24 09:56:01 compute-1 ceph-mon[80009]: from='client.? 192.168.122.10:0/2361167424' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 24 09:56:01 compute-1 podman[236105]: 2025-11-24 09:56:01.359935635 +0000 UTC m=+0.044039199 container create 2a59cf21a7f5a89b1e1aad9ebbfb34d0daec952f197a040b20728d4d777f604b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2dfea9d1-73f4-435f-ade1-dce53efe0c39, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Nov 24 09:56:01 compute-1 systemd[1]: Started libpod-conmon-2a59cf21a7f5a89b1e1aad9ebbfb34d0daec952f197a040b20728d4d777f604b.scope.
Nov 24 09:56:01 compute-1 systemd[1]: Started libcrun container.
Nov 24 09:56:01 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/159129d6836993ad22c9e4f7333101e324e380206278a56bdc9a5ccf6ea421e5/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 24 09:56:01 compute-1 podman[236105]: 2025-11-24 09:56:01.423230832 +0000 UTC m=+0.107334436 container init 2a59cf21a7f5a89b1e1aad9ebbfb34d0daec952f197a040b20728d4d777f604b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2dfea9d1-73f4-435f-ade1-dce53efe0c39, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 24 09:56:01 compute-1 podman[236105]: 2025-11-24 09:56:01.430364546 +0000 UTC m=+0.114468110 container start 2a59cf21a7f5a89b1e1aad9ebbfb34d0daec952f197a040b20728d4d777f604b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2dfea9d1-73f4-435f-ade1-dce53efe0c39, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2)
Nov 24 09:56:01 compute-1 podman[236105]: 2025-11-24 09:56:01.335832765 +0000 UTC m=+0.019936359 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 24 09:56:01 compute-1 neutron-haproxy-ovnmeta-2dfea9d1-73f4-435f-ade1-dce53efe0c39[236121]: [NOTICE]   (236138) : New worker (236144) forked
Nov 24 09:56:01 compute-1 neutron-haproxy-ovnmeta-2dfea9d1-73f4-435f-ade1-dce53efe0c39[236121]: [NOTICE]   (236138) : Loading success.
Nov 24 09:56:01 compute-1 podman[236118]: 2025-11-24 09:56:01.455261245 +0000 UTC m=+0.057727973 container health_status 16798e42088c21440960ac0ba4f86339f9edab5c078277a1dcf4fb01c220329c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, io.buildah.version=1.41.3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Nov 24 09:56:01 compute-1 nova_compute[230010]: 2025-11-24 09:56:01.979 230014 DEBUG nova.network.neutron [req-53c9d336-2387-470f-bea6-87d4b3cc617a req-e43fcafc-9e08-4c4c-b52a-c499591a77a3 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 8e009e75-a97b-4c5d-a470-5db1137cb407] Updated VIF entry in instance network info cache for port faa80fbe-f017-47cd-96c8-ca0747a39410. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 24 09:56:01 compute-1 nova_compute[230010]: 2025-11-24 09:56:01.980 230014 DEBUG nova.network.neutron [req-53c9d336-2387-470f-bea6-87d4b3cc617a req-e43fcafc-9e08-4c4c-b52a-c499591a77a3 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 8e009e75-a97b-4c5d-a470-5db1137cb407] Updating instance_info_cache with network_info: [{"id": "e962e27f-80bf-4103-98ae-d8af84c6fc28", "address": "fa:16:3e:a4:f1:71", "network": {"id": "636fec29-e18e-45f1-aabc-369f5fd0d593", "bridge": "br-int", "label": "tempest-network-smoke--778674541", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.182", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94d069fc040647d5a6e54894eec915fe", "mtu": null, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape962e27f-80", "ovs_interfaceid": "e962e27f-80bf-4103-98ae-d8af84c6fc28", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "faa80fbe-f017-47cd-96c8-ca0747a39410", "address": "fa:16:3e:23:de:6c", "network": {"id": "2dfea9d1-73f4-435f-ade1-dce53efe0c39", "bridge": "br-int", "label": "tempest-network-smoke--1540330946", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.29", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94d069fc040647d5a6e54894eec915fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfaa80fbe-f0", "ovs_interfaceid": "faa80fbe-f017-47cd-96c8-ca0747a39410", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 24 09:56:01 compute-1 nova_compute[230010]: 2025-11-24 09:56:01.997 230014 DEBUG oslo_concurrency.lockutils [req-53c9d336-2387-470f-bea6-87d4b3cc617a req-e43fcafc-9e08-4c4c-b52a-c499591a77a3 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Releasing lock "refresh_cache-8e009e75-a97b-4c5d-a470-5db1137cb407" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 24 09:56:02 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:56:02 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:56:02 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:56:02.086 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:56:02 compute-1 nova_compute[230010]: 2025-11-24 09:56:02.352 230014 DEBUG oslo_concurrency.lockutils [None req-1f39edd7-ccec-46ee-b7bf-91daf5c48308 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Acquiring lock "interface-8e009e75-a97b-4c5d-a470-5db1137cb407-faa80fbe-f017-47cd-96c8-ca0747a39410" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 09:56:02 compute-1 nova_compute[230010]: 2025-11-24 09:56:02.353 230014 DEBUG oslo_concurrency.lockutils [None req-1f39edd7-ccec-46ee-b7bf-91daf5c48308 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lock "interface-8e009e75-a97b-4c5d-a470-5db1137cb407-faa80fbe-f017-47cd-96c8-ca0747a39410" acquired by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 09:56:02 compute-1 nova_compute[230010]: 2025-11-24 09:56:02.367 230014 DEBUG nova.objects.instance [None req-1f39edd7-ccec-46ee-b7bf-91daf5c48308 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lazy-loading 'flavor' on Instance uuid 8e009e75-a97b-4c5d-a470-5db1137cb407 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 24 09:56:02 compute-1 nova_compute[230010]: 2025-11-24 09:56:02.388 230014 DEBUG nova.virt.libvirt.vif [None req-1f39edd7-ccec-46ee-b7bf-91daf5c48308 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-24T09:55:20Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1469091475',display_name='tempest-TestNetworkBasicOps-server-1469091475',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1469091475',id=3,image_ref='6ef14bdf-4f04-4400-8040-4409d9d5271e',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJZZQMjyTSwgkfidZfPhBPgBcV62YzWExHHXsl1BnsLfJjAX1c531QA8puLkgpD93eEa7lPae/Gh1kFnVkWZAW6FTPgZg7BzeD7RovkQcC7HReAVJUg962qa1kvY0rkgvg==',key_name='tempest-TestNetworkBasicOps-853741544',keypairs=<?>,launch_index=0,launched_at=2025-11-24T09:55:27Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='94d069fc040647d5a6e54894eec915fe',ramdisk_id='',reservation_id='r-mfq37y5p',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6ef14bdf-4f04-4400-8040-4409d9d5271e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1844071378',owner_user_name='tempest-TestNetworkBasicOps-1844071378-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-24T09:55:27Z,user_data=None,user_id='43f79ff3105e4372a3c095e8057d4f1f',uuid=8e009e75-a97b-4c5d-a470-5db1137cb407,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "faa80fbe-f017-47cd-96c8-ca0747a39410", "address": "fa:16:3e:23:de:6c", "network": {"id": "2dfea9d1-73f4-435f-ade1-dce53efe0c39", "bridge": "br-int", "label": "tempest-network-smoke--1540330946", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.29", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94d069fc040647d5a6e54894eec915fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfaa80fbe-f0", "ovs_interfaceid": "faa80fbe-f017-47cd-96c8-ca0747a39410", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 24 09:56:02 compute-1 nova_compute[230010]: 2025-11-24 09:56:02.388 230014 DEBUG nova.network.os_vif_util [None req-1f39edd7-ccec-46ee-b7bf-91daf5c48308 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Converting VIF {"id": "faa80fbe-f017-47cd-96c8-ca0747a39410", "address": "fa:16:3e:23:de:6c", "network": {"id": "2dfea9d1-73f4-435f-ade1-dce53efe0c39", "bridge": "br-int", "label": "tempest-network-smoke--1540330946", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.29", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94d069fc040647d5a6e54894eec915fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfaa80fbe-f0", "ovs_interfaceid": "faa80fbe-f017-47cd-96c8-ca0747a39410", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 24 09:56:02 compute-1 nova_compute[230010]: 2025-11-24 09:56:02.389 230014 DEBUG nova.network.os_vif_util [None req-1f39edd7-ccec-46ee-b7bf-91daf5c48308 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:23:de:6c,bridge_name='br-int',has_traffic_filtering=True,id=faa80fbe-f017-47cd-96c8-ca0747a39410,network=Network(2dfea9d1-73f4-435f-ade1-dce53efe0c39),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfaa80fbe-f0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
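
The two DEBUG lines above show nova_to_osvif_vif translating the Neutron VIF dict into the typed VIFOpenVSwitch object. A rough, partial sketch of building the same object by hand with the os_vif library, using values copied from the log (nova's converter handles many more port types and fields):

    import os_vif
    from os_vif.objects import network as osv_network
    from os_vif.objects import vif as osv_vif

    os_vif.initialize()  # registers the os-vif versioned object classes

    network = osv_network.Network(
        id='2dfea9d1-73f4-435f-ade1-dce53efe0c39',
        bridge='br-int',
        label='tempest-network-smoke--1540330946',
        mtu=1442)

    vif = osv_vif.VIFOpenVSwitch(
        id='faa80fbe-f017-47cd-96c8-ca0747a39410',
        address='fa:16:3e:23:de:6c',
        network=network,
        plugin='ovs',
        port_profile=osv_vif.VIFPortProfileOpenVSwitch(
            interface_id='faa80fbe-f017-47cd-96c8-ca0747a39410'),
        has_traffic_filtering=True,   # from details["port_filter"]
        preserve_on_delete=False,
        active=False,
        vif_name='tapfaa80fbe-f0',
        bridge_name='br-int')
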
Nov 24 09:56:02 compute-1 nova_compute[230010]: 2025-11-24 09:56:02.393 230014 DEBUG nova.virt.libvirt.guest [None req-1f39edd7-ccec-46ee-b7bf-91daf5c48308 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:23:de:6c"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapfaa80fbe-f0"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Nov 24 09:56:02 compute-1 nova_compute[230010]: 2025-11-24 09:56:02.395 230014 DEBUG nova.virt.libvirt.guest [None req-1f39edd7-ccec-46ee-b7bf-91daf5c48308 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:23:de:6c"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapfaa80fbe-f0"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Nov 24 09:56:02 compute-1 nova_compute[230010]: 2025-11-24 09:56:02.397 230014 DEBUG nova.virt.libvirt.driver [None req-1f39edd7-ccec-46ee-b7bf-91daf5c48308 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Attempting to detach device tapfaa80fbe-f0 from instance 8e009e75-a97b-4c5d-a470-5db1137cb407 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Nov 24 09:56:02 compute-1 nova_compute[230010]: 2025-11-24 09:56:02.397 230014 DEBUG nova.virt.libvirt.guest [None req-1f39edd7-ccec-46ee-b7bf-91daf5c48308 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] detach device xml: <interface type="ethernet">
Nov 24 09:56:02 compute-1 nova_compute[230010]:   <mac address="fa:16:3e:23:de:6c"/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:   <model type="virtio"/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:   <driver name="vhost" rx_queue_size="512"/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:   <mtu size="1442"/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:   <target dev="tapfaa80fbe-f0"/>
Nov 24 09:56:02 compute-1 nova_compute[230010]: </interface>
Nov 24 09:56:02 compute-1 nova_compute[230010]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
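
detach_device here wraps libvirt's detachDeviceFlags; this first pass only edits the persistent definition. A minimal sketch with the libvirt Python binding, reusing the device XML and instance UUID from the log (the qemu:///system URI is an assumption):

    import libvirt

    device_xml = """<interface type="ethernet">
      <mac address="fa:16:3e:23:de:6c"/>
      <model type="virtio"/>
      <driver name="vhost" rx_queue_size="512"/>
      <mtu size="1442"/>
      <target dev="tapfaa80fbe-f0"/>
    </interface>"""

    conn = libvirt.open('qemu:///system')  # assumed connection URI
    dom = conn.lookupByUUIDString('8e009e75-a97b-4c5d-a470-5db1137cb407')
    # AFFECT_CONFIG touches only the persistent definition; the running
    # guest is handled separately with AFFECT_LIVE (logged further down).
    dom.detachDeviceFlags(device_xml, libvirt.VIR_DOMAIN_AFFECT_CONFIG)
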
Nov 24 09:56:02 compute-1 nova_compute[230010]: 2025-11-24 09:56:02.405 230014 DEBUG nova.virt.libvirt.guest [None req-1f39edd7-ccec-46ee-b7bf-91daf5c48308 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:23:de:6c"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapfaa80fbe-f0"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Nov 24 09:56:02 compute-1 nova_compute[230010]: 2025-11-24 09:56:02.409 230014 DEBUG nova.virt.libvirt.guest [None req-1f39edd7-ccec-46ee-b7bf-91daf5c48308 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:23:de:6c"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapfaa80fbe-f0"/></interface> not found in domain: <domain type='kvm' id='2'>
Nov 24 09:56:02 compute-1 nova_compute[230010]:   <name>instance-00000003</name>
Nov 24 09:56:02 compute-1 nova_compute[230010]:   <uuid>8e009e75-a97b-4c5d-a470-5db1137cb407</uuid>
Nov 24 09:56:02 compute-1 nova_compute[230010]:   <metadata>
Nov 24 09:56:02 compute-1 nova_compute[230010]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 24 09:56:02 compute-1 nova_compute[230010]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:   <nova:name>tempest-TestNetworkBasicOps-server-1469091475</nova:name>
Nov 24 09:56:02 compute-1 nova_compute[230010]:   <nova:creationTime>2025-11-24 09:56:00</nova:creationTime>
Nov 24 09:56:02 compute-1 nova_compute[230010]:   <nova:flavor name="m1.nano">
Nov 24 09:56:02 compute-1 nova_compute[230010]:     <nova:memory>128</nova:memory>
Nov 24 09:56:02 compute-1 nova_compute[230010]:     <nova:disk>1</nova:disk>
Nov 24 09:56:02 compute-1 nova_compute[230010]:     <nova:swap>0</nova:swap>
Nov 24 09:56:02 compute-1 nova_compute[230010]:     <nova:ephemeral>0</nova:ephemeral>
Nov 24 09:56:02 compute-1 nova_compute[230010]:     <nova:vcpus>1</nova:vcpus>
Nov 24 09:56:02 compute-1 nova_compute[230010]:   </nova:flavor>
Nov 24 09:56:02 compute-1 nova_compute[230010]:   <nova:owner>
Nov 24 09:56:02 compute-1 nova_compute[230010]:     <nova:user uuid="43f79ff3105e4372a3c095e8057d4f1f">tempest-TestNetworkBasicOps-1844071378-project-member</nova:user>
Nov 24 09:56:02 compute-1 nova_compute[230010]:     <nova:project uuid="94d069fc040647d5a6e54894eec915fe">tempest-TestNetworkBasicOps-1844071378</nova:project>
Nov 24 09:56:02 compute-1 nova_compute[230010]:   </nova:owner>
Nov 24 09:56:02 compute-1 nova_compute[230010]:   <nova:root type="image" uuid="6ef14bdf-4f04-4400-8040-4409d9d5271e"/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:   <nova:ports>
Nov 24 09:56:02 compute-1 nova_compute[230010]:     <nova:port uuid="e962e27f-80bf-4103-98ae-d8af84c6fc28">
Nov 24 09:56:02 compute-1 nova_compute[230010]:       <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:     </nova:port>
Nov 24 09:56:02 compute-1 nova_compute[230010]:     <nova:port uuid="faa80fbe-f017-47cd-96c8-ca0747a39410">
Nov 24 09:56:02 compute-1 nova_compute[230010]:       <nova:ip type="fixed" address="10.100.0.29" ipVersion="4"/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:     </nova:port>
Nov 24 09:56:02 compute-1 nova_compute[230010]:   </nova:ports>
Nov 24 09:56:02 compute-1 nova_compute[230010]: </nova:instance>
Nov 24 09:56:02 compute-1 nova_compute[230010]:   </metadata>
Nov 24 09:56:02 compute-1 nova_compute[230010]:   <memory unit='KiB'>131072</memory>
Nov 24 09:56:02 compute-1 nova_compute[230010]:   <currentMemory unit='KiB'>131072</currentMemory>
Nov 24 09:56:02 compute-1 nova_compute[230010]:   <vcpu placement='static'>1</vcpu>
Nov 24 09:56:02 compute-1 nova_compute[230010]:   <resource>
Nov 24 09:56:02 compute-1 nova_compute[230010]:     <partition>/machine</partition>
Nov 24 09:56:02 compute-1 nova_compute[230010]:   </resource>
Nov 24 09:56:02 compute-1 nova_compute[230010]:   <sysinfo type='smbios'>
Nov 24 09:56:02 compute-1 nova_compute[230010]:     <system>
Nov 24 09:56:02 compute-1 nova_compute[230010]:       <entry name='manufacturer'>RDO</entry>
Nov 24 09:56:02 compute-1 nova_compute[230010]:       <entry name='product'>OpenStack Compute</entry>
Nov 24 09:56:02 compute-1 nova_compute[230010]:       <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 24 09:56:02 compute-1 nova_compute[230010]:       <entry name='serial'>8e009e75-a97b-4c5d-a470-5db1137cb407</entry>
Nov 24 09:56:02 compute-1 nova_compute[230010]:       <entry name='uuid'>8e009e75-a97b-4c5d-a470-5db1137cb407</entry>
Nov 24 09:56:02 compute-1 nova_compute[230010]:       <entry name='family'>Virtual Machine</entry>
Nov 24 09:56:02 compute-1 nova_compute[230010]:     </system>
Nov 24 09:56:02 compute-1 nova_compute[230010]:   </sysinfo>
Nov 24 09:56:02 compute-1 nova_compute[230010]:   <os>
Nov 24 09:56:02 compute-1 nova_compute[230010]:     <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Nov 24 09:56:02 compute-1 nova_compute[230010]:     <boot dev='hd'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:     <smbios mode='sysinfo'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:   </os>
Nov 24 09:56:02 compute-1 nova_compute[230010]:   <features>
Nov 24 09:56:02 compute-1 nova_compute[230010]:     <acpi/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:     <apic/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:     <vmcoreinfo state='on'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:   </features>
Nov 24 09:56:02 compute-1 nova_compute[230010]:   <cpu mode='custom' match='exact' check='full'>
Nov 24 09:56:02 compute-1 nova_compute[230010]:     <model fallback='forbid'>EPYC-Rome</model>
Nov 24 09:56:02 compute-1 nova_compute[230010]:     <vendor>AMD</vendor>
Nov 24 09:56:02 compute-1 nova_compute[230010]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:     <feature policy='require' name='x2apic'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:     <feature policy='require' name='tsc-deadline'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:     <feature policy='require' name='hypervisor'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:     <feature policy='require' name='tsc_adjust'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:     <feature policy='require' name='spec-ctrl'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:     <feature policy='require' name='stibp'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:     <feature policy='require' name='ssbd'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:     <feature policy='require' name='cmp_legacy'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:     <feature policy='require' name='overflow-recov'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:     <feature policy='require' name='succor'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:     <feature policy='require' name='ibrs'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:     <feature policy='require' name='amd-ssbd'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:     <feature policy='require' name='virt-ssbd'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:     <feature policy='disable' name='lbrv'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:     <feature policy='disable' name='tsc-scale'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:     <feature policy='disable' name='vmcb-clean'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:     <feature policy='disable' name='flushbyasid'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:     <feature policy='disable' name='pause-filter'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:     <feature policy='disable' name='pfthreshold'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:     <feature policy='disable' name='svme-addr-chk'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:     <feature policy='require' name='lfence-always-serializing'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:     <feature policy='disable' name='xsaves'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:     <feature policy='disable' name='svm'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:     <feature policy='require' name='topoext'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:     <feature policy='disable' name='npt'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:     <feature policy='disable' name='nrip-save'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:   </cpu>
Nov 24 09:56:02 compute-1 nova_compute[230010]:   <clock offset='utc'>
Nov 24 09:56:02 compute-1 nova_compute[230010]:     <timer name='pit' tickpolicy='delay'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:     <timer name='rtc' tickpolicy='catchup'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:     <timer name='hpet' present='no'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:   </clock>
Nov 24 09:56:02 compute-1 nova_compute[230010]:   <on_poweroff>destroy</on_poweroff>
Nov 24 09:56:02 compute-1 nova_compute[230010]:   <on_reboot>restart</on_reboot>
Nov 24 09:56:02 compute-1 nova_compute[230010]:   <on_crash>destroy</on_crash>
Nov 24 09:56:02 compute-1 nova_compute[230010]:   <devices>
Nov 24 09:56:02 compute-1 nova_compute[230010]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 24 09:56:02 compute-1 nova_compute[230010]:     <disk type='network' device='disk'>
Nov 24 09:56:02 compute-1 nova_compute[230010]:       <driver name='qemu' type='raw' cache='none'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:       <auth username='openstack'>
Nov 24 09:56:02 compute-1 nova_compute[230010]:         <secret type='ceph' uuid='84a084c3-61a7-5de7-8207-1f88efa59a64'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:       </auth>
Nov 24 09:56:02 compute-1 nova_compute[230010]:       <source protocol='rbd' name='vms/8e009e75-a97b-4c5d-a470-5db1137cb407_disk' index='2'>
Nov 24 09:56:02 compute-1 nova_compute[230010]:         <host name='192.168.122.100' port='6789'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:         <host name='192.168.122.102' port='6789'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:         <host name='192.168.122.101' port='6789'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:       </source>
Nov 24 09:56:02 compute-1 nova_compute[230010]:       <target dev='vda' bus='virtio'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:       <alias name='virtio-disk0'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:     </disk>
Nov 24 09:56:02 compute-1 nova_compute[230010]:     <disk type='network' device='cdrom'>
Nov 24 09:56:02 compute-1 nova_compute[230010]:       <driver name='qemu' type='raw' cache='none'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:       <auth username='openstack'>
Nov 24 09:56:02 compute-1 nova_compute[230010]:         <secret type='ceph' uuid='84a084c3-61a7-5de7-8207-1f88efa59a64'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:       </auth>
Nov 24 09:56:02 compute-1 nova_compute[230010]:       <source protocol='rbd' name='vms/8e009e75-a97b-4c5d-a470-5db1137cb407_disk.config' index='1'>
Nov 24 09:56:02 compute-1 nova_compute[230010]:         <host name='192.168.122.100' port='6789'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:         <host name='192.168.122.102' port='6789'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:         <host name='192.168.122.101' port='6789'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:       </source>
Nov 24 09:56:02 compute-1 nova_compute[230010]:       <target dev='sda' bus='sata'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:       <readonly/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:       <alias name='sata0-0-0'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:     </disk>
Nov 24 09:56:02 compute-1 nova_compute[230010]:     <controller type='pci' index='0' model='pcie-root'>
Nov 24 09:56:02 compute-1 nova_compute[230010]:       <alias name='pcie.0'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:     </controller>
Nov 24 09:56:02 compute-1 nova_compute[230010]:     <controller type='pci' index='1' model='pcie-root-port'>
Nov 24 09:56:02 compute-1 nova_compute[230010]:       <model name='pcie-root-port'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:       <target chassis='1' port='0x10'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:       <alias name='pci.1'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:     </controller>
Nov 24 09:56:02 compute-1 nova_compute[230010]:     <controller type='pci' index='2' model='pcie-root-port'>
Nov 24 09:56:02 compute-1 nova_compute[230010]:       <model name='pcie-root-port'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:       <target chassis='2' port='0x11'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:       <alias name='pci.2'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:     </controller>
Nov 24 09:56:02 compute-1 nova_compute[230010]:     <controller type='pci' index='3' model='pcie-root-port'>
Nov 24 09:56:02 compute-1 nova_compute[230010]:       <model name='pcie-root-port'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:       <target chassis='3' port='0x12'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:       <alias name='pci.3'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:     </controller>
Nov 24 09:56:02 compute-1 nova_compute[230010]:     <controller type='pci' index='4' model='pcie-root-port'>
Nov 24 09:56:02 compute-1 nova_compute[230010]:       <model name='pcie-root-port'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:       <target chassis='4' port='0x13'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:       <alias name='pci.4'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:     </controller>
Nov 24 09:56:02 compute-1 nova_compute[230010]:     <controller type='pci' index='5' model='pcie-root-port'>
Nov 24 09:56:02 compute-1 nova_compute[230010]:       <model name='pcie-root-port'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:       <target chassis='5' port='0x14'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:       <alias name='pci.5'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:     </controller>
Nov 24 09:56:02 compute-1 nova_compute[230010]:     <controller type='pci' index='6' model='pcie-root-port'>
Nov 24 09:56:02 compute-1 nova_compute[230010]:       <model name='pcie-root-port'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:       <target chassis='6' port='0x15'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:       <alias name='pci.6'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:     </controller>
Nov 24 09:56:02 compute-1 nova_compute[230010]:     <controller type='pci' index='7' model='pcie-root-port'>
Nov 24 09:56:02 compute-1 nova_compute[230010]:       <model name='pcie-root-port'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:       <target chassis='7' port='0x16'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:       <alias name='pci.7'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:     </controller>
Nov 24 09:56:02 compute-1 nova_compute[230010]:     <controller type='pci' index='8' model='pcie-root-port'>
Nov 24 09:56:02 compute-1 nova_compute[230010]:       <model name='pcie-root-port'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:       <target chassis='8' port='0x17'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:       <alias name='pci.8'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:     </controller>
Nov 24 09:56:02 compute-1 nova_compute[230010]:     <controller type='pci' index='9' model='pcie-root-port'>
Nov 24 09:56:02 compute-1 nova_compute[230010]:       <model name='pcie-root-port'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:       <target chassis='9' port='0x18'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:       <alias name='pci.9'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:     </controller>
Nov 24 09:56:02 compute-1 nova_compute[230010]:     <controller type='pci' index='10' model='pcie-root-port'>
Nov 24 09:56:02 compute-1 nova_compute[230010]:       <model name='pcie-root-port'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:       <target chassis='10' port='0x19'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:       <alias name='pci.10'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:     </controller>
Nov 24 09:56:02 compute-1 nova_compute[230010]:     <controller type='pci' index='11' model='pcie-root-port'>
Nov 24 09:56:02 compute-1 nova_compute[230010]:       <model name='pcie-root-port'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:       <target chassis='11' port='0x1a'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:       <alias name='pci.11'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:     </controller>
Nov 24 09:56:02 compute-1 nova_compute[230010]:     <controller type='pci' index='12' model='pcie-root-port'>
Nov 24 09:56:02 compute-1 nova_compute[230010]:       <model name='pcie-root-port'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:       <target chassis='12' port='0x1b'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:       <alias name='pci.12'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:     </controller>
Nov 24 09:56:02 compute-1 nova_compute[230010]:     <controller type='pci' index='13' model='pcie-root-port'>
Nov 24 09:56:02 compute-1 nova_compute[230010]:       <model name='pcie-root-port'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:       <target chassis='13' port='0x1c'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:       <alias name='pci.13'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:     </controller>
Nov 24 09:56:02 compute-1 nova_compute[230010]:     <controller type='pci' index='14' model='pcie-root-port'>
Nov 24 09:56:02 compute-1 nova_compute[230010]:       <model name='pcie-root-port'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:       <target chassis='14' port='0x1d'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:       <alias name='pci.14'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:     </controller>
Nov 24 09:56:02 compute-1 nova_compute[230010]:     <controller type='pci' index='15' model='pcie-root-port'>
Nov 24 09:56:02 compute-1 nova_compute[230010]:       <model name='pcie-root-port'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:       <target chassis='15' port='0x1e'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:       <alias name='pci.15'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:     </controller>
Nov 24 09:56:02 compute-1 nova_compute[230010]:     <controller type='pci' index='16' model='pcie-root-port'>
Nov 24 09:56:02 compute-1 nova_compute[230010]:       <model name='pcie-root-port'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:       <target chassis='16' port='0x1f'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:       <alias name='pci.16'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:     </controller>
Nov 24 09:56:02 compute-1 nova_compute[230010]:     <controller type='pci' index='17' model='pcie-root-port'>
Nov 24 09:56:02 compute-1 nova_compute[230010]:       <model name='pcie-root-port'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:       <target chassis='17' port='0x20'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:       <alias name='pci.17'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:     </controller>
Nov 24 09:56:02 compute-1 nova_compute[230010]:     <controller type='pci' index='18' model='pcie-root-port'>
Nov 24 09:56:02 compute-1 nova_compute[230010]:       <model name='pcie-root-port'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:       <target chassis='18' port='0x21'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:       <alias name='pci.18'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:     </controller>
Nov 24 09:56:02 compute-1 nova_compute[230010]:     <controller type='pci' index='19' model='pcie-root-port'>
Nov 24 09:56:02 compute-1 nova_compute[230010]:       <model name='pcie-root-port'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:       <target chassis='19' port='0x22'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:       <alias name='pci.19'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:     </controller>
Nov 24 09:56:02 compute-1 nova_compute[230010]:     <controller type='pci' index='20' model='pcie-root-port'>
Nov 24 09:56:02 compute-1 nova_compute[230010]:       <model name='pcie-root-port'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:       <target chassis='20' port='0x23'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:       <alias name='pci.20'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:     </controller>
Nov 24 09:56:02 compute-1 nova_compute[230010]:     <controller type='pci' index='21' model='pcie-root-port'>
Nov 24 09:56:02 compute-1 nova_compute[230010]:       <model name='pcie-root-port'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:       <target chassis='21' port='0x24'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:       <alias name='pci.21'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:     </controller>
Nov 24 09:56:02 compute-1 nova_compute[230010]:     <controller type='pci' index='22' model='pcie-root-port'>
Nov 24 09:56:02 compute-1 nova_compute[230010]:       <model name='pcie-root-port'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:       <target chassis='22' port='0x25'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:       <alias name='pci.22'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:     </controller>
Nov 24 09:56:02 compute-1 nova_compute[230010]:     <controller type='pci' index='23' model='pcie-root-port'>
Nov 24 09:56:02 compute-1 nova_compute[230010]:       <model name='pcie-root-port'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:       <target chassis='23' port='0x26'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:       <alias name='pci.23'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:     </controller>
Nov 24 09:56:02 compute-1 nova_compute[230010]:     <controller type='pci' index='24' model='pcie-root-port'>
Nov 24 09:56:02 compute-1 nova_compute[230010]:       <model name='pcie-root-port'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:       <target chassis='24' port='0x27'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:       <alias name='pci.24'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:     </controller>
Nov 24 09:56:02 compute-1 nova_compute[230010]:     <controller type='pci' index='25' model='pcie-root-port'>
Nov 24 09:56:02 compute-1 nova_compute[230010]:       <model name='pcie-root-port'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:       <target chassis='25' port='0x28'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:       <alias name='pci.25'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:     </controller>
Nov 24 09:56:02 compute-1 nova_compute[230010]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Nov 24 09:56:02 compute-1 nova_compute[230010]:       <model name='pcie-pci-bridge'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:       <alias name='pci.26'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:     </controller>
Nov 24 09:56:02 compute-1 nova_compute[230010]:     <controller type='usb' index='0' model='piix3-uhci'>
Nov 24 09:56:02 compute-1 nova_compute[230010]:       <alias name='usb'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:     </controller>
Nov 24 09:56:02 compute-1 nova_compute[230010]:     <controller type='sata' index='0'>
Nov 24 09:56:02 compute-1 nova_compute[230010]:       <alias name='ide'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:     </controller>
Nov 24 09:56:02 compute-1 nova_compute[230010]:     <interface type='ethernet'>
Nov 24 09:56:02 compute-1 nova_compute[230010]:       <mac address='fa:16:3e:a4:f1:71'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:       <target dev='tape962e27f-80'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:       <model type='virtio'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:       <driver name='vhost' rx_queue_size='512'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:       <mtu size='1442'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:       <alias name='net0'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:     </interface>
Nov 24 09:56:02 compute-1 nova_compute[230010]:     <interface type='ethernet'>
Nov 24 09:56:02 compute-1 nova_compute[230010]:       <mac address='fa:16:3e:23:de:6c'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:       <target dev='tapfaa80fbe-f0'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:       <model type='virtio'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:       <driver name='vhost' rx_queue_size='512'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:       <mtu size='1442'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:       <alias name='net1'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:       <address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:     </interface>
Nov 24 09:56:02 compute-1 nova_compute[230010]:     <serial type='pty'>
Nov 24 09:56:02 compute-1 nova_compute[230010]:       <source path='/dev/pts/0'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:       <log file='/var/lib/nova/instances/8e009e75-a97b-4c5d-a470-5db1137cb407/console.log' append='off'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:       <target type='isa-serial' port='0'>
Nov 24 09:56:02 compute-1 nova_compute[230010]:         <model name='isa-serial'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:       </target>
Nov 24 09:56:02 compute-1 nova_compute[230010]:       <alias name='serial0'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:     </serial>
Nov 24 09:56:02 compute-1 nova_compute[230010]:     <console type='pty' tty='/dev/pts/0'>
Nov 24 09:56:02 compute-1 nova_compute[230010]:       <source path='/dev/pts/0'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:       <log file='/var/lib/nova/instances/8e009e75-a97b-4c5d-a470-5db1137cb407/console.log' append='off'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:       <target type='serial' port='0'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:       <alias name='serial0'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:     </console>
Nov 24 09:56:02 compute-1 nova_compute[230010]:     <input type='tablet' bus='usb'>
Nov 24 09:56:02 compute-1 nova_compute[230010]:       <alias name='input0'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:       <address type='usb' bus='0' port='1'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:     </input>
Nov 24 09:56:02 compute-1 nova_compute[230010]:     <input type='mouse' bus='ps2'>
Nov 24 09:56:02 compute-1 nova_compute[230010]:       <alias name='input1'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:     </input>
Nov 24 09:56:02 compute-1 nova_compute[230010]:     <input type='keyboard' bus='ps2'>
Nov 24 09:56:02 compute-1 nova_compute[230010]:       <alias name='input2'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:     </input>
Nov 24 09:56:02 compute-1 nova_compute[230010]:     <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Nov 24 09:56:02 compute-1 nova_compute[230010]:       <listen type='address' address='::0'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:     </graphics>
Nov 24 09:56:02 compute-1 nova_compute[230010]:     <audio id='1' type='none'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:     <video>
Nov 24 09:56:02 compute-1 nova_compute[230010]:       <model type='virtio' heads='1' primary='yes'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:       <alias name='video0'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:     </video>
Nov 24 09:56:02 compute-1 nova_compute[230010]:     <watchdog model='itco' action='reset'>
Nov 24 09:56:02 compute-1 nova_compute[230010]:       <alias name='watchdog0'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:     </watchdog>
Nov 24 09:56:02 compute-1 nova_compute[230010]:     <memballoon model='virtio'>
Nov 24 09:56:02 compute-1 nova_compute[230010]:       <stats period='10'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:       <alias name='balloon0'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:     </memballoon>
Nov 24 09:56:02 compute-1 nova_compute[230010]:     <rng model='virtio'>
Nov 24 09:56:02 compute-1 nova_compute[230010]:       <backend model='random'>/dev/urandom</backend>
Nov 24 09:56:02 compute-1 nova_compute[230010]:       <alias name='rng0'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:     </rng>
Nov 24 09:56:02 compute-1 nova_compute[230010]:   </devices>
Nov 24 09:56:02 compute-1 nova_compute[230010]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Nov 24 09:56:02 compute-1 nova_compute[230010]:     <label>system_u:system_r:svirt_t:s0:c684,c1005</label>
Nov 24 09:56:02 compute-1 nova_compute[230010]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c684,c1005</imagelabel>
Nov 24 09:56:02 compute-1 nova_compute[230010]:   </seclabel>
Nov 24 09:56:02 compute-1 nova_compute[230010]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Nov 24 09:56:02 compute-1 nova_compute[230010]:     <label>+107:+107</label>
Nov 24 09:56:02 compute-1 nova_compute[230010]:     <imagelabel>+107:+107</imagelabel>
Nov 24 09:56:02 compute-1 nova_compute[230010]:   </seclabel>
Nov 24 09:56:02 compute-1 nova_compute[230010]: </domain>
Nov 24 09:56:02 compute-1 nova_compute[230010]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
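
Note the apparent contradiction resolved in the next line: get_interface_by_cfg reporting "not found in domain" against the persistent config is the success condition here, since the interface's absence proves the detach took effect. Conceptually the lookup walks the domain's <interface> elements; a simplified sketch (nova actually compares parsed config objects, not raw XML):

    from lxml import etree

    def find_interface(domain_xml, mac, dev):
        root = etree.fromstring(domain_xml.encode())
        for iface in root.findall('./devices/interface'):
            got_mac = iface.find('mac').get('address')  # always present
            target = iface.find('target')
            got_dev = target.get('dev') if target is not None else None
            if got_mac == mac and got_dev == dev:
                return iface
        return None  # None == the "not found in domain" DEBUG above

    # find_interface(xml, 'fa:16:3e:23:de:6c', 'tapfaa80fbe-f0') returns
    # None once the config no longer carries the tap device.
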
Nov 24 09:56:02 compute-1 nova_compute[230010]: 2025-11-24 09:56:02.410 230014 INFO nova.virt.libvirt.driver [None req-1f39edd7-ccec-46ee-b7bf-91daf5c48308 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Successfully detached device tapfaa80fbe-f0 from instance 8e009e75-a97b-4c5d-a470-5db1137cb407 from the persistent domain config.
Nov 24 09:56:02 compute-1 nova_compute[230010]: 2025-11-24 09:56:02.411 230014 DEBUG nova.virt.libvirt.driver [None req-1f39edd7-ccec-46ee-b7bf-91daf5c48308 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] (1/8): Attempting to detach device tapfaa80fbe-f0 with device alias net1 from instance 8e009e75-a97b-4c5d-a470-5db1137cb407 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Nov 24 09:56:02 compute-1 nova_compute[230010]: 2025-11-24 09:56:02.411 230014 DEBUG nova.virt.libvirt.guest [None req-1f39edd7-ccec-46ee-b7bf-91daf5c48308 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] detach device xml: <interface type="ethernet">
Nov 24 09:56:02 compute-1 nova_compute[230010]:   <mac address="fa:16:3e:23:de:6c"/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:   <model type="virtio"/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:   <driver name="vhost" rx_queue_size="512"/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:   <mtu size="1442"/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:   <target dev="tapfaa80fbe-f0"/>
Nov 24 09:56:02 compute-1 nova_compute[230010]: </interface>
Nov 24 09:56:02 compute-1 nova_compute[230010]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
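
The live-config pass submits the same XML with VIR_DOMAIN_AFFECT_LIVE, but a running guest detaches asynchronously, so nova retries up to the count in the "(1/8)" marker, waiting for a device-removed event between attempts. A hedged sketch of that loop; the function names and 20-second timeout are illustrative, not nova's code:

    import libvirt

    MAX_ATTEMPTS = 8  # mirrors the "(1/8)" counter in the log

    def detach_live_with_retry(dom, device_xml, wait_for_removal):
        """wait_for_removal(timeout) -> bool; e.g. backed by the libvirt
        DeviceRemovedEvent shown a few lines below."""
        for attempt in range(1, MAX_ATTEMPTS + 1):
            dom.detachDeviceFlags(device_xml, libvirt.VIR_DOMAIN_AFFECT_LIVE)
            if wait_for_removal(timeout=20):
                return  # the guest released the device
        raise RuntimeError('guest never reported the device removed')
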
Nov 24 09:56:02 compute-1 kernel: tapfaa80fbe-f0 (unregistering): left promiscuous mode
Nov 24 09:56:02 compute-1 NetworkManager[48870]: <info>  [1763978162.4568] device (tapfaa80fbe-f0): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 24 09:56:02 compute-1 ovn_controller[132966]: 2025-11-24T09:56:02Z|00048|binding|INFO|Releasing lport faa80fbe-f017-47cd-96c8-ca0747a39410 from this chassis (sb_readonly=0)
Nov 24 09:56:02 compute-1 ovn_controller[132966]: 2025-11-24T09:56:02Z|00049|binding|INFO|Setting lport faa80fbe-f017-47cd-96c8-ca0747a39410 down in Southbound
Nov 24 09:56:02 compute-1 ovn_controller[132966]: 2025-11-24T09:56:02Z|00050|binding|INFO|Removing iface tapfaa80fbe-f0 ovn-installed in OVS
Nov 24 09:56:02 compute-1 nova_compute[230010]: 2025-11-24 09:56:02.463 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:56:02 compute-1 nova_compute[230010]: 2025-11-24 09:56:02.469 230014 DEBUG nova.virt.libvirt.driver [None req-7781f489-90a3-45f2-98f2-bf7ccb5c1239 - - - - - -] Received event <DeviceRemovedEvent: 1763978162.469293, 8e009e75-a97b-4c5d-a470-5db1137cb407 => net1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
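
That DeviceRemovedEvent arrives via the libvirt event API, keyed by the device alias ('net1'). A minimal, self-contained sketch of receiving it; nova routes the callback through its own dispatcher, and the URI is again an assumption:

    import threading
    import libvirt

    libvirt.virEventRegisterDefaultImpl()  # must precede libvirt.open()

    def _run_event_loop():
        while True:
            libvirt.virEventRunDefaultImpl()

    threading.Thread(target=_run_event_loop, daemon=True).start()
    removed = threading.Event()

    def on_device_removed(conn, dom, dev_alias, opaque):
        if dev_alias == 'net1':  # alias from the log line above
            removed.set()

    conn = libvirt.open('qemu:///system')
    conn.domainEventRegisterAny(
        None,  # None = events from every domain on this connection
        libvirt.VIR_DOMAIN_EVENT_ID_DEVICE_REMOVED,
        on_device_removed,
        None)
    # removed.wait(timeout=20) would implement wait_for_removal() above.
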
Nov 24 09:56:02 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:56:02.469 142336 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:23:de:6c 10.100.0.29'], port_security=['fa:16:3e:23:de:6c 10.100.0.29'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.29/28', 'neutron:device_id': '8e009e75-a97b-4c5d-a470-5db1137cb407', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2dfea9d1-73f4-435f-ade1-dce53efe0c39', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '94d069fc040647d5a6e54894eec915fe', 'neutron:revision_number': '4', 'neutron:security_group_ids': '33c3a403-57a0-4b88-8817-f12f4bfc92ae', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d3401f0a-5661-425e-b817-1a9ea0eafa9c, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5c78678ac0>], logical_port=faa80fbe-f017-47cd-96c8-ca0747a39410) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5c78678ac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
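
The "Matched UPDATE: PortBindingUpdatedEvent(...)" line is ovsdbapp's matcher firing as the Southbound Port_Binding row flips up=[True] -> [False]. A stripped-down sketch of such an event class on the same ovsdbapp base; the handler body is illustrative:

    from ovsdbapp.backend.ovs_idl import event as row_event

    class PortBindingUpdatedEvent(row_event.RowEvent):
        """Fires on updates to Port_Binding rows in the OVN Southbound DB."""

        def __init__(self):
            # events=('update',), table='Port_Binding', conditions=None --
            # the same triple shown in the matched-event repr above.
            super().__init__((self.ROW_UPDATE,), 'Port_Binding', None)

        def run(self, event, row, old):
            # 'old' carries the previous values (up=[True] in the log).
            if getattr(row, 'up', None) == [False]:
                print('lport %s is down' % row.logical_port)
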
Nov 24 09:56:02 compute-1 nova_compute[230010]: 2025-11-24 09:56:02.470 230014 DEBUG nova.virt.libvirt.driver [None req-1f39edd7-ccec-46ee-b7bf-91daf5c48308 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Start waiting for the detach event from libvirt for device tapfaa80fbe-f0 with device alias net1 for instance 8e009e75-a97b-4c5d-a470-5db1137cb407 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Nov 24 09:56:02 compute-1 nova_compute[230010]: 2025-11-24 09:56:02.471 230014 DEBUG nova.virt.libvirt.guest [None req-1f39edd7-ccec-46ee-b7bf-91daf5c48308 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:23:de:6c"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapfaa80fbe-f0"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Nov 24 09:56:02 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:56:02.471 142336 INFO neutron.agent.ovn.metadata.agent [-] Port faa80fbe-f017-47cd-96c8-ca0747a39410 in datapath 2dfea9d1-73f4-435f-ade1-dce53efe0c39 unbound from our chassis
Nov 24 09:56:02 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:56:02.473 142336 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 2dfea9d1-73f4-435f-ade1-dce53efe0c39, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 24 09:56:02 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:56:02.474 234803 DEBUG oslo.privsep.daemon [-] privsep: reply[60ed0e81-8c8b-4d5e-b285-7d2e428847f5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 09:56:02 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:56:02.475 142336 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-2dfea9d1-73f4-435f-ade1-dce53efe0c39 namespace which is not needed anymore
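
The namespace cleanup goes through oslo.privsep (the privsep reply line above); underneath, it is an ordinary network-namespace delete. A direct, hedged equivalent using pyroute2, which requires root and bypasses neutron's privsep wrapping:

    from pyroute2 import netns

    ns = 'ovnmeta-2dfea9d1-73f4-435f-ade1-dce53efe0c39'
    if ns in netns.listnetns():  # skip if it is already gone
        netns.remove(ns)
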
Nov 24 09:56:02 compute-1 nova_compute[230010]: 2025-11-24 09:56:02.475 230014 DEBUG nova.virt.libvirt.guest [None req-1f39edd7-ccec-46ee-b7bf-91daf5c48308 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:23:de:6c"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapfaa80fbe-f0"/></interface> not found in domain: <domain type='kvm' id='2'>
Nov 24 09:56:02 compute-1 nova_compute[230010]:   <name>instance-00000003</name>
Nov 24 09:56:02 compute-1 nova_compute[230010]:   <uuid>8e009e75-a97b-4c5d-a470-5db1137cb407</uuid>
Nov 24 09:56:02 compute-1 nova_compute[230010]:   <metadata>
Nov 24 09:56:02 compute-1 nova_compute[230010]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 24 09:56:02 compute-1 nova_compute[230010]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:   <nova:name>tempest-TestNetworkBasicOps-server-1469091475</nova:name>
Nov 24 09:56:02 compute-1 nova_compute[230010]:   <nova:creationTime>2025-11-24 09:56:00</nova:creationTime>
Nov 24 09:56:02 compute-1 nova_compute[230010]:   <nova:flavor name="m1.nano">
Nov 24 09:56:02 compute-1 nova_compute[230010]:     <nova:memory>128</nova:memory>
Nov 24 09:56:02 compute-1 nova_compute[230010]:     <nova:disk>1</nova:disk>
Nov 24 09:56:02 compute-1 nova_compute[230010]:     <nova:swap>0</nova:swap>
Nov 24 09:56:02 compute-1 nova_compute[230010]:     <nova:ephemeral>0</nova:ephemeral>
Nov 24 09:56:02 compute-1 nova_compute[230010]:     <nova:vcpus>1</nova:vcpus>
Nov 24 09:56:02 compute-1 nova_compute[230010]:   </nova:flavor>
Nov 24 09:56:02 compute-1 nova_compute[230010]:   <nova:owner>
Nov 24 09:56:02 compute-1 nova_compute[230010]:     <nova:user uuid="43f79ff3105e4372a3c095e8057d4f1f">tempest-TestNetworkBasicOps-1844071378-project-member</nova:user>
Nov 24 09:56:02 compute-1 nova_compute[230010]:     <nova:project uuid="94d069fc040647d5a6e54894eec915fe">tempest-TestNetworkBasicOps-1844071378</nova:project>
Nov 24 09:56:02 compute-1 nova_compute[230010]:   </nova:owner>
Nov 24 09:56:02 compute-1 nova_compute[230010]:   <nova:root type="image" uuid="6ef14bdf-4f04-4400-8040-4409d9d5271e"/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:   <nova:ports>
Nov 24 09:56:02 compute-1 nova_compute[230010]:     <nova:port uuid="e962e27f-80bf-4103-98ae-d8af84c6fc28">
Nov 24 09:56:02 compute-1 nova_compute[230010]:       <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:     </nova:port>
Nov 24 09:56:02 compute-1 nova_compute[230010]:     <nova:port uuid="faa80fbe-f017-47cd-96c8-ca0747a39410">
Nov 24 09:56:02 compute-1 nova_compute[230010]:       <nova:ip type="fixed" address="10.100.0.29" ipVersion="4"/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:     </nova:port>
Nov 24 09:56:02 compute-1 nova_compute[230010]:   </nova:ports>
Nov 24 09:56:02 compute-1 nova_compute[230010]: </nova:instance>
Nov 24 09:56:02 compute-1 nova_compute[230010]:   </metadata>
Nov 24 09:56:02 compute-1 nova_compute[230010]:   <memory unit='KiB'>131072</memory>
Nov 24 09:56:02 compute-1 nova_compute[230010]:   <currentMemory unit='KiB'>131072</currentMemory>
Nov 24 09:56:02 compute-1 nova_compute[230010]:   <vcpu placement='static'>1</vcpu>
Nov 24 09:56:02 compute-1 nova_compute[230010]:   <resource>
Nov 24 09:56:02 compute-1 nova_compute[230010]:     <partition>/machine</partition>
Nov 24 09:56:02 compute-1 nova_compute[230010]:   </resource>
Nov 24 09:56:02 compute-1 nova_compute[230010]:   <sysinfo type='smbios'>
Nov 24 09:56:02 compute-1 nova_compute[230010]:     <system>
Nov 24 09:56:02 compute-1 nova_compute[230010]:       <entry name='manufacturer'>RDO</entry>
Nov 24 09:56:02 compute-1 nova_compute[230010]:       <entry name='product'>OpenStack Compute</entry>
Nov 24 09:56:02 compute-1 nova_compute[230010]:       <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 24 09:56:02 compute-1 nova_compute[230010]:       <entry name='serial'>8e009e75-a97b-4c5d-a470-5db1137cb407</entry>
Nov 24 09:56:02 compute-1 nova_compute[230010]:       <entry name='uuid'>8e009e75-a97b-4c5d-a470-5db1137cb407</entry>
Nov 24 09:56:02 compute-1 nova_compute[230010]:       <entry name='family'>Virtual Machine</entry>
Nov 24 09:56:02 compute-1 nova_compute[230010]:     </system>
Nov 24 09:56:02 compute-1 nova_compute[230010]:   </sysinfo>
Nov 24 09:56:02 compute-1 nova_compute[230010]:   <os>
Nov 24 09:56:02 compute-1 nova_compute[230010]:     <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Nov 24 09:56:02 compute-1 nova_compute[230010]:     <boot dev='hd'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:     <smbios mode='sysinfo'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:   </os>
Nov 24 09:56:02 compute-1 nova_compute[230010]:   <features>
Nov 24 09:56:02 compute-1 nova_compute[230010]:     <acpi/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:     <apic/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:     <vmcoreinfo state='on'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:   </features>
Nov 24 09:56:02 compute-1 nova_compute[230010]:   <cpu mode='custom' match='exact' check='full'>
Nov 24 09:56:02 compute-1 nova_compute[230010]:     <model fallback='forbid'>EPYC-Rome</model>
Nov 24 09:56:02 compute-1 nova_compute[230010]:     <vendor>AMD</vendor>
Nov 24 09:56:02 compute-1 nova_compute[230010]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:     <feature policy='require' name='x2apic'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:     <feature policy='require' name='tsc-deadline'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:     <feature policy='require' name='hypervisor'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:     <feature policy='require' name='tsc_adjust'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:     <feature policy='require' name='spec-ctrl'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:     <feature policy='require' name='stibp'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:     <feature policy='require' name='ssbd'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:     <feature policy='require' name='cmp_legacy'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:     <feature policy='require' name='overflow-recov'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:     <feature policy='require' name='succor'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:     <feature policy='require' name='ibrs'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:     <feature policy='require' name='amd-ssbd'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:     <feature policy='require' name='virt-ssbd'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:     <feature policy='disable' name='lbrv'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:     <feature policy='disable' name='tsc-scale'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:     <feature policy='disable' name='vmcb-clean'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:     <feature policy='disable' name='flushbyasid'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:     <feature policy='disable' name='pause-filter'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:     <feature policy='disable' name='pfthreshold'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:     <feature policy='disable' name='svme-addr-chk'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:     <feature policy='require' name='lfence-always-serializing'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:     <feature policy='disable' name='xsaves'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:     <feature policy='disable' name='svm'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:     <feature policy='require' name='topoext'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:     <feature policy='disable' name='npt'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:     <feature policy='disable' name='nrip-save'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:   </cpu>
Nov 24 09:56:02 compute-1 nova_compute[230010]:   <clock offset='utc'>
Nov 24 09:56:02 compute-1 nova_compute[230010]:     <timer name='pit' tickpolicy='delay'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:     <timer name='rtc' tickpolicy='catchup'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:     <timer name='hpet' present='no'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:   </clock>
Nov 24 09:56:02 compute-1 nova_compute[230010]:   <on_poweroff>destroy</on_poweroff>
Nov 24 09:56:02 compute-1 nova_compute[230010]:   <on_reboot>restart</on_reboot>
Nov 24 09:56:02 compute-1 nova_compute[230010]:   <on_crash>destroy</on_crash>
Nov 24 09:56:02 compute-1 nova_compute[230010]:   <devices>
Nov 24 09:56:02 compute-1 nova_compute[230010]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 24 09:56:02 compute-1 nova_compute[230010]:     <disk type='network' device='disk'>
Nov 24 09:56:02 compute-1 nova_compute[230010]:       <driver name='qemu' type='raw' cache='none'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:       <auth username='openstack'>
Nov 24 09:56:02 compute-1 nova_compute[230010]:         <secret type='ceph' uuid='84a084c3-61a7-5de7-8207-1f88efa59a64'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:       </auth>
Nov 24 09:56:02 compute-1 nova_compute[230010]:       <source protocol='rbd' name='vms/8e009e75-a97b-4c5d-a470-5db1137cb407_disk' index='2'>
Nov 24 09:56:02 compute-1 nova_compute[230010]:         <host name='192.168.122.100' port='6789'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:         <host name='192.168.122.102' port='6789'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:         <host name='192.168.122.101' port='6789'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:       </source>
Nov 24 09:56:02 compute-1 nova_compute[230010]:       <target dev='vda' bus='virtio'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:       <alias name='virtio-disk0'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:     </disk>
Nov 24 09:56:02 compute-1 nova_compute[230010]:     <disk type='network' device='cdrom'>
Nov 24 09:56:02 compute-1 nova_compute[230010]:       <driver name='qemu' type='raw' cache='none'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:       <auth username='openstack'>
Nov 24 09:56:02 compute-1 nova_compute[230010]:         <secret type='ceph' uuid='84a084c3-61a7-5de7-8207-1f88efa59a64'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:       </auth>
Nov 24 09:56:02 compute-1 nova_compute[230010]:       <source protocol='rbd' name='vms/8e009e75-a97b-4c5d-a470-5db1137cb407_disk.config' index='1'>
Nov 24 09:56:02 compute-1 nova_compute[230010]:         <host name='192.168.122.100' port='6789'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:         <host name='192.168.122.102' port='6789'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:         <host name='192.168.122.101' port='6789'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:       </source>
Nov 24 09:56:02 compute-1 nova_compute[230010]:       <target dev='sda' bus='sata'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:       <readonly/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:       <alias name='sata0-0-0'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:     </disk>
Nov 24 09:56:02 compute-1 nova_compute[230010]:     <controller type='pci' index='0' model='pcie-root'>
Nov 24 09:56:02 compute-1 nova_compute[230010]:       <alias name='pcie.0'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:     </controller>
Nov 24 09:56:02 compute-1 nova_compute[230010]:     <controller type='pci' index='1' model='pcie-root-port'>
Nov 24 09:56:02 compute-1 nova_compute[230010]:       <model name='pcie-root-port'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:       <target chassis='1' port='0x10'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:       <alias name='pci.1'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:     </controller>
Nov 24 09:56:02 compute-1 nova_compute[230010]:     <controller type='pci' index='2' model='pcie-root-port'>
Nov 24 09:56:02 compute-1 nova_compute[230010]:       <model name='pcie-root-port'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:       <target chassis='2' port='0x11'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:       <alias name='pci.2'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:     </controller>
Nov 24 09:56:02 compute-1 nova_compute[230010]:     <controller type='pci' index='3' model='pcie-root-port'>
Nov 24 09:56:02 compute-1 nova_compute[230010]:       <model name='pcie-root-port'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:       <target chassis='3' port='0x12'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:       <alias name='pci.3'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:     </controller>
Nov 24 09:56:02 compute-1 nova_compute[230010]:     <controller type='pci' index='4' model='pcie-root-port'>
Nov 24 09:56:02 compute-1 nova_compute[230010]:       <model name='pcie-root-port'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:       <target chassis='4' port='0x13'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:       <alias name='pci.4'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:     </controller>
Nov 24 09:56:02 compute-1 nova_compute[230010]:     <controller type='pci' index='5' model='pcie-root-port'>
Nov 24 09:56:02 compute-1 nova_compute[230010]:       <model name='pcie-root-port'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:       <target chassis='5' port='0x14'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:       <alias name='pci.5'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:     </controller>
Nov 24 09:56:02 compute-1 nova_compute[230010]:     <controller type='pci' index='6' model='pcie-root-port'>
Nov 24 09:56:02 compute-1 nova_compute[230010]:       <model name='pcie-root-port'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:       <target chassis='6' port='0x15'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:       <alias name='pci.6'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:     </controller>
Nov 24 09:56:02 compute-1 nova_compute[230010]:     <controller type='pci' index='7' model='pcie-root-port'>
Nov 24 09:56:02 compute-1 nova_compute[230010]:       <model name='pcie-root-port'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:       <target chassis='7' port='0x16'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:       <alias name='pci.7'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:     </controller>
Nov 24 09:56:02 compute-1 nova_compute[230010]:     <controller type='pci' index='8' model='pcie-root-port'>
Nov 24 09:56:02 compute-1 nova_compute[230010]:       <model name='pcie-root-port'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:       <target chassis='8' port='0x17'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:       <alias name='pci.8'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:     </controller>
Nov 24 09:56:02 compute-1 nova_compute[230010]:     <controller type='pci' index='9' model='pcie-root-port'>
Nov 24 09:56:02 compute-1 nova_compute[230010]:       <model name='pcie-root-port'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:       <target chassis='9' port='0x18'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:       <alias name='pci.9'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:     </controller>
Nov 24 09:56:02 compute-1 nova_compute[230010]:     <controller type='pci' index='10' model='pcie-root-port'>
Nov 24 09:56:02 compute-1 nova_compute[230010]:       <model name='pcie-root-port'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:       <target chassis='10' port='0x19'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:       <alias name='pci.10'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:     </controller>
Nov 24 09:56:02 compute-1 nova_compute[230010]:     <controller type='pci' index='11' model='pcie-root-port'>
Nov 24 09:56:02 compute-1 nova_compute[230010]:       <model name='pcie-root-port'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:       <target chassis='11' port='0x1a'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:       <alias name='pci.11'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:     </controller>
Nov 24 09:56:02 compute-1 nova_compute[230010]:     <controller type='pci' index='12' model='pcie-root-port'>
Nov 24 09:56:02 compute-1 nova_compute[230010]:       <model name='pcie-root-port'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:       <target chassis='12' port='0x1b'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:       <alias name='pci.12'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:     </controller>
Nov 24 09:56:02 compute-1 nova_compute[230010]:     <controller type='pci' index='13' model='pcie-root-port'>
Nov 24 09:56:02 compute-1 nova_compute[230010]:       <model name='pcie-root-port'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:       <target chassis='13' port='0x1c'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:       <alias name='pci.13'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:     </controller>
Nov 24 09:56:02 compute-1 nova_compute[230010]:     <controller type='pci' index='14' model='pcie-root-port'>
Nov 24 09:56:02 compute-1 nova_compute[230010]:       <model name='pcie-root-port'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:       <target chassis='14' port='0x1d'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:       <alias name='pci.14'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:     </controller>
Nov 24 09:56:02 compute-1 nova_compute[230010]:     <controller type='pci' index='15' model='pcie-root-port'>
Nov 24 09:56:02 compute-1 nova_compute[230010]:       <model name='pcie-root-port'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:       <target chassis='15' port='0x1e'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:       <alias name='pci.15'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:     </controller>
Nov 24 09:56:02 compute-1 nova_compute[230010]:     <controller type='pci' index='16' model='pcie-root-port'>
Nov 24 09:56:02 compute-1 nova_compute[230010]:       <model name='pcie-root-port'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:       <target chassis='16' port='0x1f'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:       <alias name='pci.16'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:     </controller>
Nov 24 09:56:02 compute-1 nova_compute[230010]:     <controller type='pci' index='17' model='pcie-root-port'>
Nov 24 09:56:02 compute-1 nova_compute[230010]:       <model name='pcie-root-port'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:       <target chassis='17' port='0x20'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:       <alias name='pci.17'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:     </controller>
Nov 24 09:56:02 compute-1 nova_compute[230010]:     <controller type='pci' index='18' model='pcie-root-port'>
Nov 24 09:56:02 compute-1 nova_compute[230010]:       <model name='pcie-root-port'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:       <target chassis='18' port='0x21'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:       <alias name='pci.18'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:     </controller>
Nov 24 09:56:02 compute-1 nova_compute[230010]:     <controller type='pci' index='19' model='pcie-root-port'>
Nov 24 09:56:02 compute-1 nova_compute[230010]:       <model name='pcie-root-port'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:       <target chassis='19' port='0x22'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:       <alias name='pci.19'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:     </controller>
Nov 24 09:56:02 compute-1 nova_compute[230010]:     <controller type='pci' index='20' model='pcie-root-port'>
Nov 24 09:56:02 compute-1 nova_compute[230010]:       <model name='pcie-root-port'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:       <target chassis='20' port='0x23'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:       <alias name='pci.20'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:     </controller>
Nov 24 09:56:02 compute-1 nova_compute[230010]:     <controller type='pci' index='21' model='pcie-root-port'>
Nov 24 09:56:02 compute-1 nova_compute[230010]:       <model name='pcie-root-port'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:       <target chassis='21' port='0x24'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:       <alias name='pci.21'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:     </controller>
Nov 24 09:56:02 compute-1 nova_compute[230010]:     <controller type='pci' index='22' model='pcie-root-port'>
Nov 24 09:56:02 compute-1 nova_compute[230010]:       <model name='pcie-root-port'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:       <target chassis='22' port='0x25'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:       <alias name='pci.22'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:     </controller>
Nov 24 09:56:02 compute-1 nova_compute[230010]:     <controller type='pci' index='23' model='pcie-root-port'>
Nov 24 09:56:02 compute-1 nova_compute[230010]:       <model name='pcie-root-port'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:       <target chassis='23' port='0x26'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:       <alias name='pci.23'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:     </controller>
Nov 24 09:56:02 compute-1 nova_compute[230010]:     <controller type='pci' index='24' model='pcie-root-port'>
Nov 24 09:56:02 compute-1 nova_compute[230010]:       <model name='pcie-root-port'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:       <target chassis='24' port='0x27'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:       <alias name='pci.24'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:     </controller>
Nov 24 09:56:02 compute-1 nova_compute[230010]:     <controller type='pci' index='25' model='pcie-root-port'>
Nov 24 09:56:02 compute-1 nova_compute[230010]:       <model name='pcie-root-port'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:       <target chassis='25' port='0x28'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:       <alias name='pci.25'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:     </controller>
Nov 24 09:56:02 compute-1 nova_compute[230010]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Nov 24 09:56:02 compute-1 nova_compute[230010]:       <model name='pcie-pci-bridge'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:       <alias name='pci.26'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:     </controller>
Nov 24 09:56:02 compute-1 nova_compute[230010]:     <controller type='usb' index='0' model='piix3-uhci'>
Nov 24 09:56:02 compute-1 nova_compute[230010]:       <alias name='usb'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:     </controller>
Nov 24 09:56:02 compute-1 nova_compute[230010]:     <controller type='sata' index='0'>
Nov 24 09:56:02 compute-1 nova_compute[230010]:       <alias name='ide'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:     </controller>
Nov 24 09:56:02 compute-1 nova_compute[230010]:     <interface type='ethernet'>
Nov 24 09:56:02 compute-1 nova_compute[230010]:       <mac address='fa:16:3e:a4:f1:71'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:       <target dev='tape962e27f-80'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:       <model type='virtio'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:       <driver name='vhost' rx_queue_size='512'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:       <mtu size='1442'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:       <alias name='net0'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:     </interface>
Nov 24 09:56:02 compute-1 nova_compute[230010]:     <serial type='pty'>
Nov 24 09:56:02 compute-1 nova_compute[230010]:       <source path='/dev/pts/0'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:       <log file='/var/lib/nova/instances/8e009e75-a97b-4c5d-a470-5db1137cb407/console.log' append='off'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:       <target type='isa-serial' port='0'>
Nov 24 09:56:02 compute-1 nova_compute[230010]:         <model name='isa-serial'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:       </target>
Nov 24 09:56:02 compute-1 nova_compute[230010]:       <alias name='serial0'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:     </serial>
Nov 24 09:56:02 compute-1 nova_compute[230010]:     <console type='pty' tty='/dev/pts/0'>
Nov 24 09:56:02 compute-1 nova_compute[230010]:       <source path='/dev/pts/0'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:       <log file='/var/lib/nova/instances/8e009e75-a97b-4c5d-a470-5db1137cb407/console.log' append='off'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:       <target type='serial' port='0'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:       <alias name='serial0'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:     </console>
Nov 24 09:56:02 compute-1 nova_compute[230010]:     <input type='tablet' bus='usb'>
Nov 24 09:56:02 compute-1 nova_compute[230010]:       <alias name='input0'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:       <address type='usb' bus='0' port='1'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:     </input>
Nov 24 09:56:02 compute-1 nova_compute[230010]:     <input type='mouse' bus='ps2'>
Nov 24 09:56:02 compute-1 nova_compute[230010]:       <alias name='input1'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:     </input>
Nov 24 09:56:02 compute-1 nova_compute[230010]:     <input type='keyboard' bus='ps2'>
Nov 24 09:56:02 compute-1 nova_compute[230010]:       <alias name='input2'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:     </input>
Nov 24 09:56:02 compute-1 nova_compute[230010]:     <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Nov 24 09:56:02 compute-1 nova_compute[230010]:       <listen type='address' address='::0'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:     </graphics>
Nov 24 09:56:02 compute-1 nova_compute[230010]:     <audio id='1' type='none'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:     <video>
Nov 24 09:56:02 compute-1 nova_compute[230010]:       <model type='virtio' heads='1' primary='yes'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:       <alias name='video0'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:     </video>
Nov 24 09:56:02 compute-1 nova_compute[230010]:     <watchdog model='itco' action='reset'>
Nov 24 09:56:02 compute-1 nova_compute[230010]:       <alias name='watchdog0'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:     </watchdog>
Nov 24 09:56:02 compute-1 nova_compute[230010]:     <memballoon model='virtio'>
Nov 24 09:56:02 compute-1 nova_compute[230010]:       <stats period='10'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:       <alias name='balloon0'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:     </memballoon>
Nov 24 09:56:02 compute-1 nova_compute[230010]:     <rng model='virtio'>
Nov 24 09:56:02 compute-1 nova_compute[230010]:       <backend model='random'>/dev/urandom</backend>
Nov 24 09:56:02 compute-1 nova_compute[230010]:       <alias name='rng0'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:     </rng>
Nov 24 09:56:02 compute-1 nova_compute[230010]:   </devices>
Nov 24 09:56:02 compute-1 nova_compute[230010]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Nov 24 09:56:02 compute-1 nova_compute[230010]:     <label>system_u:system_r:svirt_t:s0:c684,c1005</label>
Nov 24 09:56:02 compute-1 nova_compute[230010]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c684,c1005</imagelabel>
Nov 24 09:56:02 compute-1 nova_compute[230010]:   </seclabel>
Nov 24 09:56:02 compute-1 nova_compute[230010]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Nov 24 09:56:02 compute-1 nova_compute[230010]:     <label>+107:+107</label>
Nov 24 09:56:02 compute-1 nova_compute[230010]:     <imagelabel>+107:+107</imagelabel>
Nov 24 09:56:02 compute-1 nova_compute[230010]:   </seclabel>
Nov 24 09:56:02 compute-1 nova_compute[230010]: </domain>
Nov 24 09:56:02 compute-1 nova_compute[230010]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Nov 24 09:56:02 compute-1 nova_compute[230010]: 2025-11-24 09:56:02.475 230014 INFO nova.virt.libvirt.driver [None req-1f39edd7-ccec-46ee-b7bf-91daf5c48308 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Successfully detached device tapfaa80fbe-f0 from instance 8e009e75-a97b-4c5d-a470-5db1137cb407 from the live domain config.
Nov 24 09:56:02 compute-1 nova_compute[230010]: 2025-11-24 09:56:02.475 230014 DEBUG nova.virt.libvirt.vif [None req-1f39edd7-ccec-46ee-b7bf-91daf5c48308 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-24T09:55:20Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1469091475',display_name='tempest-TestNetworkBasicOps-server-1469091475',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1469091475',id=3,image_ref='6ef14bdf-4f04-4400-8040-4409d9d5271e',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJZZQMjyTSwgkfidZfPhBPgBcV62YzWExHHXsl1BnsLfJjAX1c531QA8puLkgpD93eEa7lPae/Gh1kFnVkWZAW6FTPgZg7BzeD7RovkQcC7HReAVJUg962qa1kvY0rkgvg==',key_name='tempest-TestNetworkBasicOps-853741544',keypairs=<?>,launch_index=0,launched_at=2025-11-24T09:55:27Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='94d069fc040647d5a6e54894eec915fe',ramdisk_id='',reservation_id='r-mfq37y5p',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6ef14bdf-4f04-4400-8040-4409d9d5271e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1844071378',owner_user_name='tempest-TestNetworkBasicOps-1844071378-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-24T09:55:27Z,user_data=None,user_id='43f79ff3105e4372a3c095e8057d4f1f',uuid=8e009e75-a97b-4c5d-a470-5db1137cb407,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "faa80fbe-f017-47cd-96c8-ca0747a39410", "address": "fa:16:3e:23:de:6c", "network": {"id": "2dfea9d1-73f4-435f-ade1-dce53efe0c39", "bridge": "br-int", "label": "tempest-network-smoke--1540330946", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.29", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94d069fc040647d5a6e54894eec915fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfaa80fbe-f0", "ovs_interfaceid": "faa80fbe-f017-47cd-96c8-ca0747a39410", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 24 09:56:02 compute-1 nova_compute[230010]: 2025-11-24 09:56:02.476 230014 DEBUG nova.network.os_vif_util [None req-1f39edd7-ccec-46ee-b7bf-91daf5c48308 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Converting VIF {"id": "faa80fbe-f017-47cd-96c8-ca0747a39410", "address": "fa:16:3e:23:de:6c", "network": {"id": "2dfea9d1-73f4-435f-ade1-dce53efe0c39", "bridge": "br-int", "label": "tempest-network-smoke--1540330946", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.29", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94d069fc040647d5a6e54894eec915fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfaa80fbe-f0", "ovs_interfaceid": "faa80fbe-f017-47cd-96c8-ca0747a39410", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 24 09:56:02 compute-1 nova_compute[230010]: 2025-11-24 09:56:02.476 230014 DEBUG nova.network.os_vif_util [None req-1f39edd7-ccec-46ee-b7bf-91daf5c48308 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:23:de:6c,bridge_name='br-int',has_traffic_filtering=True,id=faa80fbe-f017-47cd-96c8-ca0747a39410,network=Network(2dfea9d1-73f4-435f-ade1-dce53efe0c39),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfaa80fbe-f0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 24 09:56:02 compute-1 nova_compute[230010]: 2025-11-24 09:56:02.476 230014 DEBUG os_vif [None req-1f39edd7-ccec-46ee-b7bf-91daf5c48308 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:23:de:6c,bridge_name='br-int',has_traffic_filtering=True,id=faa80fbe-f017-47cd-96c8-ca0747a39410,network=Network(2dfea9d1-73f4-435f-ade1-dce53efe0c39),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfaa80fbe-f0') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 24 09:56:02 compute-1 nova_compute[230010]: 2025-11-24 09:56:02.478 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:56:02 compute-1 nova_compute[230010]: 2025-11-24 09:56:02.478 230014 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfaa80fbe-f0, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 24 09:56:02 compute-1 nova_compute[230010]: 2025-11-24 09:56:02.480 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:56:02 compute-1 nova_compute[230010]: 2025-11-24 09:56:02.481 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:56:02 compute-1 nova_compute[230010]: 2025-11-24 09:56:02.483 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:56:02 compute-1 nova_compute[230010]: 2025-11-24 09:56:02.486 230014 INFO os_vif [None req-1f39edd7-ccec-46ee-b7bf-91daf5c48308 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:23:de:6c,bridge_name='br-int',has_traffic_filtering=True,id=faa80fbe-f017-47cd-96c8-ca0747a39410,network=Network(2dfea9d1-73f4-435f-ade1-dce53efe0c39),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfaa80fbe-f0')
Nov 24 09:56:02 compute-1 nova_compute[230010]: 2025-11-24 09:56:02.487 230014 DEBUG nova.virt.libvirt.guest [None req-1f39edd7-ccec-46ee-b7bf-91daf5c48308 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 24 09:56:02 compute-1 nova_compute[230010]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:   <nova:name>tempest-TestNetworkBasicOps-server-1469091475</nova:name>
Nov 24 09:56:02 compute-1 nova_compute[230010]:   <nova:creationTime>2025-11-24 09:56:02</nova:creationTime>
Nov 24 09:56:02 compute-1 nova_compute[230010]:   <nova:flavor name="m1.nano">
Nov 24 09:56:02 compute-1 nova_compute[230010]:     <nova:memory>128</nova:memory>
Nov 24 09:56:02 compute-1 nova_compute[230010]:     <nova:disk>1</nova:disk>
Nov 24 09:56:02 compute-1 nova_compute[230010]:     <nova:swap>0</nova:swap>
Nov 24 09:56:02 compute-1 nova_compute[230010]:     <nova:ephemeral>0</nova:ephemeral>
Nov 24 09:56:02 compute-1 nova_compute[230010]:     <nova:vcpus>1</nova:vcpus>
Nov 24 09:56:02 compute-1 nova_compute[230010]:   </nova:flavor>
Nov 24 09:56:02 compute-1 nova_compute[230010]:   <nova:owner>
Nov 24 09:56:02 compute-1 nova_compute[230010]:     <nova:user uuid="43f79ff3105e4372a3c095e8057d4f1f">tempest-TestNetworkBasicOps-1844071378-project-member</nova:user>
Nov 24 09:56:02 compute-1 nova_compute[230010]:     <nova:project uuid="94d069fc040647d5a6e54894eec915fe">tempest-TestNetworkBasicOps-1844071378</nova:project>
Nov 24 09:56:02 compute-1 nova_compute[230010]:   </nova:owner>
Nov 24 09:56:02 compute-1 nova_compute[230010]:   <nova:root type="image" uuid="6ef14bdf-4f04-4400-8040-4409d9d5271e"/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:   <nova:ports>
Nov 24 09:56:02 compute-1 nova_compute[230010]:     <nova:port uuid="e962e27f-80bf-4103-98ae-d8af84c6fc28">
Nov 24 09:56:02 compute-1 nova_compute[230010]:       <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Nov 24 09:56:02 compute-1 nova_compute[230010]:     </nova:port>
Nov 24 09:56:02 compute-1 nova_compute[230010]:   </nova:ports>
Nov 24 09:56:02 compute-1 nova_compute[230010]: </nova:instance>
Nov 24 09:56:02 compute-1 nova_compute[230010]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
Nov 24 09:56:02 compute-1 neutron-haproxy-ovnmeta-2dfea9d1-73f4-435f-ade1-dce53efe0c39[236121]: [NOTICE]   (236138) : haproxy version is 2.8.14-c23fe91
Nov 24 09:56:02 compute-1 neutron-haproxy-ovnmeta-2dfea9d1-73f4-435f-ade1-dce53efe0c39[236121]: [NOTICE]   (236138) : path to executable is /usr/sbin/haproxy
Nov 24 09:56:02 compute-1 neutron-haproxy-ovnmeta-2dfea9d1-73f4-435f-ade1-dce53efe0c39[236121]: [WARNING]  (236138) : Exiting Master process...
Nov 24 09:56:02 compute-1 neutron-haproxy-ovnmeta-2dfea9d1-73f4-435f-ade1-dce53efe0c39[236121]: [ALERT]    (236138) : Current worker (236144) exited with code 143 (Terminated)
Nov 24 09:56:02 compute-1 neutron-haproxy-ovnmeta-2dfea9d1-73f4-435f-ade1-dce53efe0c39[236121]: [WARNING]  (236138) : All workers exited. Exiting... (0)
Nov 24 09:56:02 compute-1 systemd[1]: libpod-2a59cf21a7f5a89b1e1aad9ebbfb34d0daec952f197a040b20728d4d777f604b.scope: Deactivated successfully.
Nov 24 09:56:02 compute-1 podman[236176]: 2025-11-24 09:56:02.597211634 +0000 UTC m=+0.040366308 container died 2a59cf21a7f5a89b1e1aad9ebbfb34d0daec952f197a040b20728d4d777f604b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2dfea9d1-73f4-435f-ade1-dce53efe0c39, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118)
Nov 24 09:56:02 compute-1 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-2a59cf21a7f5a89b1e1aad9ebbfb34d0daec952f197a040b20728d4d777f604b-userdata-shm.mount: Deactivated successfully.
Nov 24 09:56:02 compute-1 systemd[1]: var-lib-containers-storage-overlay-159129d6836993ad22c9e4f7333101e324e380206278a56bdc9a5ccf6ea421e5-merged.mount: Deactivated successfully.
Nov 24 09:56:02 compute-1 podman[236176]: 2025-11-24 09:56:02.641119948 +0000 UTC m=+0.084274622 container cleanup 2a59cf21a7f5a89b1e1aad9ebbfb34d0daec952f197a040b20728d4d777f604b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2dfea9d1-73f4-435f-ade1-dce53efe0c39, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, io.buildah.version=1.41.3)
Nov 24 09:56:02 compute-1 systemd[1]: libpod-conmon-2a59cf21a7f5a89b1e1aad9ebbfb34d0daec952f197a040b20728d4d777f604b.scope: Deactivated successfully.
Nov 24 09:56:02 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:56:02 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:56:02 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:56:02.655 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:56:02 compute-1 podman[236207]: 2025-11-24 09:56:02.701040163 +0000 UTC m=+0.041126437 container remove 2a59cf21a7f5a89b1e1aad9ebbfb34d0daec952f197a040b20728d4d777f604b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2dfea9d1-73f4-435f-ade1-dce53efe0c39, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3)
Nov 24 09:56:02 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:56:02.706 234803 DEBUG oslo.privsep.daemon [-] privsep: reply[7abea4c7-bb38-443f-bd86-1b5857e2d5cd]: (4, ('Mon Nov 24 09:56:02 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-2dfea9d1-73f4-435f-ade1-dce53efe0c39 (2a59cf21a7f5a89b1e1aad9ebbfb34d0daec952f197a040b20728d4d777f604b)\n2a59cf21a7f5a89b1e1aad9ebbfb34d0daec952f197a040b20728d4d777f604b\nMon Nov 24 09:56:02 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-2dfea9d1-73f4-435f-ade1-dce53efe0c39 (2a59cf21a7f5a89b1e1aad9ebbfb34d0daec952f197a040b20728d4d777f604b)\n2a59cf21a7f5a89b1e1aad9ebbfb34d0daec952f197a040b20728d4d777f604b\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 09:56:02 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:56:02.708 234803 DEBUG oslo.privsep.daemon [-] privsep: reply[c361bbd4-1601-468f-976b-4fbdc4abedcb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 09:56:02 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:56:02.709 142336 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2dfea9d1-70, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 24 09:56:02 compute-1 nova_compute[230010]: 2025-11-24 09:56:02.711 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:56:02 compute-1 kernel: tap2dfea9d1-70: left promiscuous mode
Nov 24 09:56:02 compute-1 nova_compute[230010]: 2025-11-24 09:56:02.725 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:56:02 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:56:02.728 234803 DEBUG oslo.privsep.daemon [-] privsep: reply[de644d23-cca5-4586-ab22-46df93e94259]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 09:56:02 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:56:02.740 234803 DEBUG oslo.privsep.daemon [-] privsep: reply[825db17b-c595-4852-bdf0-275b7316905e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 09:56:02 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:56:02.741 234803 DEBUG oslo.privsep.daemon [-] privsep: reply[b0ff2d11-bb76-44ea-8973-8d0455119a9f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 09:56:02 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:56:02.754 234803 DEBUG oslo.privsep.daemon [-] privsep: reply[a2f8eec8-83c8-4a3e-8f83-7bd6895c2792]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 410432, 'reachable_time': 39730, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 236243, 'error': None, 'target': 'ovnmeta-2dfea9d1-73f4-435f-ade1-dce53efe0c39', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 09:56:02 compute-1 systemd[1]: run-netns-ovnmeta\x2d2dfea9d1\x2d73f4\x2d435f\x2dade1\x2ddce53efe0c39.mount: Deactivated successfully.
Nov 24 09:56:02 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:56:02.758 142476 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-2dfea9d1-73f4-435f-ade1-dce53efe0c39 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 24 09:56:02 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:56:02.758 142476 DEBUG oslo.privsep.daemon [-] privsep: reply[d54bc886-5e7b-4d81-9c4c-d2b93d271faf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 09:56:02 compute-1 sudo[236220]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 09:56:02 compute-1 sudo[236220]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:56:02 compute-1 sudo[236220]: pam_unix(sudo:session): session closed for user root
Nov 24 09:56:02 compute-1 sudo[236248]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Nov 24 09:56:02 compute-1 sudo[236248]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:56:03 compute-1 ceph-mon[80009]: pgmap v874: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 3.0 KiB/s wr, 1 op/s
Nov 24 09:56:03 compute-1 sudo[236248]: pam_unix(sudo:session): session closed for user root
Nov 24 09:56:03 compute-1 nova_compute[230010]: 2025-11-24 09:56:03.426 230014 DEBUG nova.compute.manager [req-88adbbf0-ae7a-4727-b6f3-ffe3bf466b8d req-05d96d72-5082-4dd7-a497-1fbd525a43b5 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 8e009e75-a97b-4c5d-a470-5db1137cb407] Received event network-vif-plugged-faa80fbe-f017-47cd-96c8-ca0747a39410 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 24 09:56:03 compute-1 nova_compute[230010]: 2025-11-24 09:56:03.427 230014 DEBUG oslo_concurrency.lockutils [req-88adbbf0-ae7a-4727-b6f3-ffe3bf466b8d req-05d96d72-5082-4dd7-a497-1fbd525a43b5 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Acquiring lock "8e009e75-a97b-4c5d-a470-5db1137cb407-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 09:56:03 compute-1 nova_compute[230010]: 2025-11-24 09:56:03.427 230014 DEBUG oslo_concurrency.lockutils [req-88adbbf0-ae7a-4727-b6f3-ffe3bf466b8d req-05d96d72-5082-4dd7-a497-1fbd525a43b5 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Lock "8e009e75-a97b-4c5d-a470-5db1137cb407-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 09:56:03 compute-1 nova_compute[230010]: 2025-11-24 09:56:03.428 230014 DEBUG oslo_concurrency.lockutils [req-88adbbf0-ae7a-4727-b6f3-ffe3bf466b8d req-05d96d72-5082-4dd7-a497-1fbd525a43b5 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Lock "8e009e75-a97b-4c5d-a470-5db1137cb407-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 09:56:03 compute-1 nova_compute[230010]: 2025-11-24 09:56:03.428 230014 DEBUG nova.compute.manager [req-88adbbf0-ae7a-4727-b6f3-ffe3bf466b8d req-05d96d72-5082-4dd7-a497-1fbd525a43b5 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 8e009e75-a97b-4c5d-a470-5db1137cb407] No waiting events found dispatching network-vif-plugged-faa80fbe-f017-47cd-96c8-ca0747a39410 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 24 09:56:03 compute-1 nova_compute[230010]: 2025-11-24 09:56:03.428 230014 WARNING nova.compute.manager [req-88adbbf0-ae7a-4727-b6f3-ffe3bf466b8d req-05d96d72-5082-4dd7-a497-1fbd525a43b5 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 8e009e75-a97b-4c5d-a470-5db1137cb407] Received unexpected event network-vif-plugged-faa80fbe-f017-47cd-96c8-ca0747a39410 for instance with vm_state active and task_state None.
Nov 24 09:56:03 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 24 09:56:03 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 09:56:03 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Nov 24 09:56:03 compute-1 ceph-mon[80009]: log_channel(audit) log [INF] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 09:56:03 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Nov 24 09:56:03 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Nov 24 09:56:03 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Nov 24 09:56:03 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 09:56:03 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Nov 24 09:56:03 compute-1 ceph-mon[80009]: log_channel(audit) log [INF] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 09:56:03 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 24 09:56:03 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 09:56:04 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:56:04 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:56:04 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:56:04.090 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:56:04 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 09:56:04 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 09:56:04 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:56:04 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:56:04 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 09:56:04 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 09:56:04 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 09:56:04 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:56:04 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:56:04 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:56:04 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:56:04.659 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:56:05 compute-1 ceph-mon[80009]: pgmap v875: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 3.2 KiB/s wr, 1 op/s
Nov 24 09:56:05 compute-1 nova_compute[230010]: 2025-11-24 09:56:05.316 230014 DEBUG oslo_concurrency.lockutils [None req-1f39edd7-ccec-46ee-b7bf-91daf5c48308 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Acquiring lock "refresh_cache-8e009e75-a97b-4c5d-a470-5db1137cb407" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 24 09:56:05 compute-1 nova_compute[230010]: 2025-11-24 09:56:05.316 230014 DEBUG oslo_concurrency.lockutils [None req-1f39edd7-ccec-46ee-b7bf-91daf5c48308 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Acquired lock "refresh_cache-8e009e75-a97b-4c5d-a470-5db1137cb407" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 24 09:56:05 compute-1 nova_compute[230010]: 2025-11-24 09:56:05.316 230014 DEBUG nova.network.neutron [None req-1f39edd7-ccec-46ee-b7bf-91daf5c48308 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 8e009e75-a97b-4c5d-a470-5db1137cb407] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 24 09:56:05 compute-1 nova_compute[230010]: 2025-11-24 09:56:05.522 230014 DEBUG nova.compute.manager [req-56fc6d37-cf30-4c14-a29e-b1cc85615ec9 req-cf01728a-61d3-4a6c-8614-d109bc7d0fb0 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 8e009e75-a97b-4c5d-a470-5db1137cb407] Received event network-vif-unplugged-faa80fbe-f017-47cd-96c8-ca0747a39410 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 24 09:56:05 compute-1 nova_compute[230010]: 2025-11-24 09:56:05.523 230014 DEBUG oslo_concurrency.lockutils [req-56fc6d37-cf30-4c14-a29e-b1cc85615ec9 req-cf01728a-61d3-4a6c-8614-d109bc7d0fb0 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Acquiring lock "8e009e75-a97b-4c5d-a470-5db1137cb407-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 09:56:05 compute-1 nova_compute[230010]: 2025-11-24 09:56:05.523 230014 DEBUG oslo_concurrency.lockutils [req-56fc6d37-cf30-4c14-a29e-b1cc85615ec9 req-cf01728a-61d3-4a6c-8614-d109bc7d0fb0 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Lock "8e009e75-a97b-4c5d-a470-5db1137cb407-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 09:56:05 compute-1 nova_compute[230010]: 2025-11-24 09:56:05.523 230014 DEBUG oslo_concurrency.lockutils [req-56fc6d37-cf30-4c14-a29e-b1cc85615ec9 req-cf01728a-61d3-4a6c-8614-d109bc7d0fb0 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Lock "8e009e75-a97b-4c5d-a470-5db1137cb407-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 09:56:05 compute-1 nova_compute[230010]: 2025-11-24 09:56:05.524 230014 DEBUG nova.compute.manager [req-56fc6d37-cf30-4c14-a29e-b1cc85615ec9 req-cf01728a-61d3-4a6c-8614-d109bc7d0fb0 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 8e009e75-a97b-4c5d-a470-5db1137cb407] No waiting events found dispatching network-vif-unplugged-faa80fbe-f017-47cd-96c8-ca0747a39410 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 24 09:56:05 compute-1 nova_compute[230010]: 2025-11-24 09:56:05.524 230014 WARNING nova.compute.manager [req-56fc6d37-cf30-4c14-a29e-b1cc85615ec9 req-cf01728a-61d3-4a6c-8614-d109bc7d0fb0 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 8e009e75-a97b-4c5d-a470-5db1137cb407] Received unexpected event network-vif-unplugged-faa80fbe-f017-47cd-96c8-ca0747a39410 for instance with vm_state active and task_state None.
Nov 24 09:56:05 compute-1 nova_compute[230010]: 2025-11-24 09:56:05.524 230014 DEBUG nova.compute.manager [req-56fc6d37-cf30-4c14-a29e-b1cc85615ec9 req-cf01728a-61d3-4a6c-8614-d109bc7d0fb0 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 8e009e75-a97b-4c5d-a470-5db1137cb407] Received event network-vif-plugged-faa80fbe-f017-47cd-96c8-ca0747a39410 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 24 09:56:05 compute-1 nova_compute[230010]: 2025-11-24 09:56:05.524 230014 DEBUG oslo_concurrency.lockutils [req-56fc6d37-cf30-4c14-a29e-b1cc85615ec9 req-cf01728a-61d3-4a6c-8614-d109bc7d0fb0 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Acquiring lock "8e009e75-a97b-4c5d-a470-5db1137cb407-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 09:56:05 compute-1 nova_compute[230010]: 2025-11-24 09:56:05.525 230014 DEBUG oslo_concurrency.lockutils [req-56fc6d37-cf30-4c14-a29e-b1cc85615ec9 req-cf01728a-61d3-4a6c-8614-d109bc7d0fb0 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Lock "8e009e75-a97b-4c5d-a470-5db1137cb407-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 09:56:05 compute-1 nova_compute[230010]: 2025-11-24 09:56:05.525 230014 DEBUG oslo_concurrency.lockutils [req-56fc6d37-cf30-4c14-a29e-b1cc85615ec9 req-cf01728a-61d3-4a6c-8614-d109bc7d0fb0 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Lock "8e009e75-a97b-4c5d-a470-5db1137cb407-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 09:56:05 compute-1 nova_compute[230010]: 2025-11-24 09:56:05.525 230014 DEBUG nova.compute.manager [req-56fc6d37-cf30-4c14-a29e-b1cc85615ec9 req-cf01728a-61d3-4a6c-8614-d109bc7d0fb0 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 8e009e75-a97b-4c5d-a470-5db1137cb407] No waiting events found dispatching network-vif-plugged-faa80fbe-f017-47cd-96c8-ca0747a39410 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 24 09:56:05 compute-1 nova_compute[230010]: 2025-11-24 09:56:05.525 230014 WARNING nova.compute.manager [req-56fc6d37-cf30-4c14-a29e-b1cc85615ec9 req-cf01728a-61d3-4a6c-8614-d109bc7d0fb0 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 8e009e75-a97b-4c5d-a470-5db1137cb407] Received unexpected event network-vif-plugged-faa80fbe-f017-47cd-96c8-ca0747a39410 for instance with vm_state active and task_state None.
Nov 24 09:56:05 compute-1 nova_compute[230010]: 2025-11-24 09:56:05.526 230014 DEBUG nova.compute.manager [req-56fc6d37-cf30-4c14-a29e-b1cc85615ec9 req-cf01728a-61d3-4a6c-8614-d109bc7d0fb0 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 8e009e75-a97b-4c5d-a470-5db1137cb407] Received event network-vif-deleted-faa80fbe-f017-47cd-96c8-ca0747a39410 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 24 09:56:05 compute-1 nova_compute[230010]: 2025-11-24 09:56:05.526 230014 INFO nova.compute.manager [req-56fc6d37-cf30-4c14-a29e-b1cc85615ec9 req-cf01728a-61d3-4a6c-8614-d109bc7d0fb0 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 8e009e75-a97b-4c5d-a470-5db1137cb407] Neutron deleted interface faa80fbe-f017-47cd-96c8-ca0747a39410; detaching it from the instance and deleting it from the info cache
Nov 24 09:56:05 compute-1 nova_compute[230010]: 2025-11-24 09:56:05.526 230014 DEBUG nova.network.neutron [req-56fc6d37-cf30-4c14-a29e-b1cc85615ec9 req-cf01728a-61d3-4a6c-8614-d109bc7d0fb0 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 8e009e75-a97b-4c5d-a470-5db1137cb407] Updating instance_info_cache with network_info: [{"id": "e962e27f-80bf-4103-98ae-d8af84c6fc28", "address": "fa:16:3e:a4:f1:71", "network": {"id": "636fec29-e18e-45f1-aabc-369f5fd0d593", "bridge": "br-int", "label": "tempest-network-smoke--778674541", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.182", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94d069fc040647d5a6e54894eec915fe", "mtu": null, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape962e27f-80", "ovs_interfaceid": "e962e27f-80bf-4103-98ae-d8af84c6fc28", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 24 09:56:05 compute-1 nova_compute[230010]: 2025-11-24 09:56:05.544 230014 DEBUG nova.objects.instance [req-56fc6d37-cf30-4c14-a29e-b1cc85615ec9 req-cf01728a-61d3-4a6c-8614-d109bc7d0fb0 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Lazy-loading 'system_metadata' on Instance uuid 8e009e75-a97b-4c5d-a470-5db1137cb407 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 24 09:56:05 compute-1 nova_compute[230010]: 2025-11-24 09:56:05.586 230014 DEBUG nova.objects.instance [req-56fc6d37-cf30-4c14-a29e-b1cc85615ec9 req-cf01728a-61d3-4a6c-8614-d109bc7d0fb0 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Lazy-loading 'flavor' on Instance uuid 8e009e75-a97b-4c5d-a470-5db1137cb407 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 24 09:56:05 compute-1 nova_compute[230010]: 2025-11-24 09:56:05.623 230014 DEBUG nova.virt.libvirt.vif [req-56fc6d37-cf30-4c14-a29e-b1cc85615ec9 req-cf01728a-61d3-4a6c-8614-d109bc7d0fb0 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-24T09:55:20Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1469091475',display_name='tempest-TestNetworkBasicOps-server-1469091475',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1469091475',id=3,image_ref='6ef14bdf-4f04-4400-8040-4409d9d5271e',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJZZQMjyTSwgkfidZfPhBPgBcV62YzWExHHXsl1BnsLfJjAX1c531QA8puLkgpD93eEa7lPae/Gh1kFnVkWZAW6FTPgZg7BzeD7RovkQcC7HReAVJUg962qa1kvY0rkgvg==',key_name='tempest-TestNetworkBasicOps-853741544',keypairs=<?>,launch_index=0,launched_at=2025-11-24T09:55:27Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata=<?>,migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='94d069fc040647d5a6e54894eec915fe',ramdisk_id='',reservation_id='r-mfq37y5p',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6ef14bdf-4f04-4400-8040-4409d9d5271e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1844071378',owner_user_name='tempest-TestNetworkBasicOps-1844071378-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-24T09:55:27Z,user_data=None,user_id='43f79ff3105e4372a3c095e8057d4f1f',uuid=8e009e75-a97b-4c5d-a470-5db1137cb407,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "faa80fbe-f017-47cd-96c8-ca0747a39410", "address": "fa:16:3e:23:de:6c", "network": {"id": "2dfea9d1-73f4-435f-ade1-dce53efe0c39", "bridge": "br-int", "label": "tempest-network-smoke--1540330946", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.29", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94d069fc040647d5a6e54894eec915fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfaa80fbe-f0", "ovs_interfaceid": "faa80fbe-f017-47cd-96c8-ca0747a39410", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 24 09:56:05 compute-1 nova_compute[230010]: 2025-11-24 09:56:05.624 230014 DEBUG nova.network.os_vif_util [req-56fc6d37-cf30-4c14-a29e-b1cc85615ec9 req-cf01728a-61d3-4a6c-8614-d109bc7d0fb0 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Converting VIF {"id": "faa80fbe-f017-47cd-96c8-ca0747a39410", "address": "fa:16:3e:23:de:6c", "network": {"id": "2dfea9d1-73f4-435f-ade1-dce53efe0c39", "bridge": "br-int", "label": "tempest-network-smoke--1540330946", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.29", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94d069fc040647d5a6e54894eec915fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfaa80fbe-f0", "ovs_interfaceid": "faa80fbe-f017-47cd-96c8-ca0747a39410", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 24 09:56:05 compute-1 nova_compute[230010]: 2025-11-24 09:56:05.624 230014 DEBUG nova.network.os_vif_util [req-56fc6d37-cf30-4c14-a29e-b1cc85615ec9 req-cf01728a-61d3-4a6c-8614-d109bc7d0fb0 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:23:de:6c,bridge_name='br-int',has_traffic_filtering=True,id=faa80fbe-f017-47cd-96c8-ca0747a39410,network=Network(2dfea9d1-73f4-435f-ade1-dce53efe0c39),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfaa80fbe-f0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 24 09:56:05 compute-1 nova_compute[230010]: 2025-11-24 09:56:05.627 230014 DEBUG nova.virt.libvirt.guest [req-56fc6d37-cf30-4c14-a29e-b1cc85615ec9 req-cf01728a-61d3-4a6c-8614-d109bc7d0fb0 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:23:de:6c"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapfaa80fbe-f0"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Nov 24 09:56:05 compute-1 nova_compute[230010]: 2025-11-24 09:56:05.630 230014 DEBUG nova.virt.libvirt.guest [req-56fc6d37-cf30-4c14-a29e-b1cc85615ec9 req-cf01728a-61d3-4a6c-8614-d109bc7d0fb0 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:23:de:6c"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapfaa80fbe-f0"/></interface> not found in domain: <domain type='kvm' id='2'>
Nov 24 09:56:05 compute-1 nova_compute[230010]:   <name>instance-00000003</name>
Nov 24 09:56:05 compute-1 nova_compute[230010]:   <uuid>8e009e75-a97b-4c5d-a470-5db1137cb407</uuid>
Nov 24 09:56:05 compute-1 nova_compute[230010]:   <metadata>
Nov 24 09:56:05 compute-1 nova_compute[230010]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 24 09:56:05 compute-1 nova_compute[230010]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:   <nova:name>tempest-TestNetworkBasicOps-server-1469091475</nova:name>
Nov 24 09:56:05 compute-1 nova_compute[230010]:   <nova:creationTime>2025-11-24 09:56:02</nova:creationTime>
Nov 24 09:56:05 compute-1 nova_compute[230010]:   <nova:flavor name="m1.nano">
Nov 24 09:56:05 compute-1 nova_compute[230010]:     <nova:memory>128</nova:memory>
Nov 24 09:56:05 compute-1 nova_compute[230010]:     <nova:disk>1</nova:disk>
Nov 24 09:56:05 compute-1 nova_compute[230010]:     <nova:swap>0</nova:swap>
Nov 24 09:56:05 compute-1 nova_compute[230010]:     <nova:ephemeral>0</nova:ephemeral>
Nov 24 09:56:05 compute-1 nova_compute[230010]:     <nova:vcpus>1</nova:vcpus>
Nov 24 09:56:05 compute-1 nova_compute[230010]:   </nova:flavor>
Nov 24 09:56:05 compute-1 nova_compute[230010]:   <nova:owner>
Nov 24 09:56:05 compute-1 nova_compute[230010]:     <nova:user uuid="43f79ff3105e4372a3c095e8057d4f1f">tempest-TestNetworkBasicOps-1844071378-project-member</nova:user>
Nov 24 09:56:05 compute-1 nova_compute[230010]:     <nova:project uuid="94d069fc040647d5a6e54894eec915fe">tempest-TestNetworkBasicOps-1844071378</nova:project>
Nov 24 09:56:05 compute-1 nova_compute[230010]:   </nova:owner>
Nov 24 09:56:05 compute-1 nova_compute[230010]:   <nova:root type="image" uuid="6ef14bdf-4f04-4400-8040-4409d9d5271e"/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:   <nova:ports>
Nov 24 09:56:05 compute-1 nova_compute[230010]:     <nova:port uuid="e962e27f-80bf-4103-98ae-d8af84c6fc28">
Nov 24 09:56:05 compute-1 nova_compute[230010]:       <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:     </nova:port>
Nov 24 09:56:05 compute-1 nova_compute[230010]:   </nova:ports>
Nov 24 09:56:05 compute-1 nova_compute[230010]: </nova:instance>
Nov 24 09:56:05 compute-1 nova_compute[230010]:   </metadata>
Nov 24 09:56:05 compute-1 nova_compute[230010]:   <memory unit='KiB'>131072</memory>
Nov 24 09:56:05 compute-1 nova_compute[230010]:   <currentMemory unit='KiB'>131072</currentMemory>
Nov 24 09:56:05 compute-1 nova_compute[230010]:   <vcpu placement='static'>1</vcpu>
Nov 24 09:56:05 compute-1 nova_compute[230010]:   <resource>
Nov 24 09:56:05 compute-1 nova_compute[230010]:     <partition>/machine</partition>
Nov 24 09:56:05 compute-1 nova_compute[230010]:   </resource>
Nov 24 09:56:05 compute-1 nova_compute[230010]:   <sysinfo type='smbios'>
Nov 24 09:56:05 compute-1 nova_compute[230010]:     <system>
Nov 24 09:56:05 compute-1 nova_compute[230010]:       <entry name='manufacturer'>RDO</entry>
Nov 24 09:56:05 compute-1 nova_compute[230010]:       <entry name='product'>OpenStack Compute</entry>
Nov 24 09:56:05 compute-1 nova_compute[230010]:       <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 24 09:56:05 compute-1 nova_compute[230010]:       <entry name='serial'>8e009e75-a97b-4c5d-a470-5db1137cb407</entry>
Nov 24 09:56:05 compute-1 nova_compute[230010]:       <entry name='uuid'>8e009e75-a97b-4c5d-a470-5db1137cb407</entry>
Nov 24 09:56:05 compute-1 nova_compute[230010]:       <entry name='family'>Virtual Machine</entry>
Nov 24 09:56:05 compute-1 nova_compute[230010]:     </system>
Nov 24 09:56:05 compute-1 nova_compute[230010]:   </sysinfo>
Nov 24 09:56:05 compute-1 nova_compute[230010]:   <os>
Nov 24 09:56:05 compute-1 nova_compute[230010]:     <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Nov 24 09:56:05 compute-1 nova_compute[230010]:     <boot dev='hd'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:     <smbios mode='sysinfo'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:   </os>
Nov 24 09:56:05 compute-1 nova_compute[230010]:   <features>
Nov 24 09:56:05 compute-1 nova_compute[230010]:     <acpi/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:     <apic/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:     <vmcoreinfo state='on'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:   </features>
Nov 24 09:56:05 compute-1 nova_compute[230010]:   <cpu mode='custom' match='exact' check='full'>
Nov 24 09:56:05 compute-1 nova_compute[230010]:     <model fallback='forbid'>EPYC-Rome</model>
Nov 24 09:56:05 compute-1 nova_compute[230010]:     <vendor>AMD</vendor>
Nov 24 09:56:05 compute-1 nova_compute[230010]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:     <feature policy='require' name='x2apic'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:     <feature policy='require' name='tsc-deadline'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:     <feature policy='require' name='hypervisor'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:     <feature policy='require' name='tsc_adjust'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:     <feature policy='require' name='spec-ctrl'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:     <feature policy='require' name='stibp'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:     <feature policy='require' name='ssbd'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:     <feature policy='require' name='cmp_legacy'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:     <feature policy='require' name='overflow-recov'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:     <feature policy='require' name='succor'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:     <feature policy='require' name='ibrs'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:     <feature policy='require' name='amd-ssbd'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:     <feature policy='require' name='virt-ssbd'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:     <feature policy='disable' name='lbrv'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:     <feature policy='disable' name='tsc-scale'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:     <feature policy='disable' name='vmcb-clean'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:     <feature policy='disable' name='flushbyasid'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:     <feature policy='disable' name='pause-filter'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:     <feature policy='disable' name='pfthreshold'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:     <feature policy='disable' name='svme-addr-chk'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:     <feature policy='require' name='lfence-always-serializing'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:     <feature policy='disable' name='xsaves'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:     <feature policy='disable' name='svm'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:     <feature policy='require' name='topoext'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:     <feature policy='disable' name='npt'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:     <feature policy='disable' name='nrip-save'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:   </cpu>
Nov 24 09:56:05 compute-1 nova_compute[230010]:   <clock offset='utc'>
Nov 24 09:56:05 compute-1 nova_compute[230010]:     <timer name='pit' tickpolicy='delay'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:     <timer name='rtc' tickpolicy='catchup'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:     <timer name='hpet' present='no'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:   </clock>
Nov 24 09:56:05 compute-1 nova_compute[230010]:   <on_poweroff>destroy</on_poweroff>
Nov 24 09:56:05 compute-1 nova_compute[230010]:   <on_reboot>restart</on_reboot>
Nov 24 09:56:05 compute-1 nova_compute[230010]:   <on_crash>destroy</on_crash>
Nov 24 09:56:05 compute-1 nova_compute[230010]:   <devices>
Nov 24 09:56:05 compute-1 nova_compute[230010]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 24 09:56:05 compute-1 nova_compute[230010]:     <disk type='network' device='disk'>
Nov 24 09:56:05 compute-1 nova_compute[230010]:       <driver name='qemu' type='raw' cache='none'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:       <auth username='openstack'>
Nov 24 09:56:05 compute-1 nova_compute[230010]:         <secret type='ceph' uuid='84a084c3-61a7-5de7-8207-1f88efa59a64'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:       </auth>
Nov 24 09:56:05 compute-1 nova_compute[230010]:       <source protocol='rbd' name='vms/8e009e75-a97b-4c5d-a470-5db1137cb407_disk' index='2'>
Nov 24 09:56:05 compute-1 nova_compute[230010]:         <host name='192.168.122.100' port='6789'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:         <host name='192.168.122.102' port='6789'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:         <host name='192.168.122.101' port='6789'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:       </source>
Nov 24 09:56:05 compute-1 nova_compute[230010]:       <target dev='vda' bus='virtio'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:       <alias name='virtio-disk0'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:     </disk>
Nov 24 09:56:05 compute-1 nova_compute[230010]:     <disk type='network' device='cdrom'>
Nov 24 09:56:05 compute-1 nova_compute[230010]:       <driver name='qemu' type='raw' cache='none'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:       <auth username='openstack'>
Nov 24 09:56:05 compute-1 nova_compute[230010]:         <secret type='ceph' uuid='84a084c3-61a7-5de7-8207-1f88efa59a64'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:       </auth>
Nov 24 09:56:05 compute-1 nova_compute[230010]:       <source protocol='rbd' name='vms/8e009e75-a97b-4c5d-a470-5db1137cb407_disk.config' index='1'>
Nov 24 09:56:05 compute-1 nova_compute[230010]:         <host name='192.168.122.100' port='6789'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:         <host name='192.168.122.102' port='6789'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:         <host name='192.168.122.101' port='6789'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:       </source>
Nov 24 09:56:05 compute-1 nova_compute[230010]:       <target dev='sda' bus='sata'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:       <readonly/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:       <alias name='sata0-0-0'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:     </disk>
Nov 24 09:56:05 compute-1 nova_compute[230010]:     <controller type='pci' index='0' model='pcie-root'>
Nov 24 09:56:05 compute-1 nova_compute[230010]:       <alias name='pcie.0'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:     </controller>
Nov 24 09:56:05 compute-1 nova_compute[230010]:     <controller type='pci' index='1' model='pcie-root-port'>
Nov 24 09:56:05 compute-1 nova_compute[230010]:       <model name='pcie-root-port'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:       <target chassis='1' port='0x10'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:       <alias name='pci.1'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:     </controller>
Nov 24 09:56:05 compute-1 nova_compute[230010]:     <controller type='pci' index='2' model='pcie-root-port'>
Nov 24 09:56:05 compute-1 nova_compute[230010]:       <model name='pcie-root-port'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:       <target chassis='2' port='0x11'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:       <alias name='pci.2'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:     </controller>
Nov 24 09:56:05 compute-1 nova_compute[230010]:     <controller type='pci' index='3' model='pcie-root-port'>
Nov 24 09:56:05 compute-1 nova_compute[230010]:       <model name='pcie-root-port'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:       <target chassis='3' port='0x12'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:       <alias name='pci.3'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:     </controller>
Nov 24 09:56:05 compute-1 nova_compute[230010]:     <controller type='pci' index='4' model='pcie-root-port'>
Nov 24 09:56:05 compute-1 nova_compute[230010]:       <model name='pcie-root-port'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:       <target chassis='4' port='0x13'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:       <alias name='pci.4'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:     </controller>
Nov 24 09:56:05 compute-1 nova_compute[230010]:     <controller type='pci' index='5' model='pcie-root-port'>
Nov 24 09:56:05 compute-1 nova_compute[230010]:       <model name='pcie-root-port'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:       <target chassis='5' port='0x14'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:       <alias name='pci.5'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:     </controller>
Nov 24 09:56:05 compute-1 nova_compute[230010]:     <controller type='pci' index='6' model='pcie-root-port'>
Nov 24 09:56:05 compute-1 nova_compute[230010]:       <model name='pcie-root-port'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:       <target chassis='6' port='0x15'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:       <alias name='pci.6'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:     </controller>
Nov 24 09:56:05 compute-1 nova_compute[230010]:     <controller type='pci' index='7' model='pcie-root-port'>
Nov 24 09:56:05 compute-1 nova_compute[230010]:       <model name='pcie-root-port'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:       <target chassis='7' port='0x16'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:       <alias name='pci.7'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:     </controller>
Nov 24 09:56:05 compute-1 nova_compute[230010]:     <controller type='pci' index='8' model='pcie-root-port'>
Nov 24 09:56:05 compute-1 nova_compute[230010]:       <model name='pcie-root-port'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:       <target chassis='8' port='0x17'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:       <alias name='pci.8'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:     </controller>
Nov 24 09:56:05 compute-1 nova_compute[230010]:     <controller type='pci' index='9' model='pcie-root-port'>
Nov 24 09:56:05 compute-1 nova_compute[230010]:       <model name='pcie-root-port'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:       <target chassis='9' port='0x18'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:       <alias name='pci.9'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:     </controller>
Nov 24 09:56:05 compute-1 nova_compute[230010]:     <controller type='pci' index='10' model='pcie-root-port'>
Nov 24 09:56:05 compute-1 nova_compute[230010]:       <model name='pcie-root-port'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:       <target chassis='10' port='0x19'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:       <alias name='pci.10'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:     </controller>
Nov 24 09:56:05 compute-1 nova_compute[230010]:     <controller type='pci' index='11' model='pcie-root-port'>
Nov 24 09:56:05 compute-1 nova_compute[230010]:       <model name='pcie-root-port'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:       <target chassis='11' port='0x1a'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:       <alias name='pci.11'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:     </controller>
Nov 24 09:56:05 compute-1 nova_compute[230010]:     <controller type='pci' index='12' model='pcie-root-port'>
Nov 24 09:56:05 compute-1 nova_compute[230010]:       <model name='pcie-root-port'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:       <target chassis='12' port='0x1b'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:       <alias name='pci.12'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:     </controller>
Nov 24 09:56:05 compute-1 nova_compute[230010]:     <controller type='pci' index='13' model='pcie-root-port'>
Nov 24 09:56:05 compute-1 nova_compute[230010]:       <model name='pcie-root-port'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:       <target chassis='13' port='0x1c'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:       <alias name='pci.13'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:     </controller>
Nov 24 09:56:05 compute-1 nova_compute[230010]:     <controller type='pci' index='14' model='pcie-root-port'>
Nov 24 09:56:05 compute-1 nova_compute[230010]:       <model name='pcie-root-port'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:       <target chassis='14' port='0x1d'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:       <alias name='pci.14'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:     </controller>
Nov 24 09:56:05 compute-1 nova_compute[230010]:     <controller type='pci' index='15' model='pcie-root-port'>
Nov 24 09:56:05 compute-1 nova_compute[230010]:       <model name='pcie-root-port'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:       <target chassis='15' port='0x1e'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:       <alias name='pci.15'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:     </controller>
Nov 24 09:56:05 compute-1 nova_compute[230010]:     <controller type='pci' index='16' model='pcie-root-port'>
Nov 24 09:56:05 compute-1 nova_compute[230010]:       <model name='pcie-root-port'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:       <target chassis='16' port='0x1f'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:       <alias name='pci.16'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:     </controller>
Nov 24 09:56:05 compute-1 nova_compute[230010]:     <controller type='pci' index='17' model='pcie-root-port'>
Nov 24 09:56:05 compute-1 nova_compute[230010]:       <model name='pcie-root-port'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:       <target chassis='17' port='0x20'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:       <alias name='pci.17'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:     </controller>
Nov 24 09:56:05 compute-1 nova_compute[230010]:     <controller type='pci' index='18' model='pcie-root-port'>
Nov 24 09:56:05 compute-1 nova_compute[230010]:       <model name='pcie-root-port'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:       <target chassis='18' port='0x21'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:       <alias name='pci.18'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:     </controller>
Nov 24 09:56:05 compute-1 nova_compute[230010]:     <controller type='pci' index='19' model='pcie-root-port'>
Nov 24 09:56:05 compute-1 nova_compute[230010]:       <model name='pcie-root-port'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:       <target chassis='19' port='0x22'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:       <alias name='pci.19'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:     </controller>
Nov 24 09:56:05 compute-1 nova_compute[230010]:     <controller type='pci' index='20' model='pcie-root-port'>
Nov 24 09:56:05 compute-1 nova_compute[230010]:       <model name='pcie-root-port'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:       <target chassis='20' port='0x23'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:       <alias name='pci.20'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:     </controller>
Nov 24 09:56:05 compute-1 nova_compute[230010]:     <controller type='pci' index='21' model='pcie-root-port'>
Nov 24 09:56:05 compute-1 nova_compute[230010]:       <model name='pcie-root-port'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:       <target chassis='21' port='0x24'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:       <alias name='pci.21'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:     </controller>
Nov 24 09:56:05 compute-1 nova_compute[230010]:     <controller type='pci' index='22' model='pcie-root-port'>
Nov 24 09:56:05 compute-1 nova_compute[230010]:       <model name='pcie-root-port'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:       <target chassis='22' port='0x25'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:       <alias name='pci.22'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:     </controller>
Nov 24 09:56:05 compute-1 nova_compute[230010]:     <controller type='pci' index='23' model='pcie-root-port'>
Nov 24 09:56:05 compute-1 nova_compute[230010]:       <model name='pcie-root-port'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:       <target chassis='23' port='0x26'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:       <alias name='pci.23'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:     </controller>
Nov 24 09:56:05 compute-1 nova_compute[230010]:     <controller type='pci' index='24' model='pcie-root-port'>
Nov 24 09:56:05 compute-1 nova_compute[230010]:       <model name='pcie-root-port'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:       <target chassis='24' port='0x27'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:       <alias name='pci.24'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:     </controller>
Nov 24 09:56:05 compute-1 nova_compute[230010]:     <controller type='pci' index='25' model='pcie-root-port'>
Nov 24 09:56:05 compute-1 nova_compute[230010]:       <model name='pcie-root-port'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:       <target chassis='25' port='0x28'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:       <alias name='pci.25'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:     </controller>
Nov 24 09:56:05 compute-1 nova_compute[230010]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Nov 24 09:56:05 compute-1 nova_compute[230010]:       <model name='pcie-pci-bridge'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:       <alias name='pci.26'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:     </controller>
Nov 24 09:56:05 compute-1 nova_compute[230010]:     <controller type='usb' index='0' model='piix3-uhci'>
Nov 24 09:56:05 compute-1 nova_compute[230010]:       <alias name='usb'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:     </controller>
Nov 24 09:56:05 compute-1 nova_compute[230010]:     <controller type='sata' index='0'>
Nov 24 09:56:05 compute-1 nova_compute[230010]:       <alias name='ide'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:     </controller>
Nov 24 09:56:05 compute-1 nova_compute[230010]:     <interface type='ethernet'>
Nov 24 09:56:05 compute-1 nova_compute[230010]:       <mac address='fa:16:3e:a4:f1:71'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:       <target dev='tape962e27f-80'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:       <model type='virtio'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:       <driver name='vhost' rx_queue_size='512'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:       <mtu size='1442'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:       <alias name='net0'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:     </interface>
Nov 24 09:56:05 compute-1 nova_compute[230010]:     <serial type='pty'>
Nov 24 09:56:05 compute-1 nova_compute[230010]:       <source path='/dev/pts/0'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:       <log file='/var/lib/nova/instances/8e009e75-a97b-4c5d-a470-5db1137cb407/console.log' append='off'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:       <target type='isa-serial' port='0'>
Nov 24 09:56:05 compute-1 nova_compute[230010]:         <model name='isa-serial'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:       </target>
Nov 24 09:56:05 compute-1 nova_compute[230010]:       <alias name='serial0'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:     </serial>
Nov 24 09:56:05 compute-1 nova_compute[230010]:     <console type='pty' tty='/dev/pts/0'>
Nov 24 09:56:05 compute-1 nova_compute[230010]:       <source path='/dev/pts/0'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:       <log file='/var/lib/nova/instances/8e009e75-a97b-4c5d-a470-5db1137cb407/console.log' append='off'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:       <target type='serial' port='0'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:       <alias name='serial0'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:     </console>
Nov 24 09:56:05 compute-1 nova_compute[230010]:     <input type='tablet' bus='usb'>
Nov 24 09:56:05 compute-1 nova_compute[230010]:       <alias name='input0'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:       <address type='usb' bus='0' port='1'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:     </input>
Nov 24 09:56:05 compute-1 nova_compute[230010]:     <input type='mouse' bus='ps2'>
Nov 24 09:56:05 compute-1 nova_compute[230010]:       <alias name='input1'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:     </input>
Nov 24 09:56:05 compute-1 nova_compute[230010]:     <input type='keyboard' bus='ps2'>
Nov 24 09:56:05 compute-1 nova_compute[230010]:       <alias name='input2'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:     </input>
Nov 24 09:56:05 compute-1 nova_compute[230010]:     <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Nov 24 09:56:05 compute-1 nova_compute[230010]:       <listen type='address' address='::0'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:     </graphics>
Nov 24 09:56:05 compute-1 nova_compute[230010]:     <audio id='1' type='none'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:     <video>
Nov 24 09:56:05 compute-1 nova_compute[230010]:       <model type='virtio' heads='1' primary='yes'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:       <alias name='video0'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:     </video>
Nov 24 09:56:05 compute-1 nova_compute[230010]:     <watchdog model='itco' action='reset'>
Nov 24 09:56:05 compute-1 nova_compute[230010]:       <alias name='watchdog0'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:     </watchdog>
Nov 24 09:56:05 compute-1 nova_compute[230010]:     <memballoon model='virtio'>
Nov 24 09:56:05 compute-1 nova_compute[230010]:       <stats period='10'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:       <alias name='balloon0'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:     </memballoon>
Nov 24 09:56:05 compute-1 nova_compute[230010]:     <rng model='virtio'>
Nov 24 09:56:05 compute-1 nova_compute[230010]:       <backend model='random'>/dev/urandom</backend>
Nov 24 09:56:05 compute-1 nova_compute[230010]:       <alias name='rng0'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:     </rng>
Nov 24 09:56:05 compute-1 nova_compute[230010]:   </devices>
Nov 24 09:56:05 compute-1 nova_compute[230010]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Nov 24 09:56:05 compute-1 nova_compute[230010]:     <label>system_u:system_r:svirt_t:s0:c684,c1005</label>
Nov 24 09:56:05 compute-1 nova_compute[230010]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c684,c1005</imagelabel>
Nov 24 09:56:05 compute-1 nova_compute[230010]:   </seclabel>
Nov 24 09:56:05 compute-1 nova_compute[230010]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Nov 24 09:56:05 compute-1 nova_compute[230010]:     <label>+107:+107</label>
Nov 24 09:56:05 compute-1 nova_compute[230010]:     <imagelabel>+107:+107</imagelabel>
Nov 24 09:56:05 compute-1 nova_compute[230010]:   </seclabel>
Nov 24 09:56:05 compute-1 nova_compute[230010]: </domain>
Nov 24 09:56:05 compute-1 nova_compute[230010]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Nov 24 09:56:05 compute-1 nova_compute[230010]: 2025-11-24 09:56:05.632 230014 DEBUG nova.virt.libvirt.guest [req-56fc6d37-cf30-4c14-a29e-b1cc85615ec9 req-cf01728a-61d3-4a6c-8614-d109bc7d0fb0 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:23:de:6c"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapfaa80fbe-f0"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Nov 24 09:56:05 compute-1 nova_compute[230010]: 2025-11-24 09:56:05.634 230014 DEBUG nova.virt.libvirt.guest [req-56fc6d37-cf30-4c14-a29e-b1cc85615ec9 req-cf01728a-61d3-4a6c-8614-d109bc7d0fb0 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:23:de:6c"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapfaa80fbe-f0"/></interface> not found in domain: <domain type='kvm' id='2'>
Nov 24 09:56:05 compute-1 nova_compute[230010]:   <name>instance-00000003</name>
Nov 24 09:56:05 compute-1 nova_compute[230010]:   <uuid>8e009e75-a97b-4c5d-a470-5db1137cb407</uuid>
Nov 24 09:56:05 compute-1 nova_compute[230010]:   <metadata>
Nov 24 09:56:05 compute-1 nova_compute[230010]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 24 09:56:05 compute-1 nova_compute[230010]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:   <nova:name>tempest-TestNetworkBasicOps-server-1469091475</nova:name>
Nov 24 09:56:05 compute-1 nova_compute[230010]:   <nova:creationTime>2025-11-24 09:56:02</nova:creationTime>
Nov 24 09:56:05 compute-1 nova_compute[230010]:   <nova:flavor name="m1.nano">
Nov 24 09:56:05 compute-1 nova_compute[230010]:     <nova:memory>128</nova:memory>
Nov 24 09:56:05 compute-1 nova_compute[230010]:     <nova:disk>1</nova:disk>
Nov 24 09:56:05 compute-1 nova_compute[230010]:     <nova:swap>0</nova:swap>
Nov 24 09:56:05 compute-1 nova_compute[230010]:     <nova:ephemeral>0</nova:ephemeral>
Nov 24 09:56:05 compute-1 nova_compute[230010]:     <nova:vcpus>1</nova:vcpus>
Nov 24 09:56:05 compute-1 nova_compute[230010]:   </nova:flavor>
Nov 24 09:56:05 compute-1 nova_compute[230010]:   <nova:owner>
Nov 24 09:56:05 compute-1 nova_compute[230010]:     <nova:user uuid="43f79ff3105e4372a3c095e8057d4f1f">tempest-TestNetworkBasicOps-1844071378-project-member</nova:user>
Nov 24 09:56:05 compute-1 nova_compute[230010]:     <nova:project uuid="94d069fc040647d5a6e54894eec915fe">tempest-TestNetworkBasicOps-1844071378</nova:project>
Nov 24 09:56:05 compute-1 nova_compute[230010]:   </nova:owner>
Nov 24 09:56:05 compute-1 nova_compute[230010]:   <nova:root type="image" uuid="6ef14bdf-4f04-4400-8040-4409d9d5271e"/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:   <nova:ports>
Nov 24 09:56:05 compute-1 nova_compute[230010]:     <nova:port uuid="e962e27f-80bf-4103-98ae-d8af84c6fc28">
Nov 24 09:56:05 compute-1 nova_compute[230010]:       <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:     </nova:port>
Nov 24 09:56:05 compute-1 nova_compute[230010]:   </nova:ports>
Nov 24 09:56:05 compute-1 nova_compute[230010]: </nova:instance>
Nov 24 09:56:05 compute-1 nova_compute[230010]:   </metadata>
Nov 24 09:56:05 compute-1 nova_compute[230010]:   <memory unit='KiB'>131072</memory>
Nov 24 09:56:05 compute-1 nova_compute[230010]:   <currentMemory unit='KiB'>131072</currentMemory>
Nov 24 09:56:05 compute-1 nova_compute[230010]:   <vcpu placement='static'>1</vcpu>
Nov 24 09:56:05 compute-1 nova_compute[230010]:   <resource>
Nov 24 09:56:05 compute-1 nova_compute[230010]:     <partition>/machine</partition>
Nov 24 09:56:05 compute-1 nova_compute[230010]:   </resource>
Nov 24 09:56:05 compute-1 nova_compute[230010]:   <sysinfo type='smbios'>
Nov 24 09:56:05 compute-1 nova_compute[230010]:     <system>
Nov 24 09:56:05 compute-1 nova_compute[230010]:       <entry name='manufacturer'>RDO</entry>
Nov 24 09:56:05 compute-1 nova_compute[230010]:       <entry name='product'>OpenStack Compute</entry>
Nov 24 09:56:05 compute-1 nova_compute[230010]:       <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 24 09:56:05 compute-1 nova_compute[230010]:       <entry name='serial'>8e009e75-a97b-4c5d-a470-5db1137cb407</entry>
Nov 24 09:56:05 compute-1 nova_compute[230010]:       <entry name='uuid'>8e009e75-a97b-4c5d-a470-5db1137cb407</entry>
Nov 24 09:56:05 compute-1 nova_compute[230010]:       <entry name='family'>Virtual Machine</entry>
Nov 24 09:56:05 compute-1 nova_compute[230010]:     </system>
Nov 24 09:56:05 compute-1 nova_compute[230010]:   </sysinfo>
Nov 24 09:56:05 compute-1 nova_compute[230010]:   <os>
Nov 24 09:56:05 compute-1 nova_compute[230010]:     <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Nov 24 09:56:05 compute-1 nova_compute[230010]:     <boot dev='hd'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:     <smbios mode='sysinfo'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:   </os>
Nov 24 09:56:05 compute-1 nova_compute[230010]:   <features>
Nov 24 09:56:05 compute-1 nova_compute[230010]:     <acpi/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:     <apic/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:     <vmcoreinfo state='on'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:   </features>
Nov 24 09:56:05 compute-1 nova_compute[230010]:   <cpu mode='custom' match='exact' check='full'>
Nov 24 09:56:05 compute-1 nova_compute[230010]:     <model fallback='forbid'>EPYC-Rome</model>
Nov 24 09:56:05 compute-1 nova_compute[230010]:     <vendor>AMD</vendor>
Nov 24 09:56:05 compute-1 nova_compute[230010]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:     <feature policy='require' name='x2apic'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:     <feature policy='require' name='tsc-deadline'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:     <feature policy='require' name='hypervisor'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:     <feature policy='require' name='tsc_adjust'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:     <feature policy='require' name='spec-ctrl'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:     <feature policy='require' name='stibp'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:     <feature policy='require' name='ssbd'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:     <feature policy='require' name='cmp_legacy'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:     <feature policy='require' name='overflow-recov'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:     <feature policy='require' name='succor'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:     <feature policy='require' name='ibrs'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:     <feature policy='require' name='amd-ssbd'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:     <feature policy='require' name='virt-ssbd'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:     <feature policy='disable' name='lbrv'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:     <feature policy='disable' name='tsc-scale'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:     <feature policy='disable' name='vmcb-clean'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:     <feature policy='disable' name='flushbyasid'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:     <feature policy='disable' name='pause-filter'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:     <feature policy='disable' name='pfthreshold'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:     <feature policy='disable' name='svme-addr-chk'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:     <feature policy='require' name='lfence-always-serializing'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:     <feature policy='disable' name='xsaves'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:     <feature policy='disable' name='svm'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:     <feature policy='require' name='topoext'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:     <feature policy='disable' name='npt'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:     <feature policy='disable' name='nrip-save'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:   </cpu>
Nov 24 09:56:05 compute-1 nova_compute[230010]:   <clock offset='utc'>
Nov 24 09:56:05 compute-1 nova_compute[230010]:     <timer name='pit' tickpolicy='delay'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:     <timer name='rtc' tickpolicy='catchup'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:     <timer name='hpet' present='no'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:   </clock>
Nov 24 09:56:05 compute-1 nova_compute[230010]:   <on_poweroff>destroy</on_poweroff>
Nov 24 09:56:05 compute-1 nova_compute[230010]:   <on_reboot>restart</on_reboot>
Nov 24 09:56:05 compute-1 nova_compute[230010]:   <on_crash>destroy</on_crash>
Nov 24 09:56:05 compute-1 nova_compute[230010]:   <devices>
Nov 24 09:56:05 compute-1 nova_compute[230010]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 24 09:56:05 compute-1 nova_compute[230010]:     <disk type='network' device='disk'>
Nov 24 09:56:05 compute-1 nova_compute[230010]:       <driver name='qemu' type='raw' cache='none'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:       <auth username='openstack'>
Nov 24 09:56:05 compute-1 nova_compute[230010]:         <secret type='ceph' uuid='84a084c3-61a7-5de7-8207-1f88efa59a64'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:       </auth>
Nov 24 09:56:05 compute-1 nova_compute[230010]:       <source protocol='rbd' name='vms/8e009e75-a97b-4c5d-a470-5db1137cb407_disk' index='2'>
Nov 24 09:56:05 compute-1 nova_compute[230010]:         <host name='192.168.122.100' port='6789'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:         <host name='192.168.122.102' port='6789'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:         <host name='192.168.122.101' port='6789'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:       </source>
Nov 24 09:56:05 compute-1 nova_compute[230010]:       <target dev='vda' bus='virtio'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:       <alias name='virtio-disk0'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:     </disk>
Nov 24 09:56:05 compute-1 nova_compute[230010]:     <disk type='network' device='cdrom'>
Nov 24 09:56:05 compute-1 nova_compute[230010]:       <driver name='qemu' type='raw' cache='none'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:       <auth username='openstack'>
Nov 24 09:56:05 compute-1 nova_compute[230010]:         <secret type='ceph' uuid='84a084c3-61a7-5de7-8207-1f88efa59a64'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:       </auth>
Nov 24 09:56:05 compute-1 nova_compute[230010]:       <source protocol='rbd' name='vms/8e009e75-a97b-4c5d-a470-5db1137cb407_disk.config' index='1'>
Nov 24 09:56:05 compute-1 nova_compute[230010]:         <host name='192.168.122.100' port='6789'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:         <host name='192.168.122.102' port='6789'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:         <host name='192.168.122.101' port='6789'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:       </source>
Nov 24 09:56:05 compute-1 nova_compute[230010]:       <target dev='sda' bus='sata'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:       <readonly/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:       <alias name='sata0-0-0'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:     </disk>
Nov 24 09:56:05 compute-1 nova_compute[230010]:     <controller type='pci' index='0' model='pcie-root'>
Nov 24 09:56:05 compute-1 nova_compute[230010]:       <alias name='pcie.0'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:     </controller>
Nov 24 09:56:05 compute-1 nova_compute[230010]:     <controller type='pci' index='1' model='pcie-root-port'>
Nov 24 09:56:05 compute-1 nova_compute[230010]:       <model name='pcie-root-port'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:       <target chassis='1' port='0x10'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:       <alias name='pci.1'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:     </controller>
Nov 24 09:56:05 compute-1 nova_compute[230010]:     <controller type='pci' index='2' model='pcie-root-port'>
Nov 24 09:56:05 compute-1 nova_compute[230010]:       <model name='pcie-root-port'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:       <target chassis='2' port='0x11'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:       <alias name='pci.2'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:     </controller>
Nov 24 09:56:05 compute-1 nova_compute[230010]:     <controller type='pci' index='3' model='pcie-root-port'>
Nov 24 09:56:05 compute-1 nova_compute[230010]:       <model name='pcie-root-port'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:       <target chassis='3' port='0x12'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:       <alias name='pci.3'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:     </controller>
Nov 24 09:56:05 compute-1 nova_compute[230010]:     <controller type='pci' index='4' model='pcie-root-port'>
Nov 24 09:56:05 compute-1 nova_compute[230010]:       <model name='pcie-root-port'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:       <target chassis='4' port='0x13'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:       <alias name='pci.4'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:     </controller>
Nov 24 09:56:05 compute-1 nova_compute[230010]:     <controller type='pci' index='5' model='pcie-root-port'>
Nov 24 09:56:05 compute-1 nova_compute[230010]:       <model name='pcie-root-port'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:       <target chassis='5' port='0x14'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:       <alias name='pci.5'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:     </controller>
Nov 24 09:56:05 compute-1 nova_compute[230010]:     <controller type='pci' index='6' model='pcie-root-port'>
Nov 24 09:56:05 compute-1 nova_compute[230010]:       <model name='pcie-root-port'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:       <target chassis='6' port='0x15'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:       <alias name='pci.6'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:     </controller>
Nov 24 09:56:05 compute-1 nova_compute[230010]:     <controller type='pci' index='7' model='pcie-root-port'>
Nov 24 09:56:05 compute-1 nova_compute[230010]:       <model name='pcie-root-port'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:       <target chassis='7' port='0x16'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:       <alias name='pci.7'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:     </controller>
Nov 24 09:56:05 compute-1 nova_compute[230010]:     <controller type='pci' index='8' model='pcie-root-port'>
Nov 24 09:56:05 compute-1 nova_compute[230010]:       <model name='pcie-root-port'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:       <target chassis='8' port='0x17'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:       <alias name='pci.8'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:     </controller>
Nov 24 09:56:05 compute-1 nova_compute[230010]:     <controller type='pci' index='9' model='pcie-root-port'>
Nov 24 09:56:05 compute-1 nova_compute[230010]:       <model name='pcie-root-port'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:       <target chassis='9' port='0x18'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:       <alias name='pci.9'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:     </controller>
Nov 24 09:56:05 compute-1 nova_compute[230010]:     <controller type='pci' index='10' model='pcie-root-port'>
Nov 24 09:56:05 compute-1 nova_compute[230010]:       <model name='pcie-root-port'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:       <target chassis='10' port='0x19'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:       <alias name='pci.10'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:     </controller>
Nov 24 09:56:05 compute-1 nova_compute[230010]:     <controller type='pci' index='11' model='pcie-root-port'>
Nov 24 09:56:05 compute-1 nova_compute[230010]:       <model name='pcie-root-port'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:       <target chassis='11' port='0x1a'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:       <alias name='pci.11'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:     </controller>
Nov 24 09:56:05 compute-1 nova_compute[230010]:     <controller type='pci' index='12' model='pcie-root-port'>
Nov 24 09:56:05 compute-1 nova_compute[230010]:       <model name='pcie-root-port'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:       <target chassis='12' port='0x1b'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:       <alias name='pci.12'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:     </controller>
Nov 24 09:56:05 compute-1 nova_compute[230010]:     <controller type='pci' index='13' model='pcie-root-port'>
Nov 24 09:56:05 compute-1 nova_compute[230010]:       <model name='pcie-root-port'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:       <target chassis='13' port='0x1c'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:       <alias name='pci.13'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:     </controller>
Nov 24 09:56:05 compute-1 nova_compute[230010]:     <controller type='pci' index='14' model='pcie-root-port'>
Nov 24 09:56:05 compute-1 nova_compute[230010]:       <model name='pcie-root-port'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:       <target chassis='14' port='0x1d'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:       <alias name='pci.14'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:     </controller>
Nov 24 09:56:05 compute-1 nova_compute[230010]:     <controller type='pci' index='15' model='pcie-root-port'>
Nov 24 09:56:05 compute-1 nova_compute[230010]:       <model name='pcie-root-port'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:       <target chassis='15' port='0x1e'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:       <alias name='pci.15'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:     </controller>
Nov 24 09:56:05 compute-1 nova_compute[230010]:     <controller type='pci' index='16' model='pcie-root-port'>
Nov 24 09:56:05 compute-1 nova_compute[230010]:       <model name='pcie-root-port'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:       <target chassis='16' port='0x1f'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:       <alias name='pci.16'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:     </controller>
Nov 24 09:56:05 compute-1 nova_compute[230010]:     <controller type='pci' index='17' model='pcie-root-port'>
Nov 24 09:56:05 compute-1 nova_compute[230010]:       <model name='pcie-root-port'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:       <target chassis='17' port='0x20'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:       <alias name='pci.17'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:     </controller>
Nov 24 09:56:05 compute-1 nova_compute[230010]:     <controller type='pci' index='18' model='pcie-root-port'>
Nov 24 09:56:05 compute-1 nova_compute[230010]:       <model name='pcie-root-port'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:       <target chassis='18' port='0x21'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:       <alias name='pci.18'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:     </controller>
Nov 24 09:56:05 compute-1 nova_compute[230010]:     <controller type='pci' index='19' model='pcie-root-port'>
Nov 24 09:56:05 compute-1 nova_compute[230010]:       <model name='pcie-root-port'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:       <target chassis='19' port='0x22'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:       <alias name='pci.19'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:     </controller>
Nov 24 09:56:05 compute-1 nova_compute[230010]:     <controller type='pci' index='20' model='pcie-root-port'>
Nov 24 09:56:05 compute-1 nova_compute[230010]:       <model name='pcie-root-port'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:       <target chassis='20' port='0x23'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:       <alias name='pci.20'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:     </controller>
Nov 24 09:56:05 compute-1 nova_compute[230010]:     <controller type='pci' index='21' model='pcie-root-port'>
Nov 24 09:56:05 compute-1 nova_compute[230010]:       <model name='pcie-root-port'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:       <target chassis='21' port='0x24'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:       <alias name='pci.21'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:     </controller>
Nov 24 09:56:05 compute-1 nova_compute[230010]:     <controller type='pci' index='22' model='pcie-root-port'>
Nov 24 09:56:05 compute-1 nova_compute[230010]:       <model name='pcie-root-port'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:       <target chassis='22' port='0x25'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:       <alias name='pci.22'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:     </controller>
Nov 24 09:56:05 compute-1 nova_compute[230010]:     <controller type='pci' index='23' model='pcie-root-port'>
Nov 24 09:56:05 compute-1 nova_compute[230010]:       <model name='pcie-root-port'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:       <target chassis='23' port='0x26'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:       <alias name='pci.23'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:     </controller>
Nov 24 09:56:05 compute-1 nova_compute[230010]:     <controller type='pci' index='24' model='pcie-root-port'>
Nov 24 09:56:05 compute-1 nova_compute[230010]:       <model name='pcie-root-port'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:       <target chassis='24' port='0x27'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:       <alias name='pci.24'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:     </controller>
Nov 24 09:56:05 compute-1 nova_compute[230010]:     <controller type='pci' index='25' model='pcie-root-port'>
Nov 24 09:56:05 compute-1 nova_compute[230010]:       <model name='pcie-root-port'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:       <target chassis='25' port='0x28'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:       <alias name='pci.25'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:     </controller>
Nov 24 09:56:05 compute-1 nova_compute[230010]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Nov 24 09:56:05 compute-1 nova_compute[230010]:       <model name='pcie-pci-bridge'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:       <alias name='pci.26'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:     </controller>
Nov 24 09:56:05 compute-1 nova_compute[230010]:     <controller type='usb' index='0' model='piix3-uhci'>
Nov 24 09:56:05 compute-1 nova_compute[230010]:       <alias name='usb'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:     </controller>
Nov 24 09:56:05 compute-1 nova_compute[230010]:     <controller type='sata' index='0'>
Nov 24 09:56:05 compute-1 nova_compute[230010]:       <alias name='ide'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:     </controller>
Nov 24 09:56:05 compute-1 nova_compute[230010]:     <interface type='ethernet'>
Nov 24 09:56:05 compute-1 nova_compute[230010]:       <mac address='fa:16:3e:a4:f1:71'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:       <target dev='tape962e27f-80'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:       <model type='virtio'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:       <driver name='vhost' rx_queue_size='512'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:       <mtu size='1442'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:       <alias name='net0'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:     </interface>
Nov 24 09:56:05 compute-1 nova_compute[230010]:     <serial type='pty'>
Nov 24 09:56:05 compute-1 nova_compute[230010]:       <source path='/dev/pts/0'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:       <log file='/var/lib/nova/instances/8e009e75-a97b-4c5d-a470-5db1137cb407/console.log' append='off'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:       <target type='isa-serial' port='0'>
Nov 24 09:56:05 compute-1 nova_compute[230010]:         <model name='isa-serial'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:       </target>
Nov 24 09:56:05 compute-1 nova_compute[230010]:       <alias name='serial0'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:     </serial>
Nov 24 09:56:05 compute-1 nova_compute[230010]:     <console type='pty' tty='/dev/pts/0'>
Nov 24 09:56:05 compute-1 nova_compute[230010]:       <source path='/dev/pts/0'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:       <log file='/var/lib/nova/instances/8e009e75-a97b-4c5d-a470-5db1137cb407/console.log' append='off'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:       <target type='serial' port='0'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:       <alias name='serial0'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:     </console>
Nov 24 09:56:05 compute-1 nova_compute[230010]:     <input type='tablet' bus='usb'>
Nov 24 09:56:05 compute-1 nova_compute[230010]:       <alias name='input0'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:       <address type='usb' bus='0' port='1'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:     </input>
Nov 24 09:56:05 compute-1 nova_compute[230010]:     <input type='mouse' bus='ps2'>
Nov 24 09:56:05 compute-1 nova_compute[230010]:       <alias name='input1'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:     </input>
Nov 24 09:56:05 compute-1 nova_compute[230010]:     <input type='keyboard' bus='ps2'>
Nov 24 09:56:05 compute-1 nova_compute[230010]:       <alias name='input2'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:     </input>
Nov 24 09:56:05 compute-1 nova_compute[230010]:     <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Nov 24 09:56:05 compute-1 nova_compute[230010]:       <listen type='address' address='::0'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:     </graphics>
Nov 24 09:56:05 compute-1 nova_compute[230010]:     <audio id='1' type='none'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:     <video>
Nov 24 09:56:05 compute-1 nova_compute[230010]:       <model type='virtio' heads='1' primary='yes'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:       <alias name='video0'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:     </video>
Nov 24 09:56:05 compute-1 nova_compute[230010]:     <watchdog model='itco' action='reset'>
Nov 24 09:56:05 compute-1 nova_compute[230010]:       <alias name='watchdog0'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:     </watchdog>
Nov 24 09:56:05 compute-1 nova_compute[230010]:     <memballoon model='virtio'>
Nov 24 09:56:05 compute-1 nova_compute[230010]:       <stats period='10'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:       <alias name='balloon0'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:     </memballoon>
Nov 24 09:56:05 compute-1 nova_compute[230010]:     <rng model='virtio'>
Nov 24 09:56:05 compute-1 nova_compute[230010]:       <backend model='random'>/dev/urandom</backend>
Nov 24 09:56:05 compute-1 nova_compute[230010]:       <alias name='rng0'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:     </rng>
Nov 24 09:56:05 compute-1 nova_compute[230010]:   </devices>
Nov 24 09:56:05 compute-1 nova_compute[230010]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Nov 24 09:56:05 compute-1 nova_compute[230010]:     <label>system_u:system_r:svirt_t:s0:c684,c1005</label>
Nov 24 09:56:05 compute-1 nova_compute[230010]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c684,c1005</imagelabel>
Nov 24 09:56:05 compute-1 nova_compute[230010]:   </seclabel>
Nov 24 09:56:05 compute-1 nova_compute[230010]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Nov 24 09:56:05 compute-1 nova_compute[230010]:     <label>+107:+107</label>
Nov 24 09:56:05 compute-1 nova_compute[230010]:     <imagelabel>+107:+107</imagelabel>
Nov 24 09:56:05 compute-1 nova_compute[230010]:   </seclabel>
Nov 24 09:56:05 compute-1 nova_compute[230010]: </domain>
Nov 24 09:56:05 compute-1 nova_compute[230010]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Nov 24 09:56:05 compute-1 nova_compute[230010]: 2025-11-24 09:56:05.636 230014 WARNING nova.virt.libvirt.driver [req-56fc6d37-cf30-4c14-a29e-b1cc85615ec9 req-cf01728a-61d3-4a6c-8614-d109bc7d0fb0 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 8e009e75-a97b-4c5d-a470-5db1137cb407] Detaching interface fa:16:3e:23:de:6c failed because the device is no longer found on the guest.: nova.exception.DeviceNotFound: Device 'tapfaa80fbe-f0' not found.
Nov 24 09:56:05 compute-1 nova_compute[230010]: 2025-11-24 09:56:05.637 230014 DEBUG nova.virt.libvirt.vif [req-56fc6d37-cf30-4c14-a29e-b1cc85615ec9 req-cf01728a-61d3-4a6c-8614-d109bc7d0fb0 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-24T09:55:20Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1469091475',display_name='tempest-TestNetworkBasicOps-server-1469091475',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1469091475',id=3,image_ref='6ef14bdf-4f04-4400-8040-4409d9d5271e',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJZZQMjyTSwgkfidZfPhBPgBcV62YzWExHHXsl1BnsLfJjAX1c531QA8puLkgpD93eEa7lPae/Gh1kFnVkWZAW6FTPgZg7BzeD7RovkQcC7HReAVJUg962qa1kvY0rkgvg==',key_name='tempest-TestNetworkBasicOps-853741544',keypairs=<?>,launch_index=0,launched_at=2025-11-24T09:55:27Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata=<?>,migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='94d069fc040647d5a6e54894eec915fe',ramdisk_id='',reservation_id='r-mfq37y5p',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6ef14bdf-4f04-4400-8040-4409d9d5271e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1844071378',owner_user_name='tempest-TestNetworkBasicOps-1844071378-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-24T09:55:27Z,user_data=None,user_id='43f79ff3105e4372a3c095e8057d4f1f',uuid=8e009e75-a97b-4c5d-a470-5db1137cb407,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "faa80fbe-f017-47cd-96c8-ca0747a39410", "address": "fa:16:3e:23:de:6c", "network": {"id": "2dfea9d1-73f4-435f-ade1-dce53efe0c39", "bridge": "br-int", "label": "tempest-network-smoke--1540330946", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.29", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94d069fc040647d5a6e54894eec915fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfaa80fbe-f0", "ovs_interfaceid": "faa80fbe-f017-47cd-96c8-ca0747a39410", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 24 09:56:05 compute-1 nova_compute[230010]: 2025-11-24 09:56:05.638 230014 DEBUG nova.network.os_vif_util [req-56fc6d37-cf30-4c14-a29e-b1cc85615ec9 req-cf01728a-61d3-4a6c-8614-d109bc7d0fb0 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Converting VIF {"id": "faa80fbe-f017-47cd-96c8-ca0747a39410", "address": "fa:16:3e:23:de:6c", "network": {"id": "2dfea9d1-73f4-435f-ade1-dce53efe0c39", "bridge": "br-int", "label": "tempest-network-smoke--1540330946", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.29", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94d069fc040647d5a6e54894eec915fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfaa80fbe-f0", "ovs_interfaceid": "faa80fbe-f017-47cd-96c8-ca0747a39410", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 24 09:56:05 compute-1 nova_compute[230010]: 2025-11-24 09:56:05.638 230014 DEBUG nova.network.os_vif_util [req-56fc6d37-cf30-4c14-a29e-b1cc85615ec9 req-cf01728a-61d3-4a6c-8614-d109bc7d0fb0 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:23:de:6c,bridge_name='br-int',has_traffic_filtering=True,id=faa80fbe-f017-47cd-96c8-ca0747a39410,network=Network(2dfea9d1-73f4-435f-ade1-dce53efe0c39),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfaa80fbe-f0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 24 09:56:05 compute-1 nova_compute[230010]: 2025-11-24 09:56:05.639 230014 DEBUG os_vif [req-56fc6d37-cf30-4c14-a29e-b1cc85615ec9 req-cf01728a-61d3-4a6c-8614-d109bc7d0fb0 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:23:de:6c,bridge_name='br-int',has_traffic_filtering=True,id=faa80fbe-f017-47cd-96c8-ca0747a39410,network=Network(2dfea9d1-73f4-435f-ade1-dce53efe0c39),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfaa80fbe-f0') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 24 09:56:05 compute-1 nova_compute[230010]: 2025-11-24 09:56:05.641 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:56:05 compute-1 nova_compute[230010]: 2025-11-24 09:56:05.641 230014 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfaa80fbe-f0, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 24 09:56:05 compute-1 nova_compute[230010]: 2025-11-24 09:56:05.641 230014 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 24 09:56:05 compute-1 nova_compute[230010]: 2025-11-24 09:56:05.644 230014 INFO os_vif [req-56fc6d37-cf30-4c14-a29e-b1cc85615ec9 req-cf01728a-61d3-4a6c-8614-d109bc7d0fb0 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:23:de:6c,bridge_name='br-int',has_traffic_filtering=True,id=faa80fbe-f017-47cd-96c8-ca0747a39410,network=Network(2dfea9d1-73f4-435f-ade1-dce53efe0c39),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfaa80fbe-f0')
Nov 24 09:56:05 compute-1 nova_compute[230010]: 2025-11-24 09:56:05.645 230014 DEBUG nova.virt.libvirt.guest [req-56fc6d37-cf30-4c14-a29e-b1cc85615ec9 req-cf01728a-61d3-4a6c-8614-d109bc7d0fb0 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 24 09:56:05 compute-1 nova_compute[230010]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:   <nova:name>tempest-TestNetworkBasicOps-server-1469091475</nova:name>
Nov 24 09:56:05 compute-1 nova_compute[230010]:   <nova:creationTime>2025-11-24 09:56:05</nova:creationTime>
Nov 24 09:56:05 compute-1 nova_compute[230010]:   <nova:flavor name="m1.nano">
Nov 24 09:56:05 compute-1 nova_compute[230010]:     <nova:memory>128</nova:memory>
Nov 24 09:56:05 compute-1 nova_compute[230010]:     <nova:disk>1</nova:disk>
Nov 24 09:56:05 compute-1 nova_compute[230010]:     <nova:swap>0</nova:swap>
Nov 24 09:56:05 compute-1 nova_compute[230010]:     <nova:ephemeral>0</nova:ephemeral>
Nov 24 09:56:05 compute-1 nova_compute[230010]:     <nova:vcpus>1</nova:vcpus>
Nov 24 09:56:05 compute-1 nova_compute[230010]:   </nova:flavor>
Nov 24 09:56:05 compute-1 nova_compute[230010]:   <nova:owner>
Nov 24 09:56:05 compute-1 nova_compute[230010]:     <nova:user uuid="43f79ff3105e4372a3c095e8057d4f1f">tempest-TestNetworkBasicOps-1844071378-project-member</nova:user>
Nov 24 09:56:05 compute-1 nova_compute[230010]:     <nova:project uuid="94d069fc040647d5a6e54894eec915fe">tempest-TestNetworkBasicOps-1844071378</nova:project>
Nov 24 09:56:05 compute-1 nova_compute[230010]:   </nova:owner>
Nov 24 09:56:05 compute-1 nova_compute[230010]:   <nova:root type="image" uuid="6ef14bdf-4f04-4400-8040-4409d9d5271e"/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:   <nova:ports>
Nov 24 09:56:05 compute-1 nova_compute[230010]:     <nova:port uuid="e962e27f-80bf-4103-98ae-d8af84c6fc28">
Nov 24 09:56:05 compute-1 nova_compute[230010]:       <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Nov 24 09:56:05 compute-1 nova_compute[230010]:     </nova:port>
Nov 24 09:56:05 compute-1 nova_compute[230010]:   </nova:ports>
Nov 24 09:56:05 compute-1 nova_compute[230010]: </nova:instance>
Nov 24 09:56:05 compute-1 nova_compute[230010]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
Nov 24 09:56:05 compute-1 nova_compute[230010]: 2025-11-24 09:56:05.771 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:56:06 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:56:06 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:56:06 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:56:06.093 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:56:06 compute-1 podman[236306]: 2025-11-24 09:56:06.340347251 +0000 UTC m=+0.083465572 container health_status c4a7fece2a8abbd773f184380ebf0443df5298e1c3e7a580495db5c46749acd2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible)
Nov 24 09:56:06 compute-1 sudo[236333]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 09:56:06 compute-1 sudo[236333]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:56:06 compute-1 sudo[236333]: pam_unix(sudo:session): session closed for user root
Nov 24 09:56:06 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:56:06 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:56:06 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:56:06.661 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:56:07 compute-1 ceph-mon[80009]: pgmap v876: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 3.2 KiB/s wr, 1 op/s
Nov 24 09:56:07 compute-1 nova_compute[230010]: 2025-11-24 09:56:07.481 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:56:07 compute-1 ovn_controller[132966]: 2025-11-24T09:56:07Z|00051|binding|INFO|Releasing lport 9bf2a93f-cf2b-4180-87cd-4fecaf4abe0b from this chassis (sb_readonly=0)
Nov 24 09:56:07 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 24 09:56:07 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 24 09:56:07 compute-1 nova_compute[230010]: 2025-11-24 09:56:07.758 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:56:07 compute-1 sudo[236358]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 24 09:56:07 compute-1 sudo[236358]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:56:07 compute-1 sudo[236358]: pam_unix(sudo:session): session closed for user root
Nov 24 09:56:08 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:56:08 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:56:08 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:56:08.096 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:56:08 compute-1 nova_compute[230010]: 2025-11-24 09:56:08.316 230014 INFO nova.network.neutron [None req-1f39edd7-ccec-46ee-b7bf-91daf5c48308 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 8e009e75-a97b-4c5d-a470-5db1137cb407] Port faa80fbe-f017-47cd-96c8-ca0747a39410 from network info_cache is no longer associated with instance in Neutron. Removing from network info_cache.
Nov 24 09:56:08 compute-1 nova_compute[230010]: 2025-11-24 09:56:08.316 230014 DEBUG nova.network.neutron [None req-1f39edd7-ccec-46ee-b7bf-91daf5c48308 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 8e009e75-a97b-4c5d-a470-5db1137cb407] Updating instance_info_cache with network_info: [{"id": "e962e27f-80bf-4103-98ae-d8af84c6fc28", "address": "fa:16:3e:a4:f1:71", "network": {"id": "636fec29-e18e-45f1-aabc-369f5fd0d593", "bridge": "br-int", "label": "tempest-network-smoke--778674541", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.182", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94d069fc040647d5a6e54894eec915fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape962e27f-80", "ovs_interfaceid": "e962e27f-80bf-4103-98ae-d8af84c6fc28", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 24 09:56:08 compute-1 nova_compute[230010]: 2025-11-24 09:56:08.334 230014 DEBUG oslo_concurrency.lockutils [None req-1f39edd7-ccec-46ee-b7bf-91daf5c48308 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Releasing lock "refresh_cache-8e009e75-a97b-4c5d-a470-5db1137cb407" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 24 09:56:08 compute-1 nova_compute[230010]: 2025-11-24 09:56:08.350 230014 DEBUG oslo_concurrency.lockutils [None req-1f39edd7-ccec-46ee-b7bf-91daf5c48308 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lock "interface-8e009e75-a97b-4c5d-a470-5db1137cb407-faa80fbe-f017-47cd-96c8-ca0747a39410" "released" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: held 5.997s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 09:56:08 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:56:08 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:56:08 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:56:08.663 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:56:08 compute-1 ceph-mon[80009]: pgmap v877: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 9.6 KiB/s rd, 3.2 KiB/s wr, 2 op/s
Nov 24 09:56:08 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:56:08 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:56:08 compute-1 nova_compute[230010]: 2025-11-24 09:56:08.781 230014 DEBUG nova.compute.manager [req-49bc461f-a8bb-4dff-8cfe-08d8b377f561 req-20fdecd2-0244-4b6a-a5d5-099475744c71 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 8e009e75-a97b-4c5d-a470-5db1137cb407] Received event network-changed-e962e27f-80bf-4103-98ae-d8af84c6fc28 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 24 09:56:08 compute-1 nova_compute[230010]: 2025-11-24 09:56:08.782 230014 DEBUG nova.compute.manager [req-49bc461f-a8bb-4dff-8cfe-08d8b377f561 req-20fdecd2-0244-4b6a-a5d5-099475744c71 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 8e009e75-a97b-4c5d-a470-5db1137cb407] Refreshing instance network info cache due to event network-changed-e962e27f-80bf-4103-98ae-d8af84c6fc28. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 24 09:56:08 compute-1 nova_compute[230010]: 2025-11-24 09:56:08.782 230014 DEBUG oslo_concurrency.lockutils [req-49bc461f-a8bb-4dff-8cfe-08d8b377f561 req-20fdecd2-0244-4b6a-a5d5-099475744c71 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Acquiring lock "refresh_cache-8e009e75-a97b-4c5d-a470-5db1137cb407" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 24 09:56:08 compute-1 nova_compute[230010]: 2025-11-24 09:56:08.782 230014 DEBUG oslo_concurrency.lockutils [req-49bc461f-a8bb-4dff-8cfe-08d8b377f561 req-20fdecd2-0244-4b6a-a5d5-099475744c71 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Acquired lock "refresh_cache-8e009e75-a97b-4c5d-a470-5db1137cb407" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 24 09:56:08 compute-1 nova_compute[230010]: 2025-11-24 09:56:08.782 230014 DEBUG nova.network.neutron [req-49bc461f-a8bb-4dff-8cfe-08d8b377f561 req-20fdecd2-0244-4b6a-a5d5-099475744c71 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 8e009e75-a97b-4c5d-a470-5db1137cb407] Refreshing network info cache for port e962e27f-80bf-4103-98ae-d8af84c6fc28 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 24 09:56:08 compute-1 nova_compute[230010]: 2025-11-24 09:56:08.845 230014 DEBUG oslo_concurrency.lockutils [None req-75711f6a-8e6a-4c6f-9b06-d9eaf423d2c7 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Acquiring lock "8e009e75-a97b-4c5d-a470-5db1137cb407" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 09:56:08 compute-1 nova_compute[230010]: 2025-11-24 09:56:08.845 230014 DEBUG oslo_concurrency.lockutils [None req-75711f6a-8e6a-4c6f-9b06-d9eaf423d2c7 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lock "8e009e75-a97b-4c5d-a470-5db1137cb407" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 09:56:08 compute-1 nova_compute[230010]: 2025-11-24 09:56:08.846 230014 DEBUG oslo_concurrency.lockutils [None req-75711f6a-8e6a-4c6f-9b06-d9eaf423d2c7 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Acquiring lock "8e009e75-a97b-4c5d-a470-5db1137cb407-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 09:56:08 compute-1 nova_compute[230010]: 2025-11-24 09:56:08.846 230014 DEBUG oslo_concurrency.lockutils [None req-75711f6a-8e6a-4c6f-9b06-d9eaf423d2c7 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lock "8e009e75-a97b-4c5d-a470-5db1137cb407-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 09:56:08 compute-1 nova_compute[230010]: 2025-11-24 09:56:08.846 230014 DEBUG oslo_concurrency.lockutils [None req-75711f6a-8e6a-4c6f-9b06-d9eaf423d2c7 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lock "8e009e75-a97b-4c5d-a470-5db1137cb407-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 09:56:08 compute-1 nova_compute[230010]: 2025-11-24 09:56:08.847 230014 INFO nova.compute.manager [None req-75711f6a-8e6a-4c6f-9b06-d9eaf423d2c7 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 8e009e75-a97b-4c5d-a470-5db1137cb407] Terminating instance
Nov 24 09:56:08 compute-1 nova_compute[230010]: 2025-11-24 09:56:08.848 230014 DEBUG nova.compute.manager [None req-75711f6a-8e6a-4c6f-9b06-d9eaf423d2c7 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 8e009e75-a97b-4c5d-a470-5db1137cb407] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 24 09:56:08 compute-1 kernel: tape962e27f-80 (unregistering): left promiscuous mode
Nov 24 09:56:08 compute-1 NetworkManager[48870]: <info>  [1763978168.8897] device (tape962e27f-80): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 24 09:56:08 compute-1 nova_compute[230010]: 2025-11-24 09:56:08.935 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:56:08 compute-1 ovn_controller[132966]: 2025-11-24T09:56:08Z|00052|binding|INFO|Releasing lport e962e27f-80bf-4103-98ae-d8af84c6fc28 from this chassis (sb_readonly=0)
Nov 24 09:56:08 compute-1 ovn_controller[132966]: 2025-11-24T09:56:08Z|00053|binding|INFO|Setting lport e962e27f-80bf-4103-98ae-d8af84c6fc28 down in Southbound
Nov 24 09:56:08 compute-1 ovn_controller[132966]: 2025-11-24T09:56:08Z|00054|binding|INFO|Removing iface tape962e27f-80 ovn-installed in OVS
Nov 24 09:56:08 compute-1 nova_compute[230010]: 2025-11-24 09:56:08.937 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:56:08 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:56:08.947 142336 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a4:f1:71 10.100.0.6'], port_security=['fa:16:3e:a4:f1:71 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '8e009e75-a97b-4c5d-a470-5db1137cb407', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-636fec29-e18e-45f1-aabc-369f5fd0d593', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '94d069fc040647d5a6e54894eec915fe', 'neutron:revision_number': '4', 'neutron:security_group_ids': '08677d44-dac1-4cc6-ac2a-f951a8415b1a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=cab95497-29d8-4481-acd1-a71d08bb0310, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5c78678ac0>], logical_port=e962e27f-80bf-4103-98ae-d8af84c6fc28) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5c78678ac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 24 09:56:08 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:56:08.949 142336 INFO neutron.agent.ovn.metadata.agent [-] Port e962e27f-80bf-4103-98ae-d8af84c6fc28 in datapath 636fec29-e18e-45f1-aabc-369f5fd0d593 unbound from our chassis
Nov 24 09:56:08 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:56:08.950 142336 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 636fec29-e18e-45f1-aabc-369f5fd0d593, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 24 09:56:08 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:56:08.951 234803 DEBUG oslo.privsep.daemon [-] privsep: reply[d47bfd85-fcff-4842-bdab-6ba8a3aa665d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 09:56:08 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:56:08.951 142336 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-636fec29-e18e-45f1-aabc-369f5fd0d593 namespace which is not needed anymore
Nov 24 09:56:08 compute-1 nova_compute[230010]: 2025-11-24 09:56:08.953 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:56:08 compute-1 systemd[1]: machine-qemu\x2d2\x2dinstance\x2d00000003.scope: Deactivated successfully.
Nov 24 09:56:08 compute-1 systemd[1]: machine-qemu\x2d2\x2dinstance\x2d00000003.scope: Consumed 14.671s CPU time.
Nov 24 09:56:08 compute-1 systemd-machined[193537]: Machine qemu-2-instance-00000003 terminated.
Nov 24 09:56:09 compute-1 neutron-haproxy-ovnmeta-636fec29-e18e-45f1-aabc-369f5fd0d593[235902]: [NOTICE]   (235909) : haproxy version is 2.8.14-c23fe91
Nov 24 09:56:09 compute-1 neutron-haproxy-ovnmeta-636fec29-e18e-45f1-aabc-369f5fd0d593[235902]: [NOTICE]   (235909) : path to executable is /usr/sbin/haproxy
Nov 24 09:56:09 compute-1 neutron-haproxy-ovnmeta-636fec29-e18e-45f1-aabc-369f5fd0d593[235902]: [WARNING]  (235909) : Exiting Master process...
Nov 24 09:56:09 compute-1 neutron-haproxy-ovnmeta-636fec29-e18e-45f1-aabc-369f5fd0d593[235902]: [ALERT]    (235909) : Current worker (235912) exited with code 143 (Terminated)
Nov 24 09:56:09 compute-1 neutron-haproxy-ovnmeta-636fec29-e18e-45f1-aabc-369f5fd0d593[235902]: [WARNING]  (235909) : All workers exited. Exiting... (0)
Nov 24 09:56:09 compute-1 systemd[1]: libpod-2102cc4a70d7571ed66844281c81a17c454c21383f4d3318219469d58eddbcbf.scope: Deactivated successfully.
Nov 24 09:56:09 compute-1 nova_compute[230010]: 2025-11-24 09:56:09.077 230014 INFO nova.virt.libvirt.driver [-] [instance: 8e009e75-a97b-4c5d-a470-5db1137cb407] Instance destroyed successfully.
Nov 24 09:56:09 compute-1 nova_compute[230010]: 2025-11-24 09:56:09.077 230014 DEBUG nova.objects.instance [None req-75711f6a-8e6a-4c6f-9b06-d9eaf423d2c7 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lazy-loading 'resources' on Instance uuid 8e009e75-a97b-4c5d-a470-5db1137cb407 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 24 09:56:09 compute-1 podman[236411]: 2025-11-24 09:56:09.079793837 +0000 UTC m=+0.043898654 container died 2102cc4a70d7571ed66844281c81a17c454c21383f4d3318219469d58eddbcbf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-636fec29-e18e-45f1-aabc-369f5fd0d593, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 24 09:56:09 compute-1 nova_compute[230010]: 2025-11-24 09:56:09.092 230014 DEBUG nova.virt.libvirt.vif [None req-75711f6a-8e6a-4c6f-9b06-d9eaf423d2c7 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-24T09:55:20Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1469091475',display_name='tempest-TestNetworkBasicOps-server-1469091475',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1469091475',id=3,image_ref='6ef14bdf-4f04-4400-8040-4409d9d5271e',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJZZQMjyTSwgkfidZfPhBPgBcV62YzWExHHXsl1BnsLfJjAX1c531QA8puLkgpD93eEa7lPae/Gh1kFnVkWZAW6FTPgZg7BzeD7RovkQcC7HReAVJUg962qa1kvY0rkgvg==',key_name='tempest-TestNetworkBasicOps-853741544',keypairs=<?>,launch_index=0,launched_at=2025-11-24T09:55:27Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='94d069fc040647d5a6e54894eec915fe',ramdisk_id='',reservation_id='r-mfq37y5p',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6ef14bdf-4f04-4400-8040-4409d9d5271e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1844071378',owner_user_name='tempest-TestNetworkBasicOps-1844071378-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-24T09:55:27Z,user_data=None,user_id='43f79ff3105e4372a3c095e8057d4f1f',uuid=8e009e75-a97b-4c5d-a470-5db1137cb407,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "e962e27f-80bf-4103-98ae-d8af84c6fc28", "address": "fa:16:3e:a4:f1:71", "network": {"id": "636fec29-e18e-45f1-aabc-369f5fd0d593", "bridge": "br-int", "label": "tempest-network-smoke--778674541", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.182", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94d069fc040647d5a6e54894eec915fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape962e27f-80", "ovs_interfaceid": "e962e27f-80bf-4103-98ae-d8af84c6fc28", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 24 09:56:09 compute-1 nova_compute[230010]: 2025-11-24 09:56:09.092 230014 DEBUG nova.network.os_vif_util [None req-75711f6a-8e6a-4c6f-9b06-d9eaf423d2c7 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Converting VIF {"id": "e962e27f-80bf-4103-98ae-d8af84c6fc28", "address": "fa:16:3e:a4:f1:71", "network": {"id": "636fec29-e18e-45f1-aabc-369f5fd0d593", "bridge": "br-int", "label": "tempest-network-smoke--778674541", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.182", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94d069fc040647d5a6e54894eec915fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape962e27f-80", "ovs_interfaceid": "e962e27f-80bf-4103-98ae-d8af84c6fc28", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 24 09:56:09 compute-1 nova_compute[230010]: 2025-11-24 09:56:09.093 230014 DEBUG nova.network.os_vif_util [None req-75711f6a-8e6a-4c6f-9b06-d9eaf423d2c7 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:a4:f1:71,bridge_name='br-int',has_traffic_filtering=True,id=e962e27f-80bf-4103-98ae-d8af84c6fc28,network=Network(636fec29-e18e-45f1-aabc-369f5fd0d593),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape962e27f-80') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 24 09:56:09 compute-1 nova_compute[230010]: 2025-11-24 09:56:09.093 230014 DEBUG os_vif [None req-75711f6a-8e6a-4c6f-9b06-d9eaf423d2c7 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:a4:f1:71,bridge_name='br-int',has_traffic_filtering=True,id=e962e27f-80bf-4103-98ae-d8af84c6fc28,network=Network(636fec29-e18e-45f1-aabc-369f5fd0d593),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape962e27f-80') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
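The unplug above goes through os-vif's public entry point (the record cites os_vif/__init__.py:109). A minimal sketch of that call path, assuming the os-vif package and a live br-int; field values are copied from the VIFOpenVSwitch object logged above, and the full object's Network reference is omitted for brevity:

    import os_vif
    from os_vif import objects

    os_vif.initialize()  # loads the "ovs" plugin, among others

    # Values taken from the VIFOpenVSwitch repr in the log; this is a reduced
    # sketch, not the complete object nova builds.
    vif = objects.vif.VIFOpenVSwitch(
        id="e962e27f-80bf-4103-98ae-d8af84c6fc28",
        address="fa:16:3e:a4:f1:71",
        bridge_name="br-int",
        vif_name="tape962e27f-80",
        plugin="ovs",
    )
    instance_info = objects.instance_info.InstanceInfo(
        uuid="8e009e75-a97b-4c5d-a470-5db1137cb407",
        name="tempest-TestNetworkBasicOps-server-1469091475",
    )
    os_vif.unplug(vif, instance_info)  # delegates to the "ovs" plugin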
Nov 24 09:56:09 compute-1 nova_compute[230010]: 2025-11-24 09:56:09.094 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:56:09 compute-1 nova_compute[230010]: 2025-11-24 09:56:09.095 230014 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape962e27f-80, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
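DelPortCommand(if_exists=True) is ovsdbapp's transactional form of an idempotent OVS port removal. A rough CLI-based sketch of the same effect, with the port and bridge names taken from the transaction above:

    import subprocess

    def del_port(port: str, bridge: str = "br-int") -> None:
        # --if-exists mirrors if_exists=True: removing an already-gone port
        # is not an error.
        subprocess.run(
            ["ovs-vsctl", "--if-exists", "del-port", bridge, port],
            check=True,
        )

    del_port("tape962e27f-80")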
Nov 24 09:56:09 compute-1 nova_compute[230010]: 2025-11-24 09:56:09.096 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:56:09 compute-1 nova_compute[230010]: 2025-11-24 09:56:09.097 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:56:09 compute-1 nova_compute[230010]: 2025-11-24 09:56:09.099 230014 INFO os_vif [None req-75711f6a-8e6a-4c6f-9b06-d9eaf423d2c7 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:a4:f1:71,bridge_name='br-int',has_traffic_filtering=True,id=e962e27f-80bf-4103-98ae-d8af84c6fc28,network=Network(636fec29-e18e-45f1-aabc-369f5fd0d593),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape962e27f-80')
Nov 24 09:56:09 compute-1 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-2102cc4a70d7571ed66844281c81a17c454c21383f4d3318219469d58eddbcbf-userdata-shm.mount: Deactivated successfully.
Nov 24 09:56:09 compute-1 systemd[1]: var-lib-containers-storage-overlay-d411d6d830a7518401f278920271906d8077c3d66c4ff88eb3a65c883bda73c0-merged.mount: Deactivated successfully.
Nov 24 09:56:09 compute-1 podman[236411]: 2025-11-24 09:56:09.117748925 +0000 UTC m=+0.081853742 container cleanup 2102cc4a70d7571ed66844281c81a17c454c21383f4d3318219469d58eddbcbf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-636fec29-e18e-45f1-aabc-369f5fd0d593, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 24 09:56:09 compute-1 systemd[1]: libpod-conmon-2102cc4a70d7571ed66844281c81a17c454c21383f4d3318219469d58eddbcbf.scope: Deactivated successfully.
Nov 24 09:56:09 compute-1 podman[236469]: 2025-11-24 09:56:09.18174519 +0000 UTC m=+0.044668054 container remove 2102cc4a70d7571ed66844281c81a17c454c21383f4d3318219469d58eddbcbf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-636fec29-e18e-45f1-aabc-369f5fd0d593, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2)
Nov 24 09:56:09 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:56:09.187 234803 DEBUG oslo.privsep.daemon [-] privsep: reply[de71a360-f523-4776-9b55-5ea4dafd112b]: (4, ('Mon Nov 24 09:56:09 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-636fec29-e18e-45f1-aabc-369f5fd0d593 (2102cc4a70d7571ed66844281c81a17c454c21383f4d3318219469d58eddbcbf)\n2102cc4a70d7571ed66844281c81a17c454c21383f4d3318219469d58eddbcbf\nMon Nov 24 09:56:09 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-636fec29-e18e-45f1-aabc-369f5fd0d593 (2102cc4a70d7571ed66844281c81a17c454c21383f4d3318219469d58eddbcbf)\n2102cc4a70d7571ed66844281c81a17c454c21383f4d3318219469d58eddbcbf\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
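The privsep reply above carries the kill-script's own output: the per-network HAProxy sidecar is stopped, then deleted. The same two steps done by hand, using the container name from the record:

    import subprocess

    name = "neutron-haproxy-ovnmeta-636fec29-e18e-45f1-aabc-369f5fd0d593"
    subprocess.run(["podman", "stop", name], check=True)  # "Stopping container ..."
    subprocess.run(["podman", "rm", name], check=True)    # "Deleting container ..."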
Nov 24 09:56:09 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:56:09.189 234803 DEBUG oslo.privsep.daemon [-] privsep: reply[e5e8248f-8427-403a-87cc-822bffeb205d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 09:56:09 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:56:09.190 142336 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap636fec29-e0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 24 09:56:09 compute-1 kernel: tap636fec29-e0: left promiscuous mode
Nov 24 09:56:09 compute-1 nova_compute[230010]: 2025-11-24 09:56:09.192 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:56:09 compute-1 nova_compute[230010]: 2025-11-24 09:56:09.193 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:56:09 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:56:09.197 234803 DEBUG oslo.privsep.daemon [-] privsep: reply[5eb97ecc-e1f9-41cd-aae4-e666448b6678]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 09:56:09 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:56:09.200 142336 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=6, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '86:13:51', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '4e:f0:a8:6f:5e:1b'}, ipsec=False) old=SB_Global(nb_cfg=5) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 24 09:56:09 compute-1 nova_compute[230010]: 2025-11-24 09:56:09.206 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:56:09 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:56:09.215 234803 DEBUG oslo.privsep.daemon [-] privsep: reply[9faa5f49-5c43-48b4-b786-7cf76c788699]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 09:56:09 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:56:09.216 234803 DEBUG oslo.privsep.daemon [-] privsep: reply[d138ee04-b4d4-48ef-a821-6be533a76708]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 09:56:09 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:56:09.231 234803 DEBUG oslo.privsep.daemon [-] privsep: reply[d7dc07a5-937d-4ff6-b84d-918ef71451ac]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 407032, 'reachable_time': 18648, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 236488, 'error': None, 'target': 'ovnmeta-636fec29-e18e-45f1-aabc-369f5fd0d593', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 09:56:09 compute-1 systemd[1]: run-netns-ovnmeta\x2d636fec29\x2de18e\x2d45f1\x2daabc\x2d369f5fd0d593.mount: Deactivated successfully.
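The mount unit name above is a systemd-escaped path: "/" becomes "-" and a literal "-" becomes "\x2d" (what systemd-escape --unescape --path reverses). A small decoder that recovers the namespace mount point:

    import re

    unit = r"run-netns-ovnmeta\x2d636fec29\x2de18e\x2d45f1\x2daabc\x2d369f5fd0d593"
    path = "/" + unit.replace("-", "/")  # separators first; \x.. escapes are untouched
    path = re.sub(r"\\x([0-9a-fA-F]{2})", lambda m: chr(int(m.group(1), 16)), path)
    print(path)  # /run/netns/ovnmeta-636fec29-e18e-45f1-aabc-369f5fd0d593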
Nov 24 09:56:09 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:56:09.234 142476 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-636fec29-e18e-45f1-aabc-369f5fd0d593 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 24 09:56:09 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:56:09.234 142476 DEBUG oslo.privsep.daemon [-] privsep: reply[99bfec1b-747c-427a-a04a-deda899f164f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
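remove_netns in neutron's privileged ip_lib boils down to deleting the named namespace via pyroute2, the equivalent of `ip netns delete`. A sketch of the same removal, assuming root privileges and the namespace name from the record above:

    from pyroute2 import netns

    # Same namespace the agent reports deleting above.
    netns.remove("ovnmeta-636fec29-e18e-45f1-aabc-369f5fd0d593")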
Nov 24 09:56:09 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:56:09.235 142336 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 24 09:56:09 compute-1 nova_compute[230010]: 2025-11-24 09:56:09.366 230014 DEBUG nova.compute.manager [req-f316f06d-aba4-41ad-a258-c175537ef3fb req-291b25d9-305e-4b29-a5fe-08ce3ef64740 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 8e009e75-a97b-4c5d-a470-5db1137cb407] Received event network-vif-unplugged-e962e27f-80bf-4103-98ae-d8af84c6fc28 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 24 09:56:09 compute-1 nova_compute[230010]: 2025-11-24 09:56:09.366 230014 DEBUG oslo_concurrency.lockutils [req-f316f06d-aba4-41ad-a258-c175537ef3fb req-291b25d9-305e-4b29-a5fe-08ce3ef64740 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Acquiring lock "8e009e75-a97b-4c5d-a470-5db1137cb407-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 09:56:09 compute-1 nova_compute[230010]: 2025-11-24 09:56:09.366 230014 DEBUG oslo_concurrency.lockutils [req-f316f06d-aba4-41ad-a258-c175537ef3fb req-291b25d9-305e-4b29-a5fe-08ce3ef64740 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Lock "8e009e75-a97b-4c5d-a470-5db1137cb407-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 09:56:09 compute-1 nova_compute[230010]: 2025-11-24 09:56:09.366 230014 DEBUG oslo_concurrency.lockutils [req-f316f06d-aba4-41ad-a258-c175537ef3fb req-291b25d9-305e-4b29-a5fe-08ce3ef64740 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Lock "8e009e75-a97b-4c5d-a470-5db1137cb407-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
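The Acquiring/acquired/released trio above is oslo.concurrency's standard trace for a named in-process lock held just long enough to pop the event. The underlying pattern, with the per-instance lock name from the log:

    from oslo_concurrency import lockutils

    with lockutils.lock("8e009e75-a97b-4c5d-a470-5db1137cb407-events"):
        # critical section: _pop_event() pops any waiter registered for
        # network-vif-unplugged-e962e27f-80bf-4103-98ae-d8af84c6fc28
        pass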
Nov 24 09:56:09 compute-1 nova_compute[230010]: 2025-11-24 09:56:09.367 230014 DEBUG nova.compute.manager [req-f316f06d-aba4-41ad-a258-c175537ef3fb req-291b25d9-305e-4b29-a5fe-08ce3ef64740 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 8e009e75-a97b-4c5d-a470-5db1137cb407] No waiting events found dispatching network-vif-unplugged-e962e27f-80bf-4103-98ae-d8af84c6fc28 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 24 09:56:09 compute-1 nova_compute[230010]: 2025-11-24 09:56:09.367 230014 DEBUG nova.compute.manager [req-f316f06d-aba4-41ad-a258-c175537ef3fb req-291b25d9-305e-4b29-a5fe-08ce3ef64740 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 8e009e75-a97b-4c5d-a470-5db1137cb407] Received event network-vif-unplugged-e962e27f-80bf-4103-98ae-d8af84c6fc28 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 24 09:56:09 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:56:09 compute-1 nova_compute[230010]: 2025-11-24 09:56:09.475 230014 INFO nova.virt.libvirt.driver [None req-75711f6a-8e6a-4c6f-9b06-d9eaf423d2c7 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 8e009e75-a97b-4c5d-a470-5db1137cb407] Deleting instance files /var/lib/nova/instances/8e009e75-a97b-4c5d-a470-5db1137cb407_del
Nov 24 09:56:09 compute-1 nova_compute[230010]: 2025-11-24 09:56:09.475 230014 INFO nova.virt.libvirt.driver [None req-75711f6a-8e6a-4c6f-9b06-d9eaf423d2c7 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 8e009e75-a97b-4c5d-a470-5db1137cb407] Deletion of /var/lib/nova/instances/8e009e75-a97b-4c5d-a470-5db1137cb407_del complete
Nov 24 09:56:09 compute-1 nova_compute[230010]: 2025-11-24 09:56:09.536 230014 INFO nova.compute.manager [None req-75711f6a-8e6a-4c6f-9b06-d9eaf423d2c7 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 8e009e75-a97b-4c5d-a470-5db1137cb407] Took 0.69 seconds to destroy the instance on the hypervisor.
Nov 24 09:56:09 compute-1 nova_compute[230010]: 2025-11-24 09:56:09.536 230014 DEBUG oslo.service.loopingcall [None req-75711f6a-8e6a-4c6f-9b06-d9eaf423d2c7 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 24 09:56:09 compute-1 nova_compute[230010]: 2025-11-24 09:56:09.536 230014 DEBUG nova.compute.manager [-] [instance: 8e009e75-a97b-4c5d-a470-5db1137cb407] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 24 09:56:09 compute-1 nova_compute[230010]: 2025-11-24 09:56:09.537 230014 DEBUG nova.network.neutron [-] [instance: 8e009e75-a97b-4c5d-a470-5db1137cb407] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 24 09:56:10 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:56:10 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:56:10 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:56:10.099 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
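The radosgw beast access-log lines above follow a fixed field order (client, user, timestamp, request, status, bytes, latency). A minimal parser, assuming that layout holds:

    import re

    BEAST = re.compile(
        r'beast: \S+: (?P<client>\S+) - (?P<user>\S+) '
        r'\[(?P<ts>[^\]]+)\] "(?P<req>[^"]+)" (?P<status>\d+) (?P<size>\d+)'
    )
    line = ('beast: 0x7fa9789055d0: 192.168.122.100 - anonymous '
            '[24/Nov/2025:09:56:10.099 +0000] "HEAD / HTTP/1.0" 200 0 '
            '- - - latency=0.000000000s')
    m = BEAST.search(line)
    print(m.group("client"), m.group("req"), m.group("status"))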
Nov 24 09:56:10 compute-1 nova_compute[230010]: 2025-11-24 09:56:10.528 230014 DEBUG nova.network.neutron [-] [instance: 8e009e75-a97b-4c5d-a470-5db1137cb407] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 24 09:56:10 compute-1 nova_compute[230010]: 2025-11-24 09:56:10.542 230014 INFO nova.compute.manager [-] [instance: 8e009e75-a97b-4c5d-a470-5db1137cb407] Took 1.01 seconds to deallocate network for instance.
Nov 24 09:56:10 compute-1 nova_compute[230010]: 2025-11-24 09:56:10.570 230014 DEBUG nova.network.neutron [req-49bc461f-a8bb-4dff-8cfe-08d8b377f561 req-20fdecd2-0244-4b6a-a5d5-099475744c71 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 8e009e75-a97b-4c5d-a470-5db1137cb407] Updated VIF entry in instance network info cache for port e962e27f-80bf-4103-98ae-d8af84c6fc28. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 24 09:56:10 compute-1 nova_compute[230010]: 2025-11-24 09:56:10.570 230014 DEBUG nova.network.neutron [req-49bc461f-a8bb-4dff-8cfe-08d8b377f561 req-20fdecd2-0244-4b6a-a5d5-099475744c71 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 8e009e75-a97b-4c5d-a470-5db1137cb407] Updating instance_info_cache with network_info: [{"id": "e962e27f-80bf-4103-98ae-d8af84c6fc28", "address": "fa:16:3e:a4:f1:71", "network": {"id": "636fec29-e18e-45f1-aabc-369f5fd0d593", "bridge": "br-int", "label": "tempest-network-smoke--778674541", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94d069fc040647d5a6e54894eec915fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape962e27f-80", "ovs_interfaceid": "e962e27f-80bf-4103-98ae-d8af84c6fc28", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 24 09:56:10 compute-1 nova_compute[230010]: 2025-11-24 09:56:10.644 230014 DEBUG oslo_concurrency.lockutils [None req-75711f6a-8e6a-4c6f-9b06-d9eaf423d2c7 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 09:56:10 compute-1 nova_compute[230010]: 2025-11-24 09:56:10.645 230014 DEBUG oslo_concurrency.lockutils [None req-75711f6a-8e6a-4c6f-9b06-d9eaf423d2c7 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 09:56:10 compute-1 nova_compute[230010]: 2025-11-24 09:56:10.645 230014 DEBUG oslo_concurrency.lockutils [req-49bc461f-a8bb-4dff-8cfe-08d8b377f561 req-20fdecd2-0244-4b6a-a5d5-099475744c71 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Releasing lock "refresh_cache-8e009e75-a97b-4c5d-a470-5db1137cb407" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 24 09:56:10 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:56:10 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:56:10 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:56:10.665 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:56:10 compute-1 nova_compute[230010]: 2025-11-24 09:56:10.692 230014 DEBUG oslo_concurrency.processutils [None req-75711f6a-8e6a-4c6f-9b06-d9eaf423d2c7 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 09:56:10 compute-1 ceph-mon[80009]: pgmap v878: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 9.1 KiB/s rd, 1.1 KiB/s wr, 2 op/s
Nov 24 09:56:10 compute-1 nova_compute[230010]: 2025-11-24 09:56:10.778 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:56:10 compute-1 nova_compute[230010]: 2025-11-24 09:56:10.854 230014 DEBUG nova.compute.manager [req-768daf01-d8de-4440-be00-da7eaacf0f1f req-17e275f3-6728-4936-89ae-538f714f240e 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 8e009e75-a97b-4c5d-a470-5db1137cb407] Received event network-vif-deleted-e962e27f-80bf-4103-98ae-d8af84c6fc28 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 24 09:56:10 compute-1 nova_compute[230010]: 2025-11-24 09:56:10.854 230014 INFO nova.compute.manager [req-768daf01-d8de-4440-be00-da7eaacf0f1f req-17e275f3-6728-4936-89ae-538f714f240e 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 8e009e75-a97b-4c5d-a470-5db1137cb407] Neutron deleted interface e962e27f-80bf-4103-98ae-d8af84c6fc28; detaching it from the instance and deleting it from the info cache
Nov 24 09:56:10 compute-1 nova_compute[230010]: 2025-11-24 09:56:10.855 230014 DEBUG nova.network.neutron [req-768daf01-d8de-4440-be00-da7eaacf0f1f req-17e275f3-6728-4936-89ae-538f714f240e 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 8e009e75-a97b-4c5d-a470-5db1137cb407] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 24 09:56:10 compute-1 nova_compute[230010]: 2025-11-24 09:56:10.885 230014 DEBUG nova.compute.manager [req-768daf01-d8de-4440-be00-da7eaacf0f1f req-17e275f3-6728-4936-89ae-538f714f240e 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 8e009e75-a97b-4c5d-a470-5db1137cb407] Detach interface failed, port_id=e962e27f-80bf-4103-98ae-d8af84c6fc28, reason: Instance 8e009e75-a97b-4c5d-a470-5db1137cb407 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Nov 24 09:56:11 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 24 09:56:11 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/785571714' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 09:56:11 compute-1 nova_compute[230010]: 2025-11-24 09:56:11.150 230014 DEBUG oslo_concurrency.processutils [None req-75711f6a-8e6a-4c6f-9b06-d9eaf423d2c7 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
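Nova shells out to exactly the command above to size its RBD-backed storage. Running it yourself and reading the JSON (top-level "stats" keys per `ceph df --format=json`; requires a reachable cluster and the "openstack" keyring):

    import json
    import subprocess

    out = subprocess.run(
        ["ceph", "df", "--format=json", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"],
        check=True, capture_output=True, text=True,
    ).stdout
    df = json.loads(out)
    total = df["stats"]["total_bytes"]
    avail = df["stats"]["total_avail_bytes"]
    print(f"{avail / total:.1%} of {total / 2**30:.0f} GiB free")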
Nov 24 09:56:11 compute-1 nova_compute[230010]: 2025-11-24 09:56:11.157 230014 DEBUG nova.compute.provider_tree [None req-75711f6a-8e6a-4c6f-9b06-d9eaf423d2c7 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Inventory has not changed in ProviderTree for provider: 1b7b0f22-dba8-42a8-9de3-763c9152946e update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 24 09:56:11 compute-1 nova_compute[230010]: 2025-11-24 09:56:11.169 230014 DEBUG nova.scheduler.client.report [None req-75711f6a-8e6a-4c6f-9b06-d9eaf423d2c7 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Inventory has not changed for provider 1b7b0f22-dba8-42a8-9de3-763c9152946e based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
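Placement turns the inventory above into schedulable capacity as (total - reserved) * allocation_ratio, which is how this 8-vCPU host can carry 32 vCPUs of allocations:

    inventory = {
        "MEMORY_MB": {"total": 7680, "reserved": 512, "allocation_ratio": 1.0},
        "VCPU": {"total": 8, "reserved": 0, "allocation_ratio": 4.0},
        "DISK_GB": {"total": 59, "reserved": 1, "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        capacity = int((inv["total"] - inv["reserved"]) * inv["allocation_ratio"])
        print(rc, capacity)  # MEMORY_MB 7168, VCPU 32, DISK_GB 52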
Nov 24 09:56:11 compute-1 nova_compute[230010]: 2025-11-24 09:56:11.190 230014 DEBUG oslo_concurrency.lockutils [None req-75711f6a-8e6a-4c6f-9b06-d9eaf423d2c7 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.545s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 09:56:11 compute-1 nova_compute[230010]: 2025-11-24 09:56:11.212 230014 INFO nova.scheduler.client.report [None req-75711f6a-8e6a-4c6f-9b06-d9eaf423d2c7 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Deleted allocations for instance 8e009e75-a97b-4c5d-a470-5db1137cb407
Nov 24 09:56:11 compute-1 nova_compute[230010]: 2025-11-24 09:56:11.266 230014 DEBUG oslo_concurrency.lockutils [None req-75711f6a-8e6a-4c6f-9b06-d9eaf423d2c7 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lock "8e009e75-a97b-4c5d-a470-5db1137cb407" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.420s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 09:56:11 compute-1 nova_compute[230010]: 2025-11-24 09:56:11.435 230014 DEBUG nova.compute.manager [req-3291dcb6-9cf0-4a72-bad6-99917c79e8a8 req-47a9f8ae-1d4a-4a63-aee9-1f82d9707ea8 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 8e009e75-a97b-4c5d-a470-5db1137cb407] Received event network-vif-plugged-e962e27f-80bf-4103-98ae-d8af84c6fc28 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 24 09:56:11 compute-1 nova_compute[230010]: 2025-11-24 09:56:11.435 230014 DEBUG oslo_concurrency.lockutils [req-3291dcb6-9cf0-4a72-bad6-99917c79e8a8 req-47a9f8ae-1d4a-4a63-aee9-1f82d9707ea8 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Acquiring lock "8e009e75-a97b-4c5d-a470-5db1137cb407-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 09:56:11 compute-1 nova_compute[230010]: 2025-11-24 09:56:11.436 230014 DEBUG oslo_concurrency.lockutils [req-3291dcb6-9cf0-4a72-bad6-99917c79e8a8 req-47a9f8ae-1d4a-4a63-aee9-1f82d9707ea8 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Lock "8e009e75-a97b-4c5d-a470-5db1137cb407-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 09:56:11 compute-1 nova_compute[230010]: 2025-11-24 09:56:11.436 230014 DEBUG oslo_concurrency.lockutils [req-3291dcb6-9cf0-4a72-bad6-99917c79e8a8 req-47a9f8ae-1d4a-4a63-aee9-1f82d9707ea8 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Lock "8e009e75-a97b-4c5d-a470-5db1137cb407-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 09:56:11 compute-1 nova_compute[230010]: 2025-11-24 09:56:11.436 230014 DEBUG nova.compute.manager [req-3291dcb6-9cf0-4a72-bad6-99917c79e8a8 req-47a9f8ae-1d4a-4a63-aee9-1f82d9707ea8 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 8e009e75-a97b-4c5d-a470-5db1137cb407] No waiting events found dispatching network-vif-plugged-e962e27f-80bf-4103-98ae-d8af84c6fc28 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 24 09:56:11 compute-1 nova_compute[230010]: 2025-11-24 09:56:11.436 230014 WARNING nova.compute.manager [req-3291dcb6-9cf0-4a72-bad6-99917c79e8a8 req-47a9f8ae-1d4a-4a63-aee9-1f82d9707ea8 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 8e009e75-a97b-4c5d-a470-5db1137cb407] Received unexpected event network-vif-plugged-e962e27f-80bf-4103-98ae-d8af84c6fc28 for instance with vm_state deleted and task_state None.
Nov 24 09:56:11 compute-1 ceph-mon[80009]: from='client.? 192.168.122.101:0/785571714' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 09:56:12 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:56:12 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:56:12 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:56:12.102 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:56:12 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:56:12 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:56:12 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:56:12.668 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:56:12 compute-1 ceph-mon[80009]: pgmap v879: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 9.1 KiB/s rd, 1.1 KiB/s wr, 2 op/s
Nov 24 09:56:13 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:56:13.237 142336 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=803b139a-7fca-4549-8597-645cf677225d, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '6'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
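This DbSetCommand is the delayed write announced at 09:56:09 ("Delaying updating chassis table for 4 seconds"): the agent acknowledges nb_cfg=6 from the SB_Global update by stamping it into Chassis_Private.external_ids. A sketch of the same write, where sb_idl is a hypothetical, already-connected ovsdbapp OVN_Southbound API handle (connection setup not shown):

    # sb_idl: assumed pre-built ovsdbapp southbound connection (hypothetical).
    sb_idl.db_set(
        "Chassis_Private",
        "803b139a-7fca-4549-8597-645cf677225d",
        ("external_ids", {"neutron:ovn-metadata-sb-cfg": "6"}),
    ).execute(check_error=True)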
Nov 24 09:56:13 compute-1 podman[236514]: 2025-11-24 09:56:13.312105742 +0000 UTC m=+0.050568786 container health_status 6794d9d2f16f865643977633ea2bfec0506af6c4f15262c278b71a4c0754ea0b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 24 09:56:13 compute-1 nova_compute[230010]: 2025-11-24 09:56:13.764 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 09:56:14 compute-1 nova_compute[230010]: 2025-11-24 09:56:14.097 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:56:14 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:56:14 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:56:14 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:56:14.105 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:56:14 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:56:14 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:56:14 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:56:14 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:56:14.672 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:56:14 compute-1 nova_compute[230010]: 2025-11-24 09:56:14.765 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 09:56:14 compute-1 nova_compute[230010]: 2025-11-24 09:56:14.766 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 09:56:14 compute-1 nova_compute[230010]: 2025-11-24 09:56:14.766 230014 DEBUG nova.compute.manager [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 24 09:56:14 compute-1 nova_compute[230010]: 2025-11-24 09:56:14.791 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:56:14 compute-1 ceph-mon[80009]: pgmap v880: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 29 KiB/s rd, 5.4 KiB/s wr, 31 op/s
Nov 24 09:56:14 compute-1 nova_compute[230010]: 2025-11-24 09:56:14.873 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:56:15 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 09:56:15 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:56:15 compute-1 nova_compute[230010]: 2025-11-24 09:56:15.764 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 09:56:15 compute-1 nova_compute[230010]: 2025-11-24 09:56:15.765 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 09:56:15 compute-1 nova_compute[230010]: 2025-11-24 09:56:15.779 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:56:15 compute-1 nova_compute[230010]: 2025-11-24 09:56:15.784 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 09:56:15 compute-1 nova_compute[230010]: 2025-11-24 09:56:15.784 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 09:56:15 compute-1 nova_compute[230010]: 2025-11-24 09:56:15.784 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 09:56:15 compute-1 nova_compute[230010]: 2025-11-24 09:56:15.784 230014 DEBUG nova.compute.resource_tracker [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 24 09:56:15 compute-1 nova_compute[230010]: 2025-11-24 09:56:15.785 230014 DEBUG oslo_concurrency.processutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 09:56:15 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:56:16 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:56:16 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:56:16 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:56:16.108 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:56:16 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 24 09:56:16 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3645227929' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 09:56:16 compute-1 nova_compute[230010]: 2025-11-24 09:56:16.223 230014 DEBUG oslo_concurrency.processutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.439s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 09:56:16 compute-1 nova_compute[230010]: 2025-11-24 09:56:16.379 230014 WARNING nova.virt.libvirt.driver [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 24 09:56:16 compute-1 nova_compute[230010]: 2025-11-24 09:56:16.380 230014 DEBUG nova.compute.resource_tracker [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4997MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 24 09:56:16 compute-1 nova_compute[230010]: 2025-11-24 09:56:16.380 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 09:56:16 compute-1 nova_compute[230010]: 2025-11-24 09:56:16.380 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 09:56:16 compute-1 nova_compute[230010]: 2025-11-24 09:56:16.434 230014 DEBUG nova.compute.resource_tracker [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 24 09:56:16 compute-1 nova_compute[230010]: 2025-11-24 09:56:16.435 230014 DEBUG nova.compute.resource_tracker [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 24 09:56:16 compute-1 nova_compute[230010]: 2025-11-24 09:56:16.450 230014 DEBUG oslo_concurrency.processutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 09:56:16 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:56:16 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 09:56:16 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:56:16.674 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 09:56:16 compute-1 ceph-mon[80009]: pgmap v881: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 27 KiB/s rd, 5.2 KiB/s wr, 29 op/s
Nov 24 09:56:16 compute-1 ceph-mon[80009]: from='client.? 192.168.122.101:0/3645227929' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 09:56:16 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 24 09:56:16 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3862106592' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 09:56:16 compute-1 nova_compute[230010]: 2025-11-24 09:56:16.903 230014 DEBUG oslo_concurrency.processutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.453s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 09:56:16 compute-1 nova_compute[230010]: 2025-11-24 09:56:16.908 230014 DEBUG nova.compute.provider_tree [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Inventory has not changed in ProviderTree for provider: 1b7b0f22-dba8-42a8-9de3-763c9152946e update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 24 09:56:16 compute-1 nova_compute[230010]: 2025-11-24 09:56:16.923 230014 DEBUG nova.scheduler.client.report [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Inventory has not changed for provider 1b7b0f22-dba8-42a8-9de3-763c9152946e based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 24 09:56:17 compute-1 nova_compute[230010]: 2025-11-24 09:56:17.119 230014 DEBUG nova.compute.resource_tracker [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 24 09:56:17 compute-1 nova_compute[230010]: 2025-11-24 09:56:17.119 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.739s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 09:56:17 compute-1 ceph-mon[80009]: from='client.? 192.168.122.101:0/3862106592' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 09:56:18 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:56:18 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:56:18 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:56:18.111 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:56:18 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:56:18 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:56:18 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:56:18.677 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:56:18 compute-1 ceph-mon[80009]: pgmap v882: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 5.2 KiB/s wr, 30 op/s
Nov 24 09:56:18 compute-1 ceph-mon[80009]: from='client.? 192.168.122.100:0/1398718278' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 09:56:19 compute-1 nova_compute[230010]: 2025-11-24 09:56:19.120 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 09:56:19 compute-1 nova_compute[230010]: 2025-11-24 09:56:19.120 230014 DEBUG nova.compute.manager [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 24 09:56:19 compute-1 nova_compute[230010]: 2025-11-24 09:56:19.121 230014 DEBUG nova.compute.manager [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 24 09:56:19 compute-1 nova_compute[230010]: 2025-11-24 09:56:19.144 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:56:19 compute-1 nova_compute[230010]: 2025-11-24 09:56:19.146 230014 DEBUG nova.compute.manager [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 24 09:56:19 compute-1 nova_compute[230010]: 2025-11-24 09:56:19.146 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 09:56:19 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:56:19 compute-1 nova_compute[230010]: 2025-11-24 09:56:19.764 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 09:56:19 compute-1 ceph-mon[80009]: from='client.? 192.168.122.102:0/872817273' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 09:56:19 compute-1 ceph-mon[80009]: from='client.? 192.168.122.100:0/2192735560' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 09:56:19 compute-1 ceph-mon[80009]: from='client.? 192.168.122.102:0/4070689135' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 09:56:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:56:20.057 142336 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 09:56:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:56:20.057 142336 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 09:56:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:56:20.057 142336 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 09:56:20 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:56:20 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:56:20 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:56:20.114 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:56:20 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:56:20 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:56:20 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:56:20.681 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
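Each radosgw triple above is one load-balancer health probe: request start, request done, then a beast access-log line recording an anonymous "HEAD /" that returned 200. A small regex sketch for pulling client, status, and latency out of the beast line; the pattern is fitted to the fields visible in this log, not an official radosgw format specification.

# Sketch: parse the beast access-log lines above. The regex is fitted
# to the fields visible here, not an official format spec.
import re

BEAST = re.compile(
    r'beast: \S+: (?P<client>\S+) - (?P<user>\S+) '
    r'\[(?P<ts>[^\]]+)\] "(?P<req>[^"]+)" (?P<status>\d+) '
    r'(?P<bytes>\d+) .* latency=(?P<latency>[\d.]+)s'
)

line = ('beast: 0x7fa9789055d0: 192.168.122.100 - anonymous '
        '[24/Nov/2025:09:56:20.114 +0000] "HEAD / HTTP/1.0" 200 0 '
        '- - - latency=0.000000000s')
m = BEAST.search(line)
print(m.group('client'), m.group('status'), m.group('latency'))
# -> 192.168.122.100 200 0.000000000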
Nov 24 09:56:20 compute-1 nova_compute[230010]: 2025-11-24 09:56:20.760 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 09:56:20 compute-1 nova_compute[230010]: 2025-11-24 09:56:20.781 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:56:20 compute-1 ceph-mon[80009]: pgmap v883: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 5.2 KiB/s wr, 28 op/s
Nov 24 09:56:22 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:56:22 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:56:22 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:56:22.116 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:56:22 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:56:22 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:56:22 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:56:22.683 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:56:22 compute-1 ceph-mon[80009]: pgmap v884: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 5.2 KiB/s wr, 28 op/s
Nov 24 09:56:24 compute-1 nova_compute[230010]: 2025-11-24 09:56:24.076 230014 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763978169.0753462, 8e009e75-a97b-4c5d-a470-5db1137cb407 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 24 09:56:24 compute-1 nova_compute[230010]: 2025-11-24 09:56:24.076 230014 INFO nova.compute.manager [-] [instance: 8e009e75-a97b-4c5d-a470-5db1137cb407] VM Stopped (Lifecycle Event)
Nov 24 09:56:24 compute-1 nova_compute[230010]: 2025-11-24 09:56:24.093 230014 DEBUG nova.compute.manager [None req-c0548ac8-1ffd-4479-9a39-20974888a7d0 - - - - - -] [instance: 8e009e75-a97b-4c5d-a470-5db1137cb407] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 24 09:56:24 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:56:24 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:56:24 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:56:24.120 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:56:24 compute-1 nova_compute[230010]: 2025-11-24 09:56:24.170 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:56:24 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:56:24 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:56:24 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:56:24 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:56:24.685 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:56:24 compute-1 ceph-mon[80009]: pgmap v885: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 5.2 KiB/s wr, 29 op/s
Nov 24 09:56:25 compute-1 nova_compute[230010]: 2025-11-24 09:56:25.783 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:56:26 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:56:26 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:56:26 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:56:26.122 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:56:26 compute-1 sudo[236587]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 09:56:26 compute-1 sudo[236587]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:56:26 compute-1 sudo[236587]: pam_unix(sudo:session): session closed for user root
Nov 24 09:56:26 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:56:26 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 09:56:26 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:56:26.686 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 09:56:26 compute-1 ceph-mon[80009]: pgmap v886: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 09:56:26 compute-1 ceph-mon[80009]: from='client.? 192.168.122.100:0/609421726' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 09:56:28 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:56:28 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:56:28 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:56:28.125 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:56:28 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:56:28 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:56:28 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:56:28.690 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:56:28 compute-1 ceph-mon[80009]: pgmap v887: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 09:56:29 compute-1 nova_compute[230010]: 2025-11-24 09:56:29.217 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:56:29 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:56:30 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:56:30 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:56:30 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:56:30.128 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:56:30 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 09:56:30 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:56:30 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:56:30 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:56:30 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:56:30.693 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:56:30 compute-1 nova_compute[230010]: 2025-11-24 09:56:30.785 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:56:30 compute-1 ceph-mon[80009]: pgmap v888: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 09:56:30 compute-1 ceph-mon[80009]: from='client.? 192.168.122.100:0/1983036459' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 24 09:56:30 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:56:30 compute-1 ceph-mon[80009]: from='client.? 192.168.122.100:0/4078391709' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
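The mgr.compute-0.mauvni entries above are the manager polling the OSD blocklist on a roughly 15-second cadence (09:56:30 here, again at 09:56:45 and 09:57:00 below), while the "mon dump" calls are client monmap refreshes. The same blocklist query from the CLI, as a hedged sketch assuming admin credentials on the host and that this Ceph release returns a JSON array for the command.

# Hedged sketch: the blocklist query the mgr dispatches above, issued
# via the CLI. Assumes a local admin keyring and a JSON-array reply.
import json
import subprocess

out = subprocess.check_output(
    ["ceph", "osd", "blocklist", "ls", "--format=json"]
)
entries = json.loads(out)
print(f"{len(entries)} blocklist entries")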
Nov 24 09:56:32 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:56:32 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:56:32 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:56:32.131 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:56:32 compute-1 podman[236616]: 2025-11-24 09:56:32.3132287 +0000 UTC m=+0.053958340 container health_status 16798e42088c21440960ac0ba4f86339f9edab5c078277a1dcf4fb01c220329c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 24 09:56:32 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:56:32 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:56:32 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:56:32.697 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:56:32 compute-1 ceph-mon[80009]: pgmap v889: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 09:56:33 compute-1 ceph-osd[77497]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 24 09:56:33 compute-1 ceph-osd[77497]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1800.1 total, 600.0 interval
                                           Cumulative writes: 8425 writes, 31K keys, 8425 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                           Cumulative WAL: 8425 writes, 2022 syncs, 4.17 writes per sync, written: 0.02 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1516 writes, 4428 keys, 1516 commit groups, 1.0 writes per commit group, ingest: 4.17 MB, 0.01 MB/s
                                           Interval WAL: 1516 writes, 667 syncs, 2.27 writes per sync, written: 0.00 GB, 0.01 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
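The WAL ratios in the OSD's rocksdb dump above are internally consistent: 8425 cumulative writes over 2022 syncs is the reported 4.17 writes per sync, and the 600 s interval's 1516 writes over 667 syncs gives 2.27. A two-line check:

# Verify the rocksdb WAL ratios reported in the dump above.
print(round(8425 / 2022, 2))   # 4.17 cumulative writes per sync
print(round(1516 / 667, 2))    # 2.27 interval writes per sync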
Nov 24 09:56:34 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:56:34 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:56:34 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:56:34.134 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:56:34 compute-1 nova_compute[230010]: 2025-11-24 09:56:34.219 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:56:34 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:56:34 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:56:34 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:56:34 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:56:34.701 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:56:34 compute-1 ceph-mon[80009]: pgmap v890: 353 pgs: 353 active+clean; 88 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 1.8 MiB/s wr, 32 op/s
Nov 24 09:56:35 compute-1 nova_compute[230010]: 2025-11-24 09:56:35.786 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:56:36 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:56:36 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:56:36 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:56:36.137 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:56:36 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:56:36 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:56:36 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:56:36.703 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:56:37 compute-1 ceph-mon[80009]: pgmap v891: 353 pgs: 353 active+clean; 88 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 1.8 MiB/s wr, 32 op/s
Nov 24 09:56:37 compute-1 podman[236640]: 2025-11-24 09:56:37.340364328 +0000 UTC m=+0.078992191 container health_status c4a7fece2a8abbd773f184380ebf0443df5298e1c3e7a580495db5c46749acd2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 24 09:56:38 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:56:38 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:56:38 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:56:38.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:56:38 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:56:38 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:56:38 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:56:38.706 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:56:39 compute-1 ceph-mon[80009]: pgmap v892: 353 pgs: 353 active+clean; 88 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 102 op/s
Nov 24 09:56:39 compute-1 nova_compute[230010]: 2025-11-24 09:56:39.221 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:56:39 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:56:40 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:56:40 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:56:40 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:56:40.144 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:56:40 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:56:40 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:56:40 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:56:40.709 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:56:40 compute-1 nova_compute[230010]: 2025-11-24 09:56:40.788 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:56:41 compute-1 ceph-mon[80009]: pgmap v893: 353 pgs: 353 active+clean; 88 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Nov 24 09:56:42 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:56:42 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:56:42 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:56:42.147 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:56:42 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:56:42 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:56:42 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:56:42.711 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:56:43 compute-1 ceph-mon[80009]: pgmap v894: 353 pgs: 353 active+clean; 88 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Nov 24 09:56:44 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:56:44 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:56:44 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:56:44.150 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:56:44 compute-1 nova_compute[230010]: 2025-11-24 09:56:44.261 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:56:44 compute-1 podman[236670]: 2025-11-24 09:56:44.351709565 +0000 UTC m=+0.055892047 container health_status 6794d9d2f16f865643977633ea2bfec0506af6c4f15262c278b71a4c0754ea0b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
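These podman health_status events record periodic healthcheck runs for the edpm-managed containers; the 'healthcheck' block in config_data above mounts /var/lib/openstack/healthchecks/<name> and runs /openstack/healthcheck inside the container. The current state can be read back with podman inspect; a hedged sketch, assuming the Go template path below matches your podman version.

# Hedged sketch: read a container's health state back; assumes the
# .State.Health.Status template path of current podman releases.
import subprocess

status = subprocess.check_output(
    ["podman", "inspect", "--format",
     "{{.State.Health.Status}}", "ovn_metadata_agent"],
    text=True,
).strip()
print(status)  # e.g. "healthy", matching health_status above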
Nov 24 09:56:44 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:56:44 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:56:44 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:56:44 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:56:44.714 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:56:45 compute-1 ceph-mon[80009]: pgmap v895: 353 pgs: 353 active+clean; 88 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 102 op/s
Nov 24 09:56:45 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 09:56:45 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:56:45 compute-1 nova_compute[230010]: 2025-11-24 09:56:45.790 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:56:46 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:56:46 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:56:46 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:56:46.154 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:56:46 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:56:46 compute-1 sudo[236690]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 09:56:46 compute-1 sudo[236690]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:56:46 compute-1 sudo[236690]: pam_unix(sudo:session): session closed for user root
Nov 24 09:56:46 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:56:46 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:56:46 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:56:46.718 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:56:47 compute-1 ceph-mon[80009]: pgmap v896: 353 pgs: 353 active+clean; 88 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 70 op/s
Nov 24 09:56:48 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:56:48 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 09:56:48 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:56:48.156 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 09:56:48 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:56:48 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 09:56:48 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:56:48.721 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 09:56:49 compute-1 nova_compute[230010]: 2025-11-24 09:56:49.264 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:56:49 compute-1 ceph-mon[80009]: pgmap v897: 353 pgs: 353 active+clean; 113 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 123 op/s
Nov 24 09:56:49 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:56:49 compute-1 ovn_controller[132966]: 2025-11-24T09:56:49Z|00055|memory_trim|INFO|Detected inactivity (last active 30013 ms ago): trimming memory
Nov 24 09:56:50 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:56:50 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:56:50 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:56:50.159 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:56:50 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:56:50 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:56:50 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:56:50.724 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:56:50 compute-1 nova_compute[230010]: 2025-11-24 09:56:50.791 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:56:51 compute-1 ceph-mon[80009]: pgmap v898: 353 pgs: 353 active+clean; 113 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 353 KiB/s rd, 2.0 MiB/s wr, 53 op/s
Nov 24 09:56:52 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:56:52 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:56:52 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:56:52.162 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:56:52 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:56:52 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:56:52 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:56:52.727 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:56:53 compute-1 ceph-mon[80009]: pgmap v899: 353 pgs: 353 active+clean; 113 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 353 KiB/s rd, 2.0 MiB/s wr, 53 op/s
Nov 24 09:56:54 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:56:54 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:56:54 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:56:54.165 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:56:54 compute-1 nova_compute[230010]: 2025-11-24 09:56:54.266 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:56:54 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:56:54 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:56:54 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:56:54 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:56:54.729 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:56:55 compute-1 ceph-mon[80009]: pgmap v900: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 415 KiB/s rd, 2.1 MiB/s wr, 68 op/s
Nov 24 09:56:55 compute-1 nova_compute[230010]: 2025-11-24 09:56:55.794 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:56:56 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:56:56 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:56:56 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:56:56.168 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:56:56 compute-1 ceph-mon[80009]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #52. Immutable memtables: 0.
Nov 24 09:56:56 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:56:56.478530) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 24 09:56:56 compute-1 ceph-mon[80009]: rocksdb: [db/flush_job.cc:856] [default] [JOB 29] Flushing memtable with next log file: 52
Nov 24 09:56:56 compute-1 ceph-mon[80009]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763978216478606, "job": 29, "event": "flush_started", "num_memtables": 1, "num_entries": 2358, "num_deletes": 251, "total_data_size": 6142089, "memory_usage": 6244928, "flush_reason": "Manual Compaction"}
Nov 24 09:56:56 compute-1 ceph-mon[80009]: rocksdb: [db/flush_job.cc:885] [default] [JOB 29] Level-0 flush table #53: started
Nov 24 09:56:56 compute-1 ceph-mon[80009]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763978216498679, "cf_name": "default", "job": 29, "event": "table_file_creation", "file_number": 53, "file_size": 3984471, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 26069, "largest_seqno": 28422, "table_properties": {"data_size": 3975086, "index_size": 5879, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2437, "raw_key_size": 19668, "raw_average_key_size": 20, "raw_value_size": 3956079, "raw_average_value_size": 4078, "num_data_blocks": 259, "num_entries": 970, "num_filter_entries": 970, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763978007, "oldest_key_time": 1763978007, "file_creation_time": 1763978216, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "299c38d0-06ca-4074-b462-97cee3c14bc3", "db_session_id": "IKBI0BILOO7CZC90TSBP", "orig_file_number": 53, "seqno_to_time_mapping": "N/A"}}
Nov 24 09:56:56 compute-1 ceph-mon[80009]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 29] Flush lasted 20184 microseconds, and 7589 cpu microseconds.
Nov 24 09:56:56 compute-1 ceph-mon[80009]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 24 09:56:56 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:56:56.498722) [db/flush_job.cc:967] [default] [JOB 29] Level-0 flush table #53: 3984471 bytes OK
Nov 24 09:56:56 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:56:56.498747) [db/memtable_list.cc:519] [default] Level-0 commit table #53 started
Nov 24 09:56:56 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:56:56.500462) [db/memtable_list.cc:722] [default] Level-0 commit table #53: memtable #1 done
Nov 24 09:56:56 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:56:56.500478) EVENT_LOG_v1 {"time_micros": 1763978216500474, "job": 29, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 24 09:56:56 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:56:56.500496) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 24 09:56:56 compute-1 ceph-mon[80009]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 29] Try to delete WAL files size 6131644, prev total WAL file size 6131644, number of live WAL files 2.
Nov 24 09:56:56 compute-1 ceph-mon[80009]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000049.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 09:56:56 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:56:56.501878) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032303038' seq:72057594037927935, type:22 .. '7061786F730032323630' seq:0, type:0; will stop at (end)
Nov 24 09:56:56 compute-1 ceph-mon[80009]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 30] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 24 09:56:56 compute-1 ceph-mon[80009]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 29 Base level 0, inputs: [53(3891KB)], [51(11MB)]
Nov 24 09:56:56 compute-1 ceph-mon[80009]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763978216501945, "job": 30, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [53], "files_L6": [51], "score": -1, "input_data_size": 16531749, "oldest_snapshot_seqno": -1}
Nov 24 09:56:56 compute-1 ceph-mon[80009]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 30] Generated table #54: 5851 keys, 14385643 bytes, temperature: kUnknown
Nov 24 09:56:56 compute-1 ceph-mon[80009]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763978216576029, "cf_name": "default", "job": 30, "event": "table_file_creation", "file_number": 54, "file_size": 14385643, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 14345801, "index_size": 24116, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 14661, "raw_key_size": 148831, "raw_average_key_size": 25, "raw_value_size": 14239365, "raw_average_value_size": 2433, "num_data_blocks": 982, "num_entries": 5851, "num_filter_entries": 5851, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763976422, "oldest_key_time": 0, "file_creation_time": 1763978216, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "299c38d0-06ca-4074-b462-97cee3c14bc3", "db_session_id": "IKBI0BILOO7CZC90TSBP", "orig_file_number": 54, "seqno_to_time_mapping": "N/A"}}
Nov 24 09:56:56 compute-1 ceph-mon[80009]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 24 09:56:56 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:56:56.576246) [db/compaction/compaction_job.cc:1663] [default] [JOB 30] Compacted 1@0 + 1@6 files to L6 => 14385643 bytes
Nov 24 09:56:56 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:56:56.577717) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 222.9 rd, 194.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.8, 12.0 +0.0 blob) out(13.7 +0.0 blob), read-write-amplify(7.8) write-amplify(3.6) OK, records in: 6371, records dropped: 520 output_compression: NoCompression
Nov 24 09:56:56 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:56:56.577734) EVENT_LOG_v1 {"time_micros": 1763978216577725, "job": 30, "event": "compaction_finished", "compaction_time_micros": 74159, "compaction_time_cpu_micros": 26558, "output_level": 6, "num_output_files": 1, "total_output_size": 14385643, "num_input_records": 6371, "num_output_records": 5851, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 24 09:56:56 compute-1 ceph-mon[80009]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000053.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 09:56:56 compute-1 ceph-mon[80009]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763978216578633, "job": 30, "event": "table_file_deletion", "file_number": 53}
Nov 24 09:56:56 compute-1 ceph-mon[80009]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000051.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 09:56:56 compute-1 ceph-mon[80009]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763978216580822, "job": 30, "event": "table_file_deletion", "file_number": 51}
Nov 24 09:56:56 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:56:56.501761) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 09:56:56 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:56:56.580883) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 09:56:56 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:56:56.580887) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 09:56:56 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:56:56.580888) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 09:56:56 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:56:56.580890) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 09:56:56 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:56:56.580891) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
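The JOB 30 summary above checks out: the manual compaction read 3.8 MB from L0 plus 12.0 MB from L6 and wrote 13.7 MB back to L6, so write-amplify is 13.7/3.8 ≈ 3.6 and read-write-amplify is (3.8 + 12.0 + 13.7)/3.8 ≈ 7.8, exactly as logged. A quick check:

# Check the amplification figures logged by JOB 30 above.
l0_in, l6_in, out = 3.8, 12.0, 13.7   # MB, from the compaction summary
print(round(out / l0_in, 1))                    # 3.6 write-amplify
print(round((l0_in + l6_in + out) / l0_in, 1))  # 7.8 read-write-amplify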
Nov 24 09:56:56 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:56:56 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:56:56 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:56:56.733 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:56:57 compute-1 ceph-mon[80009]: pgmap v901: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 415 KiB/s rd, 2.1 MiB/s wr, 68 op/s
Nov 24 09:56:58 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:56:58 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:56:58 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:56:58.172 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:56:58 compute-1 ceph-mon[80009]: pgmap v902: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 421 KiB/s rd, 2.1 MiB/s wr, 69 op/s
Nov 24 09:56:58 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:56:58 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:56:58 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:56:58.735 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:56:59 compute-1 nova_compute[230010]: 2025-11-24 09:56:59.268 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:56:59 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:57:00 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:57:00 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:57:00 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:57:00.174 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:57:00 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 09:57:00 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:57:00 compute-1 ceph-mon[80009]: pgmap v903: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 69 KiB/s rd, 104 KiB/s wr, 16 op/s
Nov 24 09:57:00 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:57:00 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:57:00 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:57:00 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:57:00.737 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:57:00 compute-1 nova_compute[230010]: 2025-11-24 09:57:00.831 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:57:01 compute-1 ceph-mon[80009]: from='client.? 192.168.122.10:0/1740854011' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 24 09:57:01 compute-1 ceph-mon[80009]: from='client.? 192.168.122.10:0/1740854011' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 24 09:57:02 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:57:02 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:57:02 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:57:02.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:57:02 compute-1 nova_compute[230010]: 2025-11-24 09:57:02.503 230014 DEBUG oslo_concurrency.lockutils [None req-b6e43b8b-33fa-493d-9a63-307e139453a0 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Acquiring lock "9558b085-fcfb-4cae-87bc-2840f81734fc" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 09:57:02 compute-1 nova_compute[230010]: 2025-11-24 09:57:02.503 230014 DEBUG oslo_concurrency.lockutils [None req-b6e43b8b-33fa-493d-9a63-307e139453a0 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lock "9558b085-fcfb-4cae-87bc-2840f81734fc" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 09:57:02 compute-1 nova_compute[230010]: 2025-11-24 09:57:02.520 230014 DEBUG nova.compute.manager [None req-b6e43b8b-33fa-493d-9a63-307e139453a0 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 9558b085-fcfb-4cae-87bc-2840f81734fc] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 24 09:57:02 compute-1 nova_compute[230010]: 2025-11-24 09:57:02.583 230014 DEBUG oslo_concurrency.lockutils [None req-b6e43b8b-33fa-493d-9a63-307e139453a0 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 09:57:02 compute-1 nova_compute[230010]: 2025-11-24 09:57:02.584 230014 DEBUG oslo_concurrency.lockutils [None req-b6e43b8b-33fa-493d-9a63-307e139453a0 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 09:57:02 compute-1 nova_compute[230010]: 2025-11-24 09:57:02.589 230014 DEBUG nova.virt.hardware [None req-b6e43b8b-33fa-493d-9a63-307e139453a0 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 24 09:57:02 compute-1 nova_compute[230010]: 2025-11-24 09:57:02.589 230014 INFO nova.compute.claims [None req-b6e43b8b-33fa-493d-9a63-307e139453a0 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 9558b085-fcfb-4cae-87bc-2840f81734fc] Claim successful on node compute-1.ctlplane.example.com
Nov 24 09:57:02 compute-1 ceph-mon[80009]: pgmap v904: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 69 KiB/s rd, 104 KiB/s wr, 16 op/s
Nov 24 09:57:02 compute-1 nova_compute[230010]: 2025-11-24 09:57:02.682 230014 DEBUG oslo_concurrency.processutils [None req-b6e43b8b-33fa-493d-9a63-307e139453a0 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
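Nova shells out for Ceph capacity through oslo.concurrency's processutils, as logged above; the matching return line at 09:57:03 below shows the command exiting 0 after 0.453 s. A minimal sketch of the same call, assuming oslo.concurrency is installed and /etc/ceph/ceph.conf plus the openstack keyring are readable.

# Minimal sketch of the subprocess call logged above, via
# oslo.concurrency; assumes a readable ceph.conf and openstack keyring.
from oslo_concurrency import processutils

stdout, stderr = processutils.execute(
    'ceph', 'df', '--format=json',
    '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')
print(stdout[:80])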
Nov 24 09:57:02 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:57:02 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:57:02 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:57:02.740 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:57:02 compute-1 ceph-mon[80009]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 24 09:57:02 compute-1 ceph-mon[80009]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1800.0 total, 600.0 interval
                                           Cumulative writes: 5342 writes, 28K keys, 5342 commit groups, 1.0 writes per commit group, ingest: 0.07 GB, 0.04 MB/s
                                           Cumulative WAL: 5342 writes, 5342 syncs, 1.00 writes per sync, written: 0.07 GB, 0.04 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1501 writes, 7297 keys, 1501 commit groups, 1.0 writes per commit group, ingest: 16.91 MB, 0.03 MB/s
                                           Interval WAL: 1501 writes, 1501 syncs, 1.00 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0    139.7      0.31              0.10        15    0.021       0      0       0.0       0.0
                                             L6      1/0   13.72 MB   0.0      0.2     0.0      0.2       0.2      0.0       0.0   4.1    157.2    134.6      1.33              0.43        14    0.095     74K   7374       0.0       0.0
                                            Sum      1/0   13.72 MB   0.0      0.2     0.0      0.2       0.2      0.1       0.0   5.1    127.2    135.6      1.64              0.54        29    0.057     74K   7374       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   6.9    170.5    173.6      0.44              0.20        10    0.044     30K   2555       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.2     0.0      0.2       0.2      0.0       0.0   0.0    157.2    134.6      1.33              0.43        14    0.095     74K   7374       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0    140.5      0.31              0.10        14    0.022       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.9      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1800.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.043, interval 0.011
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.22 GB write, 0.12 MB/s write, 0.20 GB read, 0.12 MB/s read, 1.6 seconds
                                           Interval compaction: 0.07 GB write, 0.13 MB/s write, 0.07 GB read, 0.12 MB/s read, 0.4 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a5fe7f5350#2 capacity: 304.00 MB usage: 17.30 MB table_size: 0 occupancy: 18446744073709551615 collections: 4 last_copies: 0 last_secs: 0.000112 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(933,16.70 MB,5.49298%) FilterBlock(29,220.48 KB,0.0708279%) IndexBlock(29,390.20 KB,0.125348%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Nov 24 09:57:03 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 24 09:57:03 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2841874782' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 09:57:03 compute-1 nova_compute[230010]: 2025-11-24 09:57:03.135 230014 DEBUG oslo_concurrency.processutils [None req-b6e43b8b-33fa-493d-9a63-307e139453a0 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.453s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 09:57:03 compute-1 nova_compute[230010]: 2025-11-24 09:57:03.142 230014 DEBUG nova.compute.provider_tree [None req-b6e43b8b-33fa-493d-9a63-307e139453a0 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Inventory has not changed in ProviderTree for provider: 1b7b0f22-dba8-42a8-9de3-763c9152946e update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 24 09:57:03 compute-1 nova_compute[230010]: 2025-11-24 09:57:03.154 230014 DEBUG nova.scheduler.client.report [None req-b6e43b8b-33fa-493d-9a63-307e139453a0 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Inventory has not changed for provider 1b7b0f22-dba8-42a8-9de3-763c9152946e based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 24 09:57:03 compute-1 nova_compute[230010]: 2025-11-24 09:57:03.177 230014 DEBUG oslo_concurrency.lockutils [None req-b6e43b8b-33fa-493d-9a63-307e139453a0 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.593s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 09:57:03 compute-1 nova_compute[230010]: 2025-11-24 09:57:03.177 230014 DEBUG nova.compute.manager [None req-b6e43b8b-33fa-493d-9a63-307e139453a0 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 9558b085-fcfb-4cae-87bc-2840f81734fc] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 24 09:57:03 compute-1 nova_compute[230010]: 2025-11-24 09:57:03.225 230014 DEBUG nova.compute.manager [None req-b6e43b8b-33fa-493d-9a63-307e139453a0 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 9558b085-fcfb-4cae-87bc-2840f81734fc] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 24 09:57:03 compute-1 nova_compute[230010]: 2025-11-24 09:57:03.225 230014 DEBUG nova.network.neutron [None req-b6e43b8b-33fa-493d-9a63-307e139453a0 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 9558b085-fcfb-4cae-87bc-2840f81734fc] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 24 09:57:03 compute-1 nova_compute[230010]: 2025-11-24 09:57:03.254 230014 INFO nova.virt.libvirt.driver [None req-b6e43b8b-33fa-493d-9a63-307e139453a0 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 9558b085-fcfb-4cae-87bc-2840f81734fc] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 24 09:57:03 compute-1 nova_compute[230010]: 2025-11-24 09:57:03.268 230014 DEBUG nova.compute.manager [None req-b6e43b8b-33fa-493d-9a63-307e139453a0 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 9558b085-fcfb-4cae-87bc-2840f81734fc] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 24 09:57:03 compute-1 podman[236745]: 2025-11-24 09:57:03.334601688 +0000 UTC m=+0.078723976 container health_status 16798e42088c21440960ac0ba4f86339f9edab5c078277a1dcf4fb01c220329c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 09:57:03 compute-1 nova_compute[230010]: 2025-11-24 09:57:03.369 230014 DEBUG nova.compute.manager [None req-b6e43b8b-33fa-493d-9a63-307e139453a0 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 9558b085-fcfb-4cae-87bc-2840f81734fc] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 24 09:57:03 compute-1 nova_compute[230010]: 2025-11-24 09:57:03.371 230014 DEBUG nova.virt.libvirt.driver [None req-b6e43b8b-33fa-493d-9a63-307e139453a0 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 9558b085-fcfb-4cae-87bc-2840f81734fc] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 24 09:57:03 compute-1 nova_compute[230010]: 2025-11-24 09:57:03.371 230014 INFO nova.virt.libvirt.driver [None req-b6e43b8b-33fa-493d-9a63-307e139453a0 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 9558b085-fcfb-4cae-87bc-2840f81734fc] Creating image(s)
Nov 24 09:57:03 compute-1 nova_compute[230010]: 2025-11-24 09:57:03.392 230014 DEBUG nova.storage.rbd_utils [None req-b6e43b8b-33fa-493d-9a63-307e139453a0 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] rbd image 9558b085-fcfb-4cae-87bc-2840f81734fc_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 24 09:57:03 compute-1 nova_compute[230010]: 2025-11-24 09:57:03.413 230014 DEBUG nova.storage.rbd_utils [None req-b6e43b8b-33fa-493d-9a63-307e139453a0 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] rbd image 9558b085-fcfb-4cae-87bc-2840f81734fc_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 24 09:57:03 compute-1 nova_compute[230010]: 2025-11-24 09:57:03.436 230014 DEBUG nova.storage.rbd_utils [None req-b6e43b8b-33fa-493d-9a63-307e139453a0 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] rbd image 9558b085-fcfb-4cae-87bc-2840f81734fc_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 24 09:57:03 compute-1 nova_compute[230010]: 2025-11-24 09:57:03.440 230014 DEBUG oslo_concurrency.processutils [None req-b6e43b8b-33fa-493d-9a63-307e139453a0 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/2ed5c667523487159c4c4503c82babbc95dbae40 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 09:57:03 compute-1 nova_compute[230010]: 2025-11-24 09:57:03.493 230014 DEBUG oslo_concurrency.processutils [None req-b6e43b8b-33fa-493d-9a63-307e139453a0 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/2ed5c667523487159c4c4503c82babbc95dbae40 --force-share --output=json" returned: 0 in 0.054s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 09:57:03 compute-1 nova_compute[230010]: 2025-11-24 09:57:03.494 230014 DEBUG oslo_concurrency.lockutils [None req-b6e43b8b-33fa-493d-9a63-307e139453a0 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Acquiring lock "2ed5c667523487159c4c4503c82babbc95dbae40" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 09:57:03 compute-1 nova_compute[230010]: 2025-11-24 09:57:03.494 230014 DEBUG oslo_concurrency.lockutils [None req-b6e43b8b-33fa-493d-9a63-307e139453a0 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lock "2ed5c667523487159c4c4503c82babbc95dbae40" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 09:57:03 compute-1 nova_compute[230010]: 2025-11-24 09:57:03.495 230014 DEBUG oslo_concurrency.lockutils [None req-b6e43b8b-33fa-493d-9a63-307e139453a0 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lock "2ed5c667523487159c4c4503c82babbc95dbae40" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 09:57:03 compute-1 nova_compute[230010]: 2025-11-24 09:57:03.513 230014 DEBUG nova.storage.rbd_utils [None req-b6e43b8b-33fa-493d-9a63-307e139453a0 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] rbd image 9558b085-fcfb-4cae-87bc-2840f81734fc_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 24 09:57:03 compute-1 nova_compute[230010]: 2025-11-24 09:57:03.516 230014 DEBUG oslo_concurrency.processutils [None req-b6e43b8b-33fa-493d-9a63-307e139453a0 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/2ed5c667523487159c4c4503c82babbc95dbae40 9558b085-fcfb-4cae-87bc-2840f81734fc_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 09:57:03 compute-1 ceph-mon[80009]: from='client.? 192.168.122.101:0/2841874782' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 09:57:03 compute-1 nova_compute[230010]: 2025-11-24 09:57:03.738 230014 DEBUG oslo_concurrency.processutils [None req-b6e43b8b-33fa-493d-9a63-307e139453a0 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/2ed5c667523487159c4c4503c82babbc95dbae40 9558b085-fcfb-4cae-87bc-2840f81734fc_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.222s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 09:57:03 compute-1 nova_compute[230010]: 2025-11-24 09:57:03.811 230014 DEBUG nova.storage.rbd_utils [None req-b6e43b8b-33fa-493d-9a63-307e139453a0 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] resizing rbd image 9558b085-fcfb-4cae-87bc-2840f81734fc_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 24 09:57:04 compute-1 nova_compute[230010]: 2025-11-24 09:57:04.059 230014 DEBUG nova.objects.instance [None req-b6e43b8b-33fa-493d-9a63-307e139453a0 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lazy-loading 'migration_context' on Instance uuid 9558b085-fcfb-4cae-87bc-2840f81734fc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 24 09:57:04 compute-1 nova_compute[230010]: 2025-11-24 09:57:04.074 230014 DEBUG nova.virt.libvirt.driver [None req-b6e43b8b-33fa-493d-9a63-307e139453a0 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 9558b085-fcfb-4cae-87bc-2840f81734fc] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 24 09:57:04 compute-1 nova_compute[230010]: 2025-11-24 09:57:04.074 230014 DEBUG nova.virt.libvirt.driver [None req-b6e43b8b-33fa-493d-9a63-307e139453a0 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 9558b085-fcfb-4cae-87bc-2840f81734fc] Ensure instance console log exists: /var/lib/nova/instances/9558b085-fcfb-4cae-87bc-2840f81734fc/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 24 09:57:04 compute-1 nova_compute[230010]: 2025-11-24 09:57:04.075 230014 DEBUG oslo_concurrency.lockutils [None req-b6e43b8b-33fa-493d-9a63-307e139453a0 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 09:57:04 compute-1 nova_compute[230010]: 2025-11-24 09:57:04.075 230014 DEBUG oslo_concurrency.lockutils [None req-b6e43b8b-33fa-493d-9a63-307e139453a0 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 09:57:04 compute-1 nova_compute[230010]: 2025-11-24 09:57:04.075 230014 DEBUG oslo_concurrency.lockutils [None req-b6e43b8b-33fa-493d-9a63-307e139453a0 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 09:57:04 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:57:04 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:57:04 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:57:04.180 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:57:04 compute-1 nova_compute[230010]: 2025-11-24 09:57:04.322 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:57:04 compute-1 nova_compute[230010]: 2025-11-24 09:57:04.334 230014 DEBUG nova.policy [None req-b6e43b8b-33fa-493d-9a63-307e139453a0 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '43f79ff3105e4372a3c095e8057d4f1f', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '94d069fc040647d5a6e54894eec915fe', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 24 09:57:04 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:57:04 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:57:04 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:57:04 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:57:04.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:57:04 compute-1 ceph-mon[80009]: pgmap v905: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 70 KiB/s rd, 107 KiB/s wr, 17 op/s
Nov 24 09:57:05 compute-1 nova_compute[230010]: 2025-11-24 09:57:05.834 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:57:06 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:57:06 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:57:06 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:57:06.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:57:06 compute-1 sudo[236933]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 09:57:06 compute-1 sudo[236933]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:57:06 compute-1 sudo[236933]: pam_unix(sudo:session): session closed for user root
Nov 24 09:57:06 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:57:06 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:57:06 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:57:06.746 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:57:06 compute-1 ceph-mon[80009]: pgmap v906: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 7.2 KiB/s rd, 16 KiB/s wr, 2 op/s
Nov 24 09:57:07 compute-1 nova_compute[230010]: 2025-11-24 09:57:07.359 230014 DEBUG nova.network.neutron [None req-b6e43b8b-33fa-493d-9a63-307e139453a0 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 9558b085-fcfb-4cae-87bc-2840f81734fc] Successfully created port: f43553d8-3872-4217-8259-57949e64eab2 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 24 09:57:08 compute-1 sudo[236959]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 09:57:08 compute-1 sudo[236959]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:57:08 compute-1 sudo[236959]: pam_unix(sudo:session): session closed for user root
Nov 24 09:57:08 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:57:08 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:57:08 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:57:08.185 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:57:08 compute-1 sudo[236990]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Nov 24 09:57:08 compute-1 sudo[236990]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:57:08 compute-1 podman[236983]: 2025-11-24 09:57:08.220441383 +0000 UTC m=+0.080766305 container health_status c4a7fece2a8abbd773f184380ebf0443df5298e1c3e7a580495db5c46749acd2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 24 09:57:08 compute-1 nova_compute[230010]: 2025-11-24 09:57:08.405 230014 DEBUG nova.network.neutron [None req-b6e43b8b-33fa-493d-9a63-307e139453a0 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 9558b085-fcfb-4cae-87bc-2840f81734fc] Successfully updated port: f43553d8-3872-4217-8259-57949e64eab2 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 24 09:57:08 compute-1 nova_compute[230010]: 2025-11-24 09:57:08.419 230014 DEBUG oslo_concurrency.lockutils [None req-b6e43b8b-33fa-493d-9a63-307e139453a0 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Acquiring lock "refresh_cache-9558b085-fcfb-4cae-87bc-2840f81734fc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 24 09:57:08 compute-1 nova_compute[230010]: 2025-11-24 09:57:08.419 230014 DEBUG oslo_concurrency.lockutils [None req-b6e43b8b-33fa-493d-9a63-307e139453a0 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Acquired lock "refresh_cache-9558b085-fcfb-4cae-87bc-2840f81734fc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 24 09:57:08 compute-1 nova_compute[230010]: 2025-11-24 09:57:08.419 230014 DEBUG nova.network.neutron [None req-b6e43b8b-33fa-493d-9a63-307e139453a0 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 9558b085-fcfb-4cae-87bc-2840f81734fc] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 24 09:57:08 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 24 09:57:08 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 24 09:57:08 compute-1 nova_compute[230010]: 2025-11-24 09:57:08.500 230014 DEBUG nova.compute.manager [req-5d756cf4-d91c-4882-a171-c87963f7da21 req-060e4b75-ef8e-433c-8600-55b18b6744c9 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 9558b085-fcfb-4cae-87bc-2840f81734fc] Received event network-changed-f43553d8-3872-4217-8259-57949e64eab2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 24 09:57:08 compute-1 nova_compute[230010]: 2025-11-24 09:57:08.500 230014 DEBUG nova.compute.manager [req-5d756cf4-d91c-4882-a171-c87963f7da21 req-060e4b75-ef8e-433c-8600-55b18b6744c9 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 9558b085-fcfb-4cae-87bc-2840f81734fc] Refreshing instance network info cache due to event network-changed-f43553d8-3872-4217-8259-57949e64eab2. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 24 09:57:08 compute-1 nova_compute[230010]: 2025-11-24 09:57:08.500 230014 DEBUG oslo_concurrency.lockutils [req-5d756cf4-d91c-4882-a171-c87963f7da21 req-060e4b75-ef8e-433c-8600-55b18b6744c9 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Acquiring lock "refresh_cache-9558b085-fcfb-4cae-87bc-2840f81734fc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 24 09:57:08 compute-1 nova_compute[230010]: 2025-11-24 09:57:08.573 230014 DEBUG nova.network.neutron [None req-b6e43b8b-33fa-493d-9a63-307e139453a0 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 9558b085-fcfb-4cae-87bc-2840f81734fc] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 24 09:57:08 compute-1 sudo[236990]: pam_unix(sudo:session): session closed for user root
Nov 24 09:57:08 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:57:08 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:57:08 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:57:08.748 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:57:08 compute-1 sudo[237065]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 09:57:08 compute-1 sudo[237065]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:57:08 compute-1 sudo[237065]: pam_unix(sudo:session): session closed for user root
Nov 24 09:57:08 compute-1 sudo[237090]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 84a084c3-61a7-5de7-8207-1f88efa59a64 -- inventory --format=json-pretty --filter-for-batch
Nov 24 09:57:08 compute-1 sudo[237090]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:57:09 compute-1 podman[237155]: 2025-11-24 09:57:09.280101389 +0000 UTC m=+0.048635010 container create 7619d2205037a3da0fe5165b70f086708197e9e8eeadef3db8665c3793b961b2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_turing, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, ceph=True, io.buildah.version=1.40.1)
Nov 24 09:57:09 compute-1 systemd[1]: Started libpod-conmon-7619d2205037a3da0fe5165b70f086708197e9e8eeadef3db8665c3793b961b2.scope.
Nov 24 09:57:09 compute-1 nova_compute[230010]: 2025-11-24 09:57:09.325 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:57:09 compute-1 systemd[1]: Started libcrun container.
Nov 24 09:57:09 compute-1 podman[237155]: 2025-11-24 09:57:09.263226227 +0000 UTC m=+0.031759868 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 09:57:09 compute-1 podman[237155]: 2025-11-24 09:57:09.36565102 +0000 UTC m=+0.134184651 container init 7619d2205037a3da0fe5165b70f086708197e9e8eeadef3db8665c3793b961b2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_turing, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Nov 24 09:57:09 compute-1 podman[237155]: 2025-11-24 09:57:09.372284423 +0000 UTC m=+0.140818044 container start 7619d2205037a3da0fe5165b70f086708197e9e8eeadef3db8665c3793b961b2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_turing, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 09:57:09 compute-1 podman[237155]: 2025-11-24 09:57:09.375864301 +0000 UTC m=+0.144397922 container attach 7619d2205037a3da0fe5165b70f086708197e9e8eeadef3db8665c3793b961b2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_turing, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 24 09:57:09 compute-1 heuristic_turing[237171]: 167 167
Nov 24 09:57:09 compute-1 systemd[1]: libpod-7619d2205037a3da0fe5165b70f086708197e9e8eeadef3db8665c3793b961b2.scope: Deactivated successfully.
Nov 24 09:57:09 compute-1 podman[237155]: 2025-11-24 09:57:09.379287384 +0000 UTC m=+0.147821005 container died 7619d2205037a3da0fe5165b70f086708197e9e8eeadef3db8665c3793b961b2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_turing, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 09:57:09 compute-1 systemd[1]: var-lib-containers-storage-overlay-99d761b0b0e6dc4f0334fcf7277b12378a45749a2f8f0117d1e4942328930411-merged.mount: Deactivated successfully.
Nov 24 09:57:09 compute-1 podman[237155]: 2025-11-24 09:57:09.420199044 +0000 UTC m=+0.188732665 container remove 7619d2205037a3da0fe5165b70f086708197e9e8eeadef3db8665c3793b961b2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_turing, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 24 09:57:09 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:57:09 compute-1 systemd[1]: libpod-conmon-7619d2205037a3da0fe5165b70f086708197e9e8eeadef3db8665c3793b961b2.scope: Deactivated successfully.
Nov 24 09:57:09 compute-1 ceph-mon[80009]: pgmap v907: 353 pgs: 353 active+clean; 167 MiB data, 324 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 1.8 MiB/s wr, 30 op/s
Nov 24 09:57:09 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:57:09 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:57:09 compute-1 podman[237194]: 2025-11-24 09:57:09.587052523 +0000 UTC m=+0.039714491 container create 1ee92c2dae0cc9561a8c5f88aa9ba2b13408f8dd25de9405d89910547b06a4ce (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_banzai, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, ceph=True, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 09:57:09 compute-1 systemd[1]: Started libpod-conmon-1ee92c2dae0cc9561a8c5f88aa9ba2b13408f8dd25de9405d89910547b06a4ce.scope.
Nov 24 09:57:09 compute-1 nova_compute[230010]: 2025-11-24 09:57:09.629 230014 DEBUG nova.network.neutron [None req-b6e43b8b-33fa-493d-9a63-307e139453a0 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 9558b085-fcfb-4cae-87bc-2840f81734fc] Updating instance_info_cache with network_info: [{"id": "f43553d8-3872-4217-8259-57949e64eab2", "address": "fa:16:3e:58:35:61", "network": {"id": "4a54e00b-2ddf-4829-be22-9a556b586781", "bridge": "br-int", "label": "tempest-network-smoke--280510625", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94d069fc040647d5a6e54894eec915fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf43553d8-38", "ovs_interfaceid": "f43553d8-3872-4217-8259-57949e64eab2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 24 09:57:09 compute-1 systemd[1]: Started libcrun container.
Nov 24 09:57:09 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ffdc502dd416ef234a6305b4880d8546625fd84228039774c7b1469f11e0cd3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 09:57:09 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ffdc502dd416ef234a6305b4880d8546625fd84228039774c7b1469f11e0cd3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 09:57:09 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ffdc502dd416ef234a6305b4880d8546625fd84228039774c7b1469f11e0cd3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 09:57:09 compute-1 podman[237194]: 2025-11-24 09:57:09.570745185 +0000 UTC m=+0.023407173 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 09:57:09 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ffdc502dd416ef234a6305b4880d8546625fd84228039774c7b1469f11e0cd3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 09:57:09 compute-1 podman[237194]: 2025-11-24 09:57:09.678737945 +0000 UTC m=+0.131399913 container init 1ee92c2dae0cc9561a8c5f88aa9ba2b13408f8dd25de9405d89910547b06a4ce (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_banzai, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, io.buildah.version=1.40.1)
Nov 24 09:57:09 compute-1 podman[237194]: 2025-11-24 09:57:09.684818443 +0000 UTC m=+0.137480411 container start 1ee92c2dae0cc9561a8c5f88aa9ba2b13408f8dd25de9405d89910547b06a4ce (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_banzai, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.40.1, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 09:57:09 compute-1 podman[237194]: 2025-11-24 09:57:09.688140945 +0000 UTC m=+0.140802913 container attach 1ee92c2dae0cc9561a8c5f88aa9ba2b13408f8dd25de9405d89910547b06a4ce (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_banzai, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 09:57:09 compute-1 nova_compute[230010]: 2025-11-24 09:57:09.713 230014 DEBUG oslo_concurrency.lockutils [None req-b6e43b8b-33fa-493d-9a63-307e139453a0 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Releasing lock "refresh_cache-9558b085-fcfb-4cae-87bc-2840f81734fc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 24 09:57:09 compute-1 nova_compute[230010]: 2025-11-24 09:57:09.713 230014 DEBUG nova.compute.manager [None req-b6e43b8b-33fa-493d-9a63-307e139453a0 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 9558b085-fcfb-4cae-87bc-2840f81734fc] Instance network_info: |[{"id": "f43553d8-3872-4217-8259-57949e64eab2", "address": "fa:16:3e:58:35:61", "network": {"id": "4a54e00b-2ddf-4829-be22-9a556b586781", "bridge": "br-int", "label": "tempest-network-smoke--280510625", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94d069fc040647d5a6e54894eec915fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf43553d8-38", "ovs_interfaceid": "f43553d8-3872-4217-8259-57949e64eab2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 24 09:57:09 compute-1 nova_compute[230010]: 2025-11-24 09:57:09.714 230014 DEBUG oslo_concurrency.lockutils [req-5d756cf4-d91c-4882-a171-c87963f7da21 req-060e4b75-ef8e-433c-8600-55b18b6744c9 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Acquired lock "refresh_cache-9558b085-fcfb-4cae-87bc-2840f81734fc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 24 09:57:09 compute-1 nova_compute[230010]: 2025-11-24 09:57:09.714 230014 DEBUG nova.network.neutron [req-5d756cf4-d91c-4882-a171-c87963f7da21 req-060e4b75-ef8e-433c-8600-55b18b6744c9 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 9558b085-fcfb-4cae-87bc-2840f81734fc] Refreshing network info cache for port f43553d8-3872-4217-8259-57949e64eab2 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 24 09:57:09 compute-1 nova_compute[230010]: 2025-11-24 09:57:09.716 230014 DEBUG nova.virt.libvirt.driver [None req-b6e43b8b-33fa-493d-9a63-307e139453a0 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 9558b085-fcfb-4cae-87bc-2840f81734fc] Start _get_guest_xml network_info=[{"id": "f43553d8-3872-4217-8259-57949e64eab2", "address": "fa:16:3e:58:35:61", "network": {"id": "4a54e00b-2ddf-4829-be22-9a556b586781", "bridge": "br-int", "label": "tempest-network-smoke--280510625", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94d069fc040647d5a6e54894eec915fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf43553d8-38", "ovs_interfaceid": "f43553d8-3872-4217-8259-57949e64eab2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-24T09:52:37Z,direct_url=<?>,disk_format='qcow2',id=6ef14bdf-4f04-4400-8040-4409d9d5271e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='cf636babb68a4ebe9bf137d3fe0e4c0c',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-24T09:52:41Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'encryption_options': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'boot_index': 0, 'size': 0, 'disk_bus': 'virtio', 'encrypted': False, 'encryption_secret_uuid': None, 'image_id': '6ef14bdf-4f04-4400-8040-4409d9d5271e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 24 09:57:09 compute-1 nova_compute[230010]: 2025-11-24 09:57:09.721 230014 WARNING nova.virt.libvirt.driver [None req-b6e43b8b-33fa-493d-9a63-307e139453a0 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 24 09:57:09 compute-1 nova_compute[230010]: 2025-11-24 09:57:09.728 230014 DEBUG nova.virt.libvirt.host [None req-b6e43b8b-33fa-493d-9a63-307e139453a0 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 24 09:57:09 compute-1 nova_compute[230010]: 2025-11-24 09:57:09.730 230014 DEBUG nova.virt.libvirt.host [None req-b6e43b8b-33fa-493d-9a63-307e139453a0 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 24 09:57:09 compute-1 nova_compute[230010]: 2025-11-24 09:57:09.733 230014 DEBUG nova.virt.libvirt.host [None req-b6e43b8b-33fa-493d-9a63-307e139453a0 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 24 09:57:09 compute-1 nova_compute[230010]: 2025-11-24 09:57:09.733 230014 DEBUG nova.virt.libvirt.host [None req-b6e43b8b-33fa-493d-9a63-307e139453a0 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 24 09:57:09 compute-1 nova_compute[230010]: 2025-11-24 09:57:09.734 230014 DEBUG nova.virt.libvirt.driver [None req-b6e43b8b-33fa-493d-9a63-307e139453a0 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 24 09:57:09 compute-1 nova_compute[230010]: 2025-11-24 09:57:09.734 230014 DEBUG nova.virt.hardware [None req-b6e43b8b-33fa-493d-9a63-307e139453a0 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-24T09:52:36Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='4a5d03ad-925b-45f1-89bd-f1325f9f3292',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-24T09:52:37Z,direct_url=<?>,disk_format='qcow2',id=6ef14bdf-4f04-4400-8040-4409d9d5271e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='cf636babb68a4ebe9bf137d3fe0e4c0c',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-24T09:52:41Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 24 09:57:09 compute-1 nova_compute[230010]: 2025-11-24 09:57:09.734 230014 DEBUG nova.virt.hardware [None req-b6e43b8b-33fa-493d-9a63-307e139453a0 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 24 09:57:09 compute-1 nova_compute[230010]: 2025-11-24 09:57:09.735 230014 DEBUG nova.virt.hardware [None req-b6e43b8b-33fa-493d-9a63-307e139453a0 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 24 09:57:09 compute-1 nova_compute[230010]: 2025-11-24 09:57:09.735 230014 DEBUG nova.virt.hardware [None req-b6e43b8b-33fa-493d-9a63-307e139453a0 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 24 09:57:09 compute-1 nova_compute[230010]: 2025-11-24 09:57:09.735 230014 DEBUG nova.virt.hardware [None req-b6e43b8b-33fa-493d-9a63-307e139453a0 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 24 09:57:09 compute-1 nova_compute[230010]: 2025-11-24 09:57:09.736 230014 DEBUG nova.virt.hardware [None req-b6e43b8b-33fa-493d-9a63-307e139453a0 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 24 09:57:09 compute-1 nova_compute[230010]: 2025-11-24 09:57:09.736 230014 DEBUG nova.virt.hardware [None req-b6e43b8b-33fa-493d-9a63-307e139453a0 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 24 09:57:09 compute-1 nova_compute[230010]: 2025-11-24 09:57:09.736 230014 DEBUG nova.virt.hardware [None req-b6e43b8b-33fa-493d-9a63-307e139453a0 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 24 09:57:09 compute-1 nova_compute[230010]: 2025-11-24 09:57:09.736 230014 DEBUG nova.virt.hardware [None req-b6e43b8b-33fa-493d-9a63-307e139453a0 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 24 09:57:09 compute-1 nova_compute[230010]: 2025-11-24 09:57:09.737 230014 DEBUG nova.virt.hardware [None req-b6e43b8b-33fa-493d-9a63-307e139453a0 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 24 09:57:09 compute-1 nova_compute[230010]: 2025-11-24 09:57:09.737 230014 DEBUG nova.virt.hardware [None req-b6e43b8b-33fa-493d-9a63-307e139453a0 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 24 09:57:09 compute-1 nova_compute[230010]: 2025-11-24 09:57:09.739 230014 DEBUG oslo_concurrency.processutils [None req-b6e43b8b-33fa-493d-9a63-307e139453a0 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 09:57:10 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Nov 24 09:57:10 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/146065392' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 24 09:57:10 compute-1 nova_compute[230010]: 2025-11-24 09:57:10.187 230014 DEBUG oslo_concurrency.processutils [None req-b6e43b8b-33fa-493d-9a63-307e139453a0 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 09:57:10 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:57:10 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:57:10 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:57:10.188 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:57:10 compute-1 nova_compute[230010]: 2025-11-24 09:57:10.216 230014 DEBUG nova.storage.rbd_utils [None req-b6e43b8b-33fa-493d-9a63-307e139453a0 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] rbd image 9558b085-fcfb-4cae-87bc-2840f81734fc_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 24 09:57:10 compute-1 nova_compute[230010]: 2025-11-24 09:57:10.220 230014 DEBUG oslo_concurrency.processutils [None req-b6e43b8b-33fa-493d-9a63-307e139453a0 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 09:57:10 compute-1 infallible_banzai[237210]: [
Nov 24 09:57:10 compute-1 infallible_banzai[237210]:     {
Nov 24 09:57:10 compute-1 infallible_banzai[237210]:         "available": false,
Nov 24 09:57:10 compute-1 infallible_banzai[237210]:         "being_replaced": false,
Nov 24 09:57:10 compute-1 infallible_banzai[237210]:         "ceph_device_lvm": false,
Nov 24 09:57:10 compute-1 infallible_banzai[237210]:         "device_id": "QEMU_DVD-ROM_QM00001",
Nov 24 09:57:10 compute-1 infallible_banzai[237210]:         "lsm_data": {},
Nov 24 09:57:10 compute-1 infallible_banzai[237210]:         "lvs": [],
Nov 24 09:57:10 compute-1 infallible_banzai[237210]:         "path": "/dev/sr0",
Nov 24 09:57:10 compute-1 infallible_banzai[237210]:         "rejected_reasons": [
Nov 24 09:57:10 compute-1 infallible_banzai[237210]:             "Has a FileSystem",
Nov 24 09:57:10 compute-1 infallible_banzai[237210]:             "Insufficient space (<5GB)"
Nov 24 09:57:10 compute-1 infallible_banzai[237210]:         ],
Nov 24 09:57:10 compute-1 infallible_banzai[237210]:         "sys_api": {
Nov 24 09:57:10 compute-1 infallible_banzai[237210]:             "actuators": null,
Nov 24 09:57:10 compute-1 infallible_banzai[237210]:             "device_nodes": [
Nov 24 09:57:10 compute-1 infallible_banzai[237210]:                 "sr0"
Nov 24 09:57:10 compute-1 infallible_banzai[237210]:             ],
Nov 24 09:57:10 compute-1 infallible_banzai[237210]:             "devname": "sr0",
Nov 24 09:57:10 compute-1 infallible_banzai[237210]:             "human_readable_size": "482.00 KB",
Nov 24 09:57:10 compute-1 infallible_banzai[237210]:             "id_bus": "ata",
Nov 24 09:57:10 compute-1 infallible_banzai[237210]:             "model": "QEMU DVD-ROM",
Nov 24 09:57:10 compute-1 infallible_banzai[237210]:             "nr_requests": "2",
Nov 24 09:57:10 compute-1 infallible_banzai[237210]:             "parent": "/dev/sr0",
Nov 24 09:57:10 compute-1 infallible_banzai[237210]:             "partitions": {},
Nov 24 09:57:10 compute-1 infallible_banzai[237210]:             "path": "/dev/sr0",
Nov 24 09:57:10 compute-1 infallible_banzai[237210]:             "removable": "1",
Nov 24 09:57:10 compute-1 infallible_banzai[237210]:             "rev": "2.5+",
Nov 24 09:57:10 compute-1 infallible_banzai[237210]:             "ro": "0",
Nov 24 09:57:10 compute-1 infallible_banzai[237210]:             "rotational": "1",
Nov 24 09:57:10 compute-1 infallible_banzai[237210]:             "sas_address": "",
Nov 24 09:57:10 compute-1 infallible_banzai[237210]:             "sas_device_handle": "",
Nov 24 09:57:10 compute-1 infallible_banzai[237210]:             "scheduler_mode": "mq-deadline",
Nov 24 09:57:10 compute-1 infallible_banzai[237210]:             "sectors": 0,
Nov 24 09:57:10 compute-1 infallible_banzai[237210]:             "sectorsize": "2048",
Nov 24 09:57:10 compute-1 infallible_banzai[237210]:             "size": 493568.0,
Nov 24 09:57:10 compute-1 infallible_banzai[237210]:             "support_discard": "2048",
Nov 24 09:57:10 compute-1 infallible_banzai[237210]:             "type": "disk",
Nov 24 09:57:10 compute-1 infallible_banzai[237210]:             "vendor": "QEMU"
Nov 24 09:57:10 compute-1 infallible_banzai[237210]:         }
Nov 24 09:57:10 compute-1 infallible_banzai[237210]:     }
Nov 24 09:57:10 compute-1 infallible_banzai[237210]: ]
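The JSON block above is a ceph-volume inventory report emitted by the short-lived infallible_banzai container: the only candidate device on this host, /dev/sr0, is rejected for OSD use. A small sketch, assuming the field names match the logged output, of splitting such a report into usable and rejected devices:

# Sketch: filter a ceph-volume inventory report (like the JSON above)
# down to devices that could become OSDs. Field names follow the logged
# output; this is illustrative, not cephadm's actual code path.
import json

def usable_devices(inventory_json: str):
    usable, rejected = [], {}
    for dev in json.loads(inventory_json):
        if dev.get("available"):
            usable.append(dev["path"])
        else:
            rejected[dev["path"]] = dev.get("rejected_reasons", [])
    return usable, rejected

# For the report above this yields:
# ([], {"/dev/sr0": ["Has a FileSystem", "Insufficient space (<5GB)"]})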
Nov 24 09:57:10 compute-1 systemd[1]: libpod-1ee92c2dae0cc9561a8c5f88aa9ba2b13408f8dd25de9405d89910547b06a4ce.scope: Deactivated successfully.
Nov 24 09:57:10 compute-1 podman[237194]: 2025-11-24 09:57:10.44996437 +0000 UTC m=+0.902626348 container died 1ee92c2dae0cc9561a8c5f88aa9ba2b13408f8dd25de9405d89910547b06a4ce (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_banzai, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 09:57:10 compute-1 systemd[1]: var-lib-containers-storage-overlay-1ffdc502dd416ef234a6305b4880d8546625fd84228039774c7b1469f11e0cd3-merged.mount: Deactivated successfully.
Nov 24 09:57:10 compute-1 podman[237194]: 2025-11-24 09:57:10.499524591 +0000 UTC m=+0.952186559 container remove 1ee92c2dae0cc9561a8c5f88aa9ba2b13408f8dd25de9405d89910547b06a4ce (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_banzai, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 09:57:10 compute-1 ceph-mon[80009]: from='client.? 192.168.122.101:0/146065392' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 24 09:57:10 compute-1 systemd[1]: libpod-conmon-1ee92c2dae0cc9561a8c5f88aa9ba2b13408f8dd25de9405d89910547b06a4ce.scope: Deactivated successfully.
Nov 24 09:57:10 compute-1 sudo[237090]: pam_unix(sudo:session): session closed for user root
Nov 24 09:57:10 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Nov 24 09:57:10 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Nov 24 09:57:10 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Nov 24 09:57:10 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4164713245' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 24 09:57:10 compute-1 nova_compute[230010]: 2025-11-24 09:57:10.684 230014 DEBUG oslo_concurrency.processutils [None req-b6e43b8b-33fa-493d-9a63-307e139453a0 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 09:57:10 compute-1 nova_compute[230010]: 2025-11-24 09:57:10.687 230014 DEBUG nova.virt.libvirt.vif [None req-b6e43b8b-33fa-493d-9a63-307e139453a0 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-24T09:57:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-826525135',display_name='tempest-TestNetworkBasicOps-server-826525135',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-826525135',id=5,image_ref='6ef14bdf-4f04-4400-8040-4409d9d5271e',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBC35slzFXIscG7yI0ldCNK4vlvp0/JkuMYp+G9aKEuW9NB0+nlUoAY9//FD0F8qY2c6aehGz4dqJCwd0w9isq9P1Emwaoz7MA2BbTfYqIAVwl0HpYimM2CBxhvzKgVHsXQ==',key_name='tempest-TestNetworkBasicOps-569358808',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='94d069fc040647d5a6e54894eec915fe',ramdisk_id='',reservation_id='r-epqclak3',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6ef14bdf-4f04-4400-8040-4409d9d5271e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1844071378',owner_user_name='tempest-TestNetworkBasicOps-1844071378-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-24T09:57:03Z,user_data=None,user_id='43f79ff3105e4372a3c095e8057d4f1f',uuid=9558b085-fcfb-4cae-87bc-2840f81734fc,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f43553d8-3872-4217-8259-57949e64eab2", "address": "fa:16:3e:58:35:61", "network": {"id": "4a54e00b-2ddf-4829-be22-9a556b586781", "bridge": "br-int", "label": "tempest-network-smoke--280510625", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94d069fc040647d5a6e54894eec915fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf43553d8-38", "ovs_interfaceid": "f43553d8-3872-4217-8259-57949e64eab2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 24 09:57:10 compute-1 nova_compute[230010]: 2025-11-24 09:57:10.688 230014 DEBUG nova.network.os_vif_util [None req-b6e43b8b-33fa-493d-9a63-307e139453a0 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Converting VIF {"id": "f43553d8-3872-4217-8259-57949e64eab2", "address": "fa:16:3e:58:35:61", "network": {"id": "4a54e00b-2ddf-4829-be22-9a556b586781", "bridge": "br-int", "label": "tempest-network-smoke--280510625", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94d069fc040647d5a6e54894eec915fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf43553d8-38", "ovs_interfaceid": "f43553d8-3872-4217-8259-57949e64eab2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 24 09:57:10 compute-1 nova_compute[230010]: 2025-11-24 09:57:10.689 230014 DEBUG nova.network.os_vif_util [None req-b6e43b8b-33fa-493d-9a63-307e139453a0 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:58:35:61,bridge_name='br-int',has_traffic_filtering=True,id=f43553d8-3872-4217-8259-57949e64eab2,network=Network(4a54e00b-2ddf-4829-be22-9a556b586781),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf43553d8-38') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 24 09:57:10 compute-1 nova_compute[230010]: 2025-11-24 09:57:10.691 230014 DEBUG nova.objects.instance [None req-b6e43b8b-33fa-493d-9a63-307e139453a0 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lazy-loading 'pci_devices' on Instance uuid 9558b085-fcfb-4cae-87bc-2840f81734fc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 24 09:57:10 compute-1 nova_compute[230010]: 2025-11-24 09:57:10.707 230014 DEBUG nova.virt.libvirt.driver [None req-b6e43b8b-33fa-493d-9a63-307e139453a0 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 9558b085-fcfb-4cae-87bc-2840f81734fc] End _get_guest_xml xml=<domain type="kvm">
Nov 24 09:57:10 compute-1 nova_compute[230010]:   <uuid>9558b085-fcfb-4cae-87bc-2840f81734fc</uuid>
Nov 24 09:57:10 compute-1 nova_compute[230010]:   <name>instance-00000005</name>
Nov 24 09:57:10 compute-1 nova_compute[230010]:   <memory>131072</memory>
Nov 24 09:57:10 compute-1 nova_compute[230010]:   <vcpu>1</vcpu>
Nov 24 09:57:10 compute-1 nova_compute[230010]:   <metadata>
Nov 24 09:57:10 compute-1 nova_compute[230010]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 24 09:57:10 compute-1 nova_compute[230010]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 24 09:57:10 compute-1 nova_compute[230010]:       <nova:name>tempest-TestNetworkBasicOps-server-826525135</nova:name>
Nov 24 09:57:10 compute-1 nova_compute[230010]:       <nova:creationTime>2025-11-24 09:57:09</nova:creationTime>
Nov 24 09:57:10 compute-1 nova_compute[230010]:       <nova:flavor name="m1.nano">
Nov 24 09:57:10 compute-1 nova_compute[230010]:         <nova:memory>128</nova:memory>
Nov 24 09:57:10 compute-1 nova_compute[230010]:         <nova:disk>1</nova:disk>
Nov 24 09:57:10 compute-1 nova_compute[230010]:         <nova:swap>0</nova:swap>
Nov 24 09:57:10 compute-1 nova_compute[230010]:         <nova:ephemeral>0</nova:ephemeral>
Nov 24 09:57:10 compute-1 nova_compute[230010]:         <nova:vcpus>1</nova:vcpus>
Nov 24 09:57:10 compute-1 nova_compute[230010]:       </nova:flavor>
Nov 24 09:57:10 compute-1 nova_compute[230010]:       <nova:owner>
Nov 24 09:57:10 compute-1 nova_compute[230010]:         <nova:user uuid="43f79ff3105e4372a3c095e8057d4f1f">tempest-TestNetworkBasicOps-1844071378-project-member</nova:user>
Nov 24 09:57:10 compute-1 nova_compute[230010]:         <nova:project uuid="94d069fc040647d5a6e54894eec915fe">tempest-TestNetworkBasicOps-1844071378</nova:project>
Nov 24 09:57:10 compute-1 nova_compute[230010]:       </nova:owner>
Nov 24 09:57:10 compute-1 nova_compute[230010]:       <nova:root type="image" uuid="6ef14bdf-4f04-4400-8040-4409d9d5271e"/>
Nov 24 09:57:10 compute-1 nova_compute[230010]:       <nova:ports>
Nov 24 09:57:10 compute-1 nova_compute[230010]:         <nova:port uuid="f43553d8-3872-4217-8259-57949e64eab2">
Nov 24 09:57:10 compute-1 nova_compute[230010]:           <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Nov 24 09:57:10 compute-1 nova_compute[230010]:         </nova:port>
Nov 24 09:57:10 compute-1 nova_compute[230010]:       </nova:ports>
Nov 24 09:57:10 compute-1 nova_compute[230010]:     </nova:instance>
Nov 24 09:57:10 compute-1 nova_compute[230010]:   </metadata>
Nov 24 09:57:10 compute-1 nova_compute[230010]:   <sysinfo type="smbios">
Nov 24 09:57:10 compute-1 nova_compute[230010]:     <system>
Nov 24 09:57:10 compute-1 nova_compute[230010]:       <entry name="manufacturer">RDO</entry>
Nov 24 09:57:10 compute-1 nova_compute[230010]:       <entry name="product">OpenStack Compute</entry>
Nov 24 09:57:10 compute-1 nova_compute[230010]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 24 09:57:10 compute-1 nova_compute[230010]:       <entry name="serial">9558b085-fcfb-4cae-87bc-2840f81734fc</entry>
Nov 24 09:57:10 compute-1 nova_compute[230010]:       <entry name="uuid">9558b085-fcfb-4cae-87bc-2840f81734fc</entry>
Nov 24 09:57:10 compute-1 nova_compute[230010]:       <entry name="family">Virtual Machine</entry>
Nov 24 09:57:10 compute-1 nova_compute[230010]:     </system>
Nov 24 09:57:10 compute-1 nova_compute[230010]:   </sysinfo>
Nov 24 09:57:10 compute-1 nova_compute[230010]:   <os>
Nov 24 09:57:10 compute-1 nova_compute[230010]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 24 09:57:10 compute-1 nova_compute[230010]:     <boot dev="hd"/>
Nov 24 09:57:10 compute-1 nova_compute[230010]:     <smbios mode="sysinfo"/>
Nov 24 09:57:10 compute-1 nova_compute[230010]:   </os>
Nov 24 09:57:10 compute-1 nova_compute[230010]:   <features>
Nov 24 09:57:10 compute-1 nova_compute[230010]:     <acpi/>
Nov 24 09:57:10 compute-1 nova_compute[230010]:     <apic/>
Nov 24 09:57:10 compute-1 nova_compute[230010]:     <vmcoreinfo/>
Nov 24 09:57:10 compute-1 nova_compute[230010]:   </features>
Nov 24 09:57:10 compute-1 nova_compute[230010]:   <clock offset="utc">
Nov 24 09:57:10 compute-1 nova_compute[230010]:     <timer name="pit" tickpolicy="delay"/>
Nov 24 09:57:10 compute-1 nova_compute[230010]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 24 09:57:10 compute-1 nova_compute[230010]:     <timer name="hpet" present="no"/>
Nov 24 09:57:10 compute-1 nova_compute[230010]:   </clock>
Nov 24 09:57:10 compute-1 nova_compute[230010]:   <cpu mode="host-model" match="exact">
Nov 24 09:57:10 compute-1 nova_compute[230010]:     <topology sockets="1" cores="1" threads="1"/>
Nov 24 09:57:10 compute-1 nova_compute[230010]:   </cpu>
Nov 24 09:57:10 compute-1 nova_compute[230010]:   <devices>
Nov 24 09:57:10 compute-1 nova_compute[230010]:     <disk type="network" device="disk">
Nov 24 09:57:10 compute-1 nova_compute[230010]:       <driver type="raw" cache="none"/>
Nov 24 09:57:10 compute-1 nova_compute[230010]:       <source protocol="rbd" name="vms/9558b085-fcfb-4cae-87bc-2840f81734fc_disk">
Nov 24 09:57:10 compute-1 nova_compute[230010]:         <host name="192.168.122.100" port="6789"/>
Nov 24 09:57:10 compute-1 nova_compute[230010]:         <host name="192.168.122.102" port="6789"/>
Nov 24 09:57:10 compute-1 nova_compute[230010]:         <host name="192.168.122.101" port="6789"/>
Nov 24 09:57:10 compute-1 nova_compute[230010]:       </source>
Nov 24 09:57:10 compute-1 nova_compute[230010]:       <auth username="openstack">
Nov 24 09:57:10 compute-1 nova_compute[230010]:         <secret type="ceph" uuid="84a084c3-61a7-5de7-8207-1f88efa59a64"/>
Nov 24 09:57:10 compute-1 nova_compute[230010]:       </auth>
Nov 24 09:57:10 compute-1 nova_compute[230010]:       <target dev="vda" bus="virtio"/>
Nov 24 09:57:10 compute-1 nova_compute[230010]:     </disk>
Nov 24 09:57:10 compute-1 nova_compute[230010]:     <disk type="network" device="cdrom">
Nov 24 09:57:10 compute-1 nova_compute[230010]:       <driver type="raw" cache="none"/>
Nov 24 09:57:10 compute-1 nova_compute[230010]:       <source protocol="rbd" name="vms/9558b085-fcfb-4cae-87bc-2840f81734fc_disk.config">
Nov 24 09:57:10 compute-1 nova_compute[230010]:         <host name="192.168.122.100" port="6789"/>
Nov 24 09:57:10 compute-1 nova_compute[230010]:         <host name="192.168.122.102" port="6789"/>
Nov 24 09:57:10 compute-1 nova_compute[230010]:         <host name="192.168.122.101" port="6789"/>
Nov 24 09:57:10 compute-1 nova_compute[230010]:       </source>
Nov 24 09:57:10 compute-1 nova_compute[230010]:       <auth username="openstack">
Nov 24 09:57:10 compute-1 nova_compute[230010]:         <secret type="ceph" uuid="84a084c3-61a7-5de7-8207-1f88efa59a64"/>
Nov 24 09:57:10 compute-1 nova_compute[230010]:       </auth>
Nov 24 09:57:10 compute-1 nova_compute[230010]:       <target dev="sda" bus="sata"/>
Nov 24 09:57:10 compute-1 nova_compute[230010]:     </disk>
Nov 24 09:57:10 compute-1 nova_compute[230010]:     <interface type="ethernet">
Nov 24 09:57:10 compute-1 nova_compute[230010]:       <mac address="fa:16:3e:58:35:61"/>
Nov 24 09:57:10 compute-1 nova_compute[230010]:       <model type="virtio"/>
Nov 24 09:57:10 compute-1 nova_compute[230010]:       <driver name="vhost" rx_queue_size="512"/>
Nov 24 09:57:10 compute-1 nova_compute[230010]:       <mtu size="1442"/>
Nov 24 09:57:10 compute-1 nova_compute[230010]:       <target dev="tapf43553d8-38"/>
Nov 24 09:57:10 compute-1 nova_compute[230010]:     </interface>
Nov 24 09:57:10 compute-1 nova_compute[230010]:     <serial type="pty">
Nov 24 09:57:10 compute-1 nova_compute[230010]:       <log file="/var/lib/nova/instances/9558b085-fcfb-4cae-87bc-2840f81734fc/console.log" append="off"/>
Nov 24 09:57:10 compute-1 nova_compute[230010]:     </serial>
Nov 24 09:57:10 compute-1 nova_compute[230010]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 24 09:57:10 compute-1 nova_compute[230010]:     <video>
Nov 24 09:57:10 compute-1 nova_compute[230010]:       <model type="virtio"/>
Nov 24 09:57:10 compute-1 nova_compute[230010]:     </video>
Nov 24 09:57:10 compute-1 nova_compute[230010]:     <input type="tablet" bus="usb"/>
Nov 24 09:57:10 compute-1 nova_compute[230010]:     <rng model="virtio">
Nov 24 09:57:10 compute-1 nova_compute[230010]:       <backend model="random">/dev/urandom</backend>
Nov 24 09:57:10 compute-1 nova_compute[230010]:     </rng>
Nov 24 09:57:10 compute-1 nova_compute[230010]:     <controller type="pci" model="pcie-root"/>
Nov 24 09:57:10 compute-1 nova_compute[230010]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 09:57:10 compute-1 nova_compute[230010]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 09:57:10 compute-1 nova_compute[230010]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 09:57:10 compute-1 nova_compute[230010]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 09:57:10 compute-1 nova_compute[230010]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 09:57:10 compute-1 nova_compute[230010]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 09:57:10 compute-1 nova_compute[230010]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 09:57:10 compute-1 nova_compute[230010]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 09:57:10 compute-1 nova_compute[230010]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 09:57:10 compute-1 nova_compute[230010]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 09:57:10 compute-1 nova_compute[230010]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 09:57:10 compute-1 nova_compute[230010]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 09:57:10 compute-1 nova_compute[230010]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 09:57:10 compute-1 nova_compute[230010]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 09:57:10 compute-1 nova_compute[230010]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 09:57:10 compute-1 nova_compute[230010]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 09:57:10 compute-1 nova_compute[230010]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 09:57:10 compute-1 nova_compute[230010]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 09:57:10 compute-1 nova_compute[230010]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 09:57:10 compute-1 nova_compute[230010]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 09:57:10 compute-1 nova_compute[230010]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 09:57:10 compute-1 nova_compute[230010]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 09:57:10 compute-1 nova_compute[230010]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 09:57:10 compute-1 nova_compute[230010]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 09:57:10 compute-1 nova_compute[230010]:     <controller type="usb" index="0"/>
Nov 24 09:57:10 compute-1 nova_compute[230010]:     <memballoon model="virtio">
Nov 24 09:57:10 compute-1 nova_compute[230010]:       <stats period="10"/>
Nov 24 09:57:10 compute-1 nova_compute[230010]:     </memballoon>
Nov 24 09:57:10 compute-1 nova_compute[230010]:   </devices>
Nov 24 09:57:10 compute-1 nova_compute[230010]: </domain>
Nov 24 09:57:10 compute-1 nova_compute[230010]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
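The domain XML above is what _get_guest_xml hands to libvirt; both <disk type="network"> elements point straight at RBD using the monitor addresses discovered by the earlier mon dump. A stdlib-only sketch that pulls those disks back out of XML shaped like this (an illustration, not a Nova API):

# Sketch: list the RBD-backed disks in a libvirt domain XML like the
# one Nova just logged. Assumes the element layout shown above.
import xml.etree.ElementTree as ET

def rbd_disks(domain_xml: str):
    root = ET.fromstring(domain_xml)
    for disk in root.findall("./devices/disk"):
        source = disk.find("source")
        target = disk.find("target")
        if source is not None and source.get("protocol") == "rbd":
            mons = [h.get("name") for h in source.findall("host")]
            yield (target.get("dev"), source.get("name"), mons)

# For the XML above this yields two entries, e.g.
# ("vda", "vms/9558b085-fcfb-4cae-87bc-2840f81734fc_disk",
#  ["192.168.122.100", "192.168.122.102", "192.168.122.101"])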
Nov 24 09:57:10 compute-1 nova_compute[230010]: 2025-11-24 09:57:10.709 230014 DEBUG nova.compute.manager [None req-b6e43b8b-33fa-493d-9a63-307e139453a0 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 9558b085-fcfb-4cae-87bc-2840f81734fc] Preparing to wait for external event network-vif-plugged-f43553d8-3872-4217-8259-57949e64eab2 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 24 09:57:10 compute-1 nova_compute[230010]: 2025-11-24 09:57:10.709 230014 DEBUG oslo_concurrency.lockutils [None req-b6e43b8b-33fa-493d-9a63-307e139453a0 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Acquiring lock "9558b085-fcfb-4cae-87bc-2840f81734fc-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 09:57:10 compute-1 nova_compute[230010]: 2025-11-24 09:57:10.709 230014 DEBUG oslo_concurrency.lockutils [None req-b6e43b8b-33fa-493d-9a63-307e139453a0 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lock "9558b085-fcfb-4cae-87bc-2840f81734fc-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 09:57:10 compute-1 nova_compute[230010]: 2025-11-24 09:57:10.709 230014 DEBUG oslo_concurrency.lockutils [None req-b6e43b8b-33fa-493d-9a63-307e139453a0 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lock "9558b085-fcfb-4cae-87bc-2840f81734fc-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 09:57:10 compute-1 nova_compute[230010]: 2025-11-24 09:57:10.710 230014 DEBUG nova.virt.libvirt.vif [None req-b6e43b8b-33fa-493d-9a63-307e139453a0 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-24T09:57:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-826525135',display_name='tempest-TestNetworkBasicOps-server-826525135',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-826525135',id=5,image_ref='6ef14bdf-4f04-4400-8040-4409d9d5271e',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBC35slzFXIscG7yI0ldCNK4vlvp0/JkuMYp+G9aKEuW9NB0+nlUoAY9//FD0F8qY2c6aehGz4dqJCwd0w9isq9P1Emwaoz7MA2BbTfYqIAVwl0HpYimM2CBxhvzKgVHsXQ==',key_name='tempest-TestNetworkBasicOps-569358808',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='94d069fc040647d5a6e54894eec915fe',ramdisk_id='',reservation_id='r-epqclak3',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6ef14bdf-4f04-4400-8040-4409d9d5271e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1844071378',owner_user_name='tempest-TestNetworkBasicOps-1844071378-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-24T09:57:03Z,user_data=None,user_id='43f79ff3105e4372a3c095e8057d4f1f',uuid=9558b085-fcfb-4cae-87bc-2840f81734fc,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f43553d8-3872-4217-8259-57949e64eab2", "address": "fa:16:3e:58:35:61", "network": {"id": "4a54e00b-2ddf-4829-be22-9a556b586781", "bridge": "br-int", "label": "tempest-network-smoke--280510625", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94d069fc040647d5a6e54894eec915fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf43553d8-38", "ovs_interfaceid": "f43553d8-3872-4217-8259-57949e64eab2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 24 09:57:10 compute-1 nova_compute[230010]: 2025-11-24 09:57:10.710 230014 DEBUG nova.network.os_vif_util [None req-b6e43b8b-33fa-493d-9a63-307e139453a0 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Converting VIF {"id": "f43553d8-3872-4217-8259-57949e64eab2", "address": "fa:16:3e:58:35:61", "network": {"id": "4a54e00b-2ddf-4829-be22-9a556b586781", "bridge": "br-int", "label": "tempest-network-smoke--280510625", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94d069fc040647d5a6e54894eec915fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf43553d8-38", "ovs_interfaceid": "f43553d8-3872-4217-8259-57949e64eab2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 24 09:57:10 compute-1 nova_compute[230010]: 2025-11-24 09:57:10.711 230014 DEBUG nova.network.os_vif_util [None req-b6e43b8b-33fa-493d-9a63-307e139453a0 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:58:35:61,bridge_name='br-int',has_traffic_filtering=True,id=f43553d8-3872-4217-8259-57949e64eab2,network=Network(4a54e00b-2ddf-4829-be22-9a556b586781),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf43553d8-38') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 24 09:57:10 compute-1 nova_compute[230010]: 2025-11-24 09:57:10.711 230014 DEBUG os_vif [None req-b6e43b8b-33fa-493d-9a63-307e139453a0 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:58:35:61,bridge_name='br-int',has_traffic_filtering=True,id=f43553d8-3872-4217-8259-57949e64eab2,network=Network(4a54e00b-2ddf-4829-be22-9a556b586781),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf43553d8-38') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 24 09:57:10 compute-1 nova_compute[230010]: 2025-11-24 09:57:10.712 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:57:10 compute-1 nova_compute[230010]: 2025-11-24 09:57:10.712 230014 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 24 09:57:10 compute-1 nova_compute[230010]: 2025-11-24 09:57:10.712 230014 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 24 09:57:10 compute-1 nova_compute[230010]: 2025-11-24 09:57:10.715 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:57:10 compute-1 nova_compute[230010]: 2025-11-24 09:57:10.716 230014 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf43553d8-38, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 24 09:57:10 compute-1 nova_compute[230010]: 2025-11-24 09:57:10.716 230014 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapf43553d8-38, col_values=(('external_ids', {'iface-id': 'f43553d8-3872-4217-8259-57949e64eab2', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:58:35:61', 'vm-uuid': '9558b085-fcfb-4cae-87bc-2840f81734fc'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
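The two ovsdbapp transactions above add the tap device to br-int and stamp the Neutron port identity onto the Interface record, which is what lets OVN claim the port moments later. The same effect expressed through the ovs-vsctl CLI, with values taken from the log (the CLI form is an illustration; Nova itself goes through the ovsdbapp IDL as shown):

# Sketch: ovs-vsctl equivalent of the logged AddPortCommand + DbSetCommand.
import subprocess

def plug_port(bridge, dev, iface_id, mac, vm_uuid):
    subprocess.check_call(
        ["ovs-vsctl", "--may-exist", "add-port", bridge, dev, "--",
         "set", "Interface", dev,
         f"external_ids:iface-id={iface_id}",
         "external_ids:iface-status=active",
         f"external_ids:attached-mac={mac}",
         f"external_ids:vm-uuid={vm_uuid}"])

plug_port("br-int", "tapf43553d8-38",
          "f43553d8-3872-4217-8259-57949e64eab2",
          "fa:16:3e:58:35:61",
          "9558b085-fcfb-4cae-87bc-2840f81734fc")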
Nov 24 09:57:10 compute-1 NetworkManager[48870]: <info>  [1763978230.7631] manager: (tapf43553d8-38): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/41)
Nov 24 09:57:10 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:57:10 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 09:57:10 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:57:10.761 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 09:57:10 compute-1 nova_compute[230010]: 2025-11-24 09:57:10.762 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:57:10 compute-1 nova_compute[230010]: 2025-11-24 09:57:10.767 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 24 09:57:10 compute-1 nova_compute[230010]: 2025-11-24 09:57:10.769 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:57:10 compute-1 nova_compute[230010]: 2025-11-24 09:57:10.771 230014 INFO os_vif [None req-b6e43b8b-33fa-493d-9a63-307e139453a0 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:58:35:61,bridge_name='br-int',has_traffic_filtering=True,id=f43553d8-3872-4217-8259-57949e64eab2,network=Network(4a54e00b-2ddf-4829-be22-9a556b586781),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf43553d8-38')
Nov 24 09:57:10 compute-1 nova_compute[230010]: 2025-11-24 09:57:10.815 230014 DEBUG nova.virt.libvirt.driver [None req-b6e43b8b-33fa-493d-9a63-307e139453a0 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 24 09:57:10 compute-1 nova_compute[230010]: 2025-11-24 09:57:10.816 230014 DEBUG nova.virt.libvirt.driver [None req-b6e43b8b-33fa-493d-9a63-307e139453a0 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 24 09:57:10 compute-1 nova_compute[230010]: 2025-11-24 09:57:10.816 230014 DEBUG nova.virt.libvirt.driver [None req-b6e43b8b-33fa-493d-9a63-307e139453a0 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] No VIF found with MAC fa:16:3e:58:35:61, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 24 09:57:10 compute-1 nova_compute[230010]: 2025-11-24 09:57:10.816 230014 INFO nova.virt.libvirt.driver [None req-b6e43b8b-33fa-493d-9a63-307e139453a0 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 9558b085-fcfb-4cae-87bc-2840f81734fc] Using config drive
Nov 24 09:57:10 compute-1 nova_compute[230010]: 2025-11-24 09:57:10.838 230014 DEBUG nova.storage.rbd_utils [None req-b6e43b8b-33fa-493d-9a63-307e139453a0 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] rbd image 9558b085-fcfb-4cae-87bc-2840f81734fc_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 24 09:57:10 compute-1 nova_compute[230010]: 2025-11-24 09:57:10.843 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:57:10 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 24 09:57:10 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 24 09:57:10 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 24 09:57:10 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 09:57:10 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Nov 24 09:57:10 compute-1 ceph-mon[80009]: log_channel(audit) log [INF] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 09:57:10 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Nov 24 09:57:10 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Nov 24 09:57:10 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Nov 24 09:57:10 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 09:57:10 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Nov 24 09:57:10 compute-1 ceph-mon[80009]: log_channel(audit) log [INF] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 09:57:10 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 24 09:57:10 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 09:57:11 compute-1 nova_compute[230010]: 2025-11-24 09:57:11.053 230014 DEBUG nova.network.neutron [req-5d756cf4-d91c-4882-a171-c87963f7da21 req-060e4b75-ef8e-433c-8600-55b18b6744c9 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 9558b085-fcfb-4cae-87bc-2840f81734fc] Updated VIF entry in instance network info cache for port f43553d8-3872-4217-8259-57949e64eab2. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 24 09:57:11 compute-1 nova_compute[230010]: 2025-11-24 09:57:11.054 230014 DEBUG nova.network.neutron [req-5d756cf4-d91c-4882-a171-c87963f7da21 req-060e4b75-ef8e-433c-8600-55b18b6744c9 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 9558b085-fcfb-4cae-87bc-2840f81734fc] Updating instance_info_cache with network_info: [{"id": "f43553d8-3872-4217-8259-57949e64eab2", "address": "fa:16:3e:58:35:61", "network": {"id": "4a54e00b-2ddf-4829-be22-9a556b586781", "bridge": "br-int", "label": "tempest-network-smoke--280510625", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94d069fc040647d5a6e54894eec915fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf43553d8-38", "ovs_interfaceid": "f43553d8-3872-4217-8259-57949e64eab2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 24 09:57:11 compute-1 nova_compute[230010]: 2025-11-24 09:57:11.071 230014 DEBUG oslo_concurrency.lockutils [req-5d756cf4-d91c-4882-a171-c87963f7da21 req-060e4b75-ef8e-433c-8600-55b18b6744c9 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Releasing lock "refresh_cache-9558b085-fcfb-4cae-87bc-2840f81734fc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 24 09:57:11 compute-1 nova_compute[230010]: 2025-11-24 09:57:11.428 230014 INFO nova.virt.libvirt.driver [None req-b6e43b8b-33fa-493d-9a63-307e139453a0 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 9558b085-fcfb-4cae-87bc-2840f81734fc] Creating config drive at /var/lib/nova/instances/9558b085-fcfb-4cae-87bc-2840f81734fc/disk.config
Nov 24 09:57:11 compute-1 nova_compute[230010]: 2025-11-24 09:57:11.433 230014 DEBUG oslo_concurrency.processutils [None req-b6e43b8b-33fa-493d-9a63-307e139453a0 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/9558b085-fcfb-4cae-87bc-2840f81734fc/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpvtbhz4e3 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 09:57:11 compute-1 ceph-mon[80009]: pgmap v908: 353 pgs: 353 active+clean; 167 MiB data, 324 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Nov 24 09:57:11 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:57:11 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:57:11 compute-1 ceph-mon[80009]: from='client.? 192.168.122.101:0/4164713245' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 24 09:57:11 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:57:11 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:57:11 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 09:57:11 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 09:57:11 compute-1 ceph-mon[80009]: pgmap v909: 353 pgs: 353 active+clean; 167 MiB data, 324 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.9 MiB/s wr, 30 op/s
Nov 24 09:57:11 compute-1 ceph-mon[80009]: pgmap v910: 353 pgs: 353 active+clean; 167 MiB data, 324 MiB used, 60 GiB / 60 GiB avail; 24 KiB/s rd, 2.3 MiB/s wr, 36 op/s
Nov 24 09:57:11 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:57:11 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:57:11 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 09:57:11 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 09:57:11 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 09:57:11 compute-1 nova_compute[230010]: 2025-11-24 09:57:11.558 230014 DEBUG oslo_concurrency.processutils [None req-b6e43b8b-33fa-493d-9a63-307e139453a0 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/9558b085-fcfb-4cae-87bc-2840f81734fc/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpvtbhz4e3" returned: 0 in 0.125s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 09:57:11 compute-1 nova_compute[230010]: 2025-11-24 09:57:11.588 230014 DEBUG nova.storage.rbd_utils [None req-b6e43b8b-33fa-493d-9a63-307e139453a0 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] rbd image 9558b085-fcfb-4cae-87bc-2840f81734fc_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 24 09:57:11 compute-1 nova_compute[230010]: 2025-11-24 09:57:11.592 230014 DEBUG oslo_concurrency.processutils [None req-b6e43b8b-33fa-493d-9a63-307e139453a0 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/9558b085-fcfb-4cae-87bc-2840f81734fc/disk.config 9558b085-fcfb-4cae-87bc-2840f81734fc_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 09:57:11 compute-1 nova_compute[230010]: 2025-11-24 09:57:11.766 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 09:57:11 compute-1 nova_compute[230010]: 2025-11-24 09:57:11.802 230014 DEBUG oslo_concurrency.processutils [None req-b6e43b8b-33fa-493d-9a63-307e139453a0 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/9558b085-fcfb-4cae-87bc-2840f81734fc/disk.config 9558b085-fcfb-4cae-87bc-2840f81734fc_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.209s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 09:57:11 compute-1 nova_compute[230010]: 2025-11-24 09:57:11.802 230014 INFO nova.virt.libvirt.driver [None req-b6e43b8b-33fa-493d-9a63-307e139453a0 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 9558b085-fcfb-4cae-87bc-2840f81734fc] Deleting local config drive /var/lib/nova/instances/9558b085-fcfb-4cae-87bc-2840f81734fc/disk.config because it was imported into RBD.
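The mkisofs and rbd import lines above show the whole config-drive round trip: build the ISO locally, push it into the vms pool as <uuid>_disk.config, then delete the local copy. A condensed sketch of the same three steps, mirroring the logged commands (the -publisher flag is omitted here for brevity):

# Sketch of the config-drive sequence in the surrounding log lines.
import os
import subprocess

def publish_config_drive(iso_path, staging_dir, image_name,
                         pool="vms", user="openstack",
                         conf="/etc/ceph/ceph.conf"):
    # Build the ISO9660 config drive (volume label "config-2").
    subprocess.check_call(
        ["/usr/bin/mkisofs", "-o", iso_path, "-ldots", "-allow-lowercase",
         "-allow-multidot", "-l", "-quiet", "-J", "-r",
         "-V", "config-2", staging_dir])
    # Import it into RBD so libvirt can attach it as a network cdrom.
    subprocess.check_call(
        ["rbd", "import", "--pool", pool, iso_path, image_name,
         "--image-format=2", "--id", user, "--conf", conf])
    os.unlink(iso_path)  # local ISO no longer needed once in RBD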
Nov 24 09:57:11 compute-1 kernel: tapf43553d8-38: entered promiscuous mode
Nov 24 09:57:11 compute-1 NetworkManager[48870]: <info>  [1763978231.8624] manager: (tapf43553d8-38): new Tun device (/org/freedesktop/NetworkManager/Devices/42)
Nov 24 09:57:11 compute-1 ovn_controller[132966]: 2025-11-24T09:57:11Z|00056|binding|INFO|Claiming lport f43553d8-3872-4217-8259-57949e64eab2 for this chassis.
Nov 24 09:57:11 compute-1 nova_compute[230010]: 2025-11-24 09:57:11.898 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:57:11 compute-1 ovn_controller[132966]: 2025-11-24T09:57:11Z|00057|binding|INFO|f43553d8-3872-4217-8259-57949e64eab2: Claiming fa:16:3e:58:35:61 10.100.0.9
Nov 24 09:57:11 compute-1 nova_compute[230010]: 2025-11-24 09:57:11.903 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:57:11 compute-1 systemd-machined[193537]: New machine qemu-3-instance-00000005.
Nov 24 09:57:11 compute-1 systemd-udevd[238591]: Network interface NamePolicy= disabled on kernel command line.
Nov 24 09:57:11 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:57:11.935 142336 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:58:35:61 10.100.0.9'], port_security=['fa:16:3e:58:35:61 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '9558b085-fcfb-4cae-87bc-2840f81734fc', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4a54e00b-2ddf-4829-be22-9a556b586781', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '94d069fc040647d5a6e54894eec915fe', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'e9b8d67b-4e9e-4fdc-b23f-05b645f04725', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=cefc33a4-ddb4-430f-bd3b-965ffc7d2eca, chassis=[<ovs.db.idl.Row object at 0x7f5c78678ac0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5c78678ac0>], logical_port=f43553d8-3872-4217-8259-57949e64eab2) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 24 09:57:11 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:57:11.937 142336 INFO neutron.agent.ovn.metadata.agent [-] Port f43553d8-3872-4217-8259-57949e64eab2 in datapath 4a54e00b-2ddf-4829-be22-9a556b586781 bound to our chassis
Nov 24 09:57:11 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:57:11.938 142336 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 4a54e00b-2ddf-4829-be22-9a556b586781
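Provisioning metadata for a datapath means creating an ovnmeta-<network> namespace and a VETH pair whose inner end serves the metadata IP, which is what the tap4a54e00b-20/-21 lines below show. A rough pyroute2 sketch of just the namespace/VETH step (the library choice and lack of error handling are assumptions, not neutron's exact privsep path):

# Sketch: VETH pair with one end moved into the ovnmeta namespace,
# matching the interface names logged by the metadata agent.
from pyroute2 import IPRoute, netns

ns = "ovnmeta-4a54e00b-2ddf-4829-be22-9a556b586781"
netns.create(ns)
ip = IPRoute()
ip.link("add", ifname="tap4a54e00b-20", kind="veth", peer="tap4a54e00b-21")
idx = ip.link_lookup(ifname="tap4a54e00b-21")[0]
ip.link("set", index=idx, net_ns_fd=ns)  # inner end into the namespace
ip.close()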
Nov 24 09:57:11 compute-1 NetworkManager[48870]: <info>  [1763978231.9430] device (tapf43553d8-38): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 24 09:57:11 compute-1 NetworkManager[48870]: <info>  [1763978231.9440] device (tapf43553d8-38): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 24 09:57:11 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:57:11.953 234803 DEBUG oslo.privsep.daemon [-] privsep: reply[c5d35878-d497-4db5-9c72-c7568714ad30]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 09:57:11 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:57:11.955 142336 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap4a54e00b-21 in ovnmeta-4a54e00b-2ddf-4829-be22-9a556b586781 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 24 09:57:11 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:57:11.957 234803 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap4a54e00b-20 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 24 09:57:11 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:57:11.957 234803 DEBUG oslo.privsep.daemon [-] privsep: reply[52792cf5-48e5-4ccd-ae7b-923845e41e1b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 09:57:11 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:57:11.958 234803 DEBUG oslo.privsep.daemon [-] privsep: reply[c3b1becf-5c4a-42d7-a630-e5a917a782a7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 09:57:11 compute-1 systemd[1]: Started Virtual Machine qemu-3-instance-00000005.
Nov 24 09:57:11 compute-1 ovn_controller[132966]: 2025-11-24T09:57:11Z|00058|binding|INFO|Setting lport f43553d8-3872-4217-8259-57949e64eab2 ovn-installed in OVS
Nov 24 09:57:11 compute-1 ovn_controller[132966]: 2025-11-24T09:57:11Z|00059|binding|INFO|Setting lport f43553d8-3872-4217-8259-57949e64eab2 up in Southbound
Nov 24 09:57:11 compute-1 nova_compute[230010]: 2025-11-24 09:57:11.968 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:57:11 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:57:11.977 142476 DEBUG oslo.privsep.daemon [-] privsep: reply[74b9a655-98f5-4ce0-ad13-248515677316]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 09:57:11 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:57:11.990 234803 DEBUG oslo.privsep.daemon [-] privsep: reply[93c3e9b4-0cd7-489f-a9c3-7264110369a6]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 09:57:12 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:57:12.018 234819 DEBUG oslo.privsep.daemon [-] privsep: reply[cd1e2f94-56c6-4e56-862b-4e1fbe1c456b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 09:57:12 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:57:12.025 234803 DEBUG oslo.privsep.daemon [-] privsep: reply[e058b5b3-b2c3-4379-aa91-a92462506f6d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 09:57:12 compute-1 NetworkManager[48870]: <info>  [1763978232.0263] manager: (tap4a54e00b-20): new Veth device (/org/freedesktop/NetworkManager/Devices/43)
Nov 24 09:57:12 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:57:12.061 234819 DEBUG oslo.privsep.daemon [-] privsep: reply[62c48810-ef2e-481a-8e32-ee9cd1df2add]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 09:57:12 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:57:12.063 234819 DEBUG oslo.privsep.daemon [-] privsep: reply[d5bf1a49-49f0-42ff-a8bf-529e1312b7c0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 09:57:12 compute-1 NetworkManager[48870]: <info>  [1763978232.0847] device (tap4a54e00b-20): carrier: link connected
Nov 24 09:57:12 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:57:12.088 234819 DEBUG oslo.privsep.daemon [-] privsep: reply[12632800-d427-4430-acf9-14e283cc9249]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 09:57:12 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:57:12.104 234803 DEBUG oslo.privsep.daemon [-] privsep: reply[863039b8-c035-4670-9fd8-4be93b7de05a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4a54e00b-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ec:bd:d5'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 21], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 417563, 'reachable_time': 43698, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 238625, 'error': None, 'target': 'ovnmeta-4a54e00b-2ddf-4829-be22-9a556b586781', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 09:57:12 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:57:12.121 234803 DEBUG oslo.privsep.daemon [-] privsep: reply[8efab08a-5fdc-4142-a666-752a0c05e4fe]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feec:bdd5'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 417563, 'tstamp': 417563}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 238627, 'error': None, 'target': 'ovnmeta-4a54e00b-2ddf-4829-be22-9a556b586781', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 09:57:12 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:57:12.137 234803 DEBUG oslo.privsep.daemon [-] privsep: reply[2b3f7fa4-c229-4a68-8dab-95d16e035f45]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4a54e00b-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ec:bd:d5'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 21], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 417563, 'reachable_time': 43698, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 238628, 'error': None, 'target': 'ovnmeta-4a54e00b-2ddf-4829-be22-9a556b586781', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
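
[Annotation] The two RTM_NEWLINK dumps above are netlink messages rendered by pyroute2, the library underneath neutron.privileged.agent.linux.ip_lib. A minimal sketch, assuming pyroute2 is installed and the ovnmeta namespace still exists, that reads the same IFLA_* attributes for the veth:

    # Read the attributes shown in the RTM_NEWLINK dumps above.
    from pyroute2 import NetNS

    with NetNS('ovnmeta-4a54e00b-2ddf-4829-be22-9a556b586781') as ns:
        for msg in ns.get_links():
            if msg.get_attr('IFLA_IFNAME') != 'tap4a54e00b-21':
                continue
            print(msg.get_attr('IFLA_ADDRESS'),    # fa:16:3e:ec:bd:d5
                  msg.get_attr('IFLA_MTU'),        # 1500
                  msg.get_attr('IFLA_OPERSTATE'))  # UP
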
Nov 24 09:57:12 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:57:12.170 234803 DEBUG oslo.privsep.daemon [-] privsep: reply[3a4a993d-e010-4300-9277-c993e1f016a3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 09:57:12 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:57:12 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:57:12 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:57:12.191 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:57:12 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:57:12.232 234803 DEBUG oslo.privsep.daemon [-] privsep: reply[67d970c5-2408-484d-8c80-1685df302d4f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 09:57:12 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:57:12.234 142336 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4a54e00b-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 24 09:57:12 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:57:12.235 142336 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 24 09:57:12 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:57:12.235 142336 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4a54e00b-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 24 09:57:12 compute-1 kernel: tap4a54e00b-20: entered promiscuous mode
Nov 24 09:57:12 compute-1 NetworkManager[48870]: <info>  [1763978232.2377] manager: (tap4a54e00b-20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/44)
Nov 24 09:57:12 compute-1 nova_compute[230010]: 2025-11-24 09:57:12.239 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:57:12 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:57:12.241 142336 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap4a54e00b-20, col_values=(('external_ids', {'iface-id': '825c51a9-1ab7-4d33-9d7f-c9278b05a734'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 24 09:57:12 compute-1 ovn_controller[132966]: 2025-11-24T09:57:12Z|00060|binding|INFO|Releasing lport 825c51a9-1ab7-4d33-9d7f-c9278b05a734 from this chassis (sb_readonly=0)
Nov 24 09:57:12 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:57:12.244 142336 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/4a54e00b-2ddf-4829-be22-9a556b586781.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/4a54e00b-2ddf-4829-be22-9a556b586781.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 24 09:57:12 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:57:12.245 234803 DEBUG oslo.privsep.daemon [-] privsep: reply[5efa4fcd-14a9-411d-925f-7bf919cc030c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 09:57:12 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:57:12.246 142336 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 24 09:57:12 compute-1 ovn_metadata_agent[142331]: global
Nov 24 09:57:12 compute-1 ovn_metadata_agent[142331]:     log         /dev/log local0 debug
Nov 24 09:57:12 compute-1 ovn_metadata_agent[142331]:     log-tag     haproxy-metadata-proxy-4a54e00b-2ddf-4829-be22-9a556b586781
Nov 24 09:57:12 compute-1 ovn_metadata_agent[142331]:     user        root
Nov 24 09:57:12 compute-1 ovn_metadata_agent[142331]:     group       root
Nov 24 09:57:12 compute-1 ovn_metadata_agent[142331]:     maxconn     1024
Nov 24 09:57:12 compute-1 ovn_metadata_agent[142331]:     pidfile     /var/lib/neutron/external/pids/4a54e00b-2ddf-4829-be22-9a556b586781.pid.haproxy
Nov 24 09:57:12 compute-1 ovn_metadata_agent[142331]:     daemon
Nov 24 09:57:12 compute-1 ovn_metadata_agent[142331]: 
Nov 24 09:57:12 compute-1 ovn_metadata_agent[142331]: defaults
Nov 24 09:57:12 compute-1 ovn_metadata_agent[142331]:     log global
Nov 24 09:57:12 compute-1 ovn_metadata_agent[142331]:     mode http
Nov 24 09:57:12 compute-1 ovn_metadata_agent[142331]:     option httplog
Nov 24 09:57:12 compute-1 ovn_metadata_agent[142331]:     option dontlognull
Nov 24 09:57:12 compute-1 ovn_metadata_agent[142331]:     option http-server-close
Nov 24 09:57:12 compute-1 ovn_metadata_agent[142331]:     option forwardfor
Nov 24 09:57:12 compute-1 ovn_metadata_agent[142331]:     retries                 3
Nov 24 09:57:12 compute-1 ovn_metadata_agent[142331]:     timeout http-request    30s
Nov 24 09:57:12 compute-1 ovn_metadata_agent[142331]:     timeout connect         30s
Nov 24 09:57:12 compute-1 ovn_metadata_agent[142331]:     timeout client          32s
Nov 24 09:57:12 compute-1 ovn_metadata_agent[142331]:     timeout server          32s
Nov 24 09:57:12 compute-1 ovn_metadata_agent[142331]:     timeout http-keep-alive 30s
Nov 24 09:57:12 compute-1 ovn_metadata_agent[142331]: 
Nov 24 09:57:12 compute-1 ovn_metadata_agent[142331]: 
Nov 24 09:57:12 compute-1 ovn_metadata_agent[142331]: listen listener
Nov 24 09:57:12 compute-1 ovn_metadata_agent[142331]:     bind 169.254.169.254:80
Nov 24 09:57:12 compute-1 ovn_metadata_agent[142331]:     server metadata /var/lib/neutron/metadata_proxy
Nov 24 09:57:12 compute-1 ovn_metadata_agent[142331]:     http-request add-header X-OVN-Network-ID 4a54e00b-2ddf-4829-be22-9a556b586781
Nov 24 09:57:12 compute-1 ovn_metadata_agent[142331]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 24 09:57:12 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:57:12.248 142336 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-4a54e00b-2ddf-4829-be22-9a556b586781', 'env', 'PROCESS_TAG=haproxy-4a54e00b-2ddf-4829-be22-9a556b586781', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/4a54e00b-2ddf-4829-be22-9a556b586781.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
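[Annotation] The haproxy_cfg dump above is written to /var/lib/neutron/ovn-metadata-proxy/<network-id>.conf, and the "Running command" line shows how the proxy is started inside the metadata namespace. A sketch reproducing that invocation, with the argv copied verbatim from the log line; neutron itself launches this through rootwrap and its process monitor rather than a bare subprocess call:

    import subprocess

    net_id = '4a54e00b-2ddf-4829-be22-9a556b586781'
    # Same argv as the create_process log line above.
    subprocess.check_call([
        'sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf',
        'ip', 'netns', 'exec', f'ovnmeta-{net_id}',
        'env', f'PROCESS_TAG=haproxy-{net_id}',
        'haproxy', '-f',
        f'/var/lib/neutron/ovn-metadata-proxy/{net_id}.conf',
    ])
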
Nov 24 09:57:12 compute-1 nova_compute[230010]: 2025-11-24 09:57:12.256 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:57:12 compute-1 nova_compute[230010]: 2025-11-24 09:57:12.594 230014 DEBUG nova.virt.driver [None req-7781f489-90a3-45f2-98f2-bf7ccb5c1239 - - - - - -] Emitting event <LifecycleEvent: 1763978232.593641, 9558b085-fcfb-4cae-87bc-2840f81734fc => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 24 09:57:12 compute-1 nova_compute[230010]: 2025-11-24 09:57:12.594 230014 INFO nova.compute.manager [None req-7781f489-90a3-45f2-98f2-bf7ccb5c1239 - - - - - -] [instance: 9558b085-fcfb-4cae-87bc-2840f81734fc] VM Started (Lifecycle Event)
Nov 24 09:57:12 compute-1 podman[238700]: 2025-11-24 09:57:12.613623816 +0000 UTC m=+0.047386780 container create 55916cd0a69dc07af9d16de5c9afdb86b7d0fe881080057550e3c49be4fd83d8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4a54e00b-2ddf-4829-be22-9a556b586781, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_managed=true)
Nov 24 09:57:12 compute-1 systemd[1]: Started libpod-conmon-55916cd0a69dc07af9d16de5c9afdb86b7d0fe881080057550e3c49be4fd83d8.scope.
Nov 24 09:57:12 compute-1 systemd[1]: Started libcrun container.
Nov 24 09:57:12 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cdb21f40013aa0967f3b24322d4423a75e6cb1995a2ccd777bff1c3622c2fa39/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 24 09:57:12 compute-1 podman[238700]: 2025-11-24 09:57:12.59048204 +0000 UTC m=+0.024245024 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 24 09:57:12 compute-1 podman[238700]: 2025-11-24 09:57:12.707238095 +0000 UTC m=+0.141001089 container init 55916cd0a69dc07af9d16de5c9afdb86b7d0fe881080057550e3c49be4fd83d8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4a54e00b-2ddf-4829-be22-9a556b586781, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.build-date=20251118)
Nov 24 09:57:12 compute-1 podman[238700]: 2025-11-24 09:57:12.713648301 +0000 UTC m=+0.147411265 container start 55916cd0a69dc07af9d16de5c9afdb86b7d0fe881080057550e3c49be4fd83d8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4a54e00b-2ddf-4829-be22-9a556b586781, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3)
Nov 24 09:57:12 compute-1 neutron-haproxy-ovnmeta-4a54e00b-2ddf-4829-be22-9a556b586781[238715]: [NOTICE]   (238719) : New worker (238721) forked
Nov 24 09:57:12 compute-1 neutron-haproxy-ovnmeta-4a54e00b-2ddf-4829-be22-9a556b586781[238715]: [NOTICE]   (238719) : Loading success.
Nov 24 09:57:12 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:57:12 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:57:12 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:57:12.764 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:57:12 compute-1 nova_compute[230010]: 2025-11-24 09:57:12.879 230014 DEBUG nova.compute.manager [None req-7781f489-90a3-45f2-98f2-bf7ccb5c1239 - - - - - -] [instance: 9558b085-fcfb-4cae-87bc-2840f81734fc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 24 09:57:12 compute-1 nova_compute[230010]: 2025-11-24 09:57:12.882 230014 DEBUG nova.virt.driver [None req-7781f489-90a3-45f2-98f2-bf7ccb5c1239 - - - - - -] Emitting event <LifecycleEvent: 1763978232.5959396, 9558b085-fcfb-4cae-87bc-2840f81734fc => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 24 09:57:12 compute-1 nova_compute[230010]: 2025-11-24 09:57:12.882 230014 INFO nova.compute.manager [None req-7781f489-90a3-45f2-98f2-bf7ccb5c1239 - - - - - -] [instance: 9558b085-fcfb-4cae-87bc-2840f81734fc] VM Paused (Lifecycle Event)
Nov 24 09:57:12 compute-1 nova_compute[230010]: 2025-11-24 09:57:12.899 230014 DEBUG nova.compute.manager [None req-7781f489-90a3-45f2-98f2-bf7ccb5c1239 - - - - - -] [instance: 9558b085-fcfb-4cae-87bc-2840f81734fc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 24 09:57:12 compute-1 nova_compute[230010]: 2025-11-24 09:57:12.903 230014 DEBUG nova.compute.manager [None req-7781f489-90a3-45f2-98f2-bf7ccb5c1239 - - - - - -] [instance: 9558b085-fcfb-4cae-87bc-2840f81734fc] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
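
[Annotation] The numeric states in "current DB power_state: 0, VM power_state: 3" are nova.compute.power_state constants; for reference (values as defined in that module):

    # nova.compute.power_state constants referenced by the
    # "Synchronizing instance power state" lines.
    NOSTATE   = 0x00  # DB power_state above: row not yet updated
    RUNNING   = 0x01  # reported after the "Resumed" event below
    PAUSED    = 0x03  # VM power_state above: guest is created paused
    SHUTDOWN  = 0x04
    CRASHED   = 0x06
    SUSPENDED = 0x07
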
Nov 24 09:57:12 compute-1 nova_compute[230010]: 2025-11-24 09:57:12.911 230014 DEBUG nova.compute.manager [req-cb40eb7e-94b4-4586-be58-b8bc6878f6ce req-5d324aab-4309-403b-8fc0-166684ef7d39 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 9558b085-fcfb-4cae-87bc-2840f81734fc] Received event network-vif-plugged-f43553d8-3872-4217-8259-57949e64eab2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 24 09:57:12 compute-1 nova_compute[230010]: 2025-11-24 09:57:12.911 230014 DEBUG oslo_concurrency.lockutils [req-cb40eb7e-94b4-4586-be58-b8bc6878f6ce req-5d324aab-4309-403b-8fc0-166684ef7d39 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Acquiring lock "9558b085-fcfb-4cae-87bc-2840f81734fc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 09:57:12 compute-1 nova_compute[230010]: 2025-11-24 09:57:12.912 230014 DEBUG oslo_concurrency.lockutils [req-cb40eb7e-94b4-4586-be58-b8bc6878f6ce req-5d324aab-4309-403b-8fc0-166684ef7d39 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Lock "9558b085-fcfb-4cae-87bc-2840f81734fc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 09:57:12 compute-1 nova_compute[230010]: 2025-11-24 09:57:12.912 230014 DEBUG oslo_concurrency.lockutils [req-cb40eb7e-94b4-4586-be58-b8bc6878f6ce req-5d324aab-4309-403b-8fc0-166684ef7d39 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Lock "9558b085-fcfb-4cae-87bc-2840f81734fc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 09:57:12 compute-1 nova_compute[230010]: 2025-11-24 09:57:12.912 230014 DEBUG nova.compute.manager [req-cb40eb7e-94b4-4586-be58-b8bc6878f6ce req-5d324aab-4309-403b-8fc0-166684ef7d39 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 9558b085-fcfb-4cae-87bc-2840f81734fc] Processing event network-vif-plugged-f43553d8-3872-4217-8259-57949e64eab2 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 24 09:57:12 compute-1 nova_compute[230010]: 2025-11-24 09:57:12.913 230014 DEBUG nova.compute.manager [None req-b6e43b8b-33fa-493d-9a63-307e139453a0 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 9558b085-fcfb-4cae-87bc-2840f81734fc] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 24 09:57:12 compute-1 nova_compute[230010]: 2025-11-24 09:57:12.917 230014 DEBUG nova.virt.libvirt.driver [None req-b6e43b8b-33fa-493d-9a63-307e139453a0 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 9558b085-fcfb-4cae-87bc-2840f81734fc] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 24 09:57:12 compute-1 nova_compute[230010]: 2025-11-24 09:57:12.920 230014 INFO nova.virt.libvirt.driver [-] [instance: 9558b085-fcfb-4cae-87bc-2840f81734fc] Instance spawned successfully.
Nov 24 09:57:12 compute-1 nova_compute[230010]: 2025-11-24 09:57:12.920 230014 DEBUG nova.virt.libvirt.driver [None req-b6e43b8b-33fa-493d-9a63-307e139453a0 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 9558b085-fcfb-4cae-87bc-2840f81734fc] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 24 09:57:12 compute-1 nova_compute[230010]: 2025-11-24 09:57:12.926 230014 INFO nova.compute.manager [None req-7781f489-90a3-45f2-98f2-bf7ccb5c1239 - - - - - -] [instance: 9558b085-fcfb-4cae-87bc-2840f81734fc] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 24 09:57:12 compute-1 nova_compute[230010]: 2025-11-24 09:57:12.927 230014 DEBUG nova.virt.driver [None req-7781f489-90a3-45f2-98f2-bf7ccb5c1239 - - - - - -] Emitting event <LifecycleEvent: 1763978232.9161274, 9558b085-fcfb-4cae-87bc-2840f81734fc => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 24 09:57:12 compute-1 nova_compute[230010]: 2025-11-24 09:57:12.927 230014 INFO nova.compute.manager [None req-7781f489-90a3-45f2-98f2-bf7ccb5c1239 - - - - - -] [instance: 9558b085-fcfb-4cae-87bc-2840f81734fc] VM Resumed (Lifecycle Event)
Nov 24 09:57:12 compute-1 nova_compute[230010]: 2025-11-24 09:57:12.941 230014 DEBUG nova.virt.libvirt.driver [None req-b6e43b8b-33fa-493d-9a63-307e139453a0 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 9558b085-fcfb-4cae-87bc-2840f81734fc] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 24 09:57:12 compute-1 nova_compute[230010]: 2025-11-24 09:57:12.941 230014 DEBUG nova.virt.libvirt.driver [None req-b6e43b8b-33fa-493d-9a63-307e139453a0 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 9558b085-fcfb-4cae-87bc-2840f81734fc] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 24 09:57:12 compute-1 nova_compute[230010]: 2025-11-24 09:57:12.942 230014 DEBUG nova.virt.libvirt.driver [None req-b6e43b8b-33fa-493d-9a63-307e139453a0 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 9558b085-fcfb-4cae-87bc-2840f81734fc] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 24 09:57:12 compute-1 nova_compute[230010]: 2025-11-24 09:57:12.942 230014 DEBUG nova.virt.libvirt.driver [None req-b6e43b8b-33fa-493d-9a63-307e139453a0 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 9558b085-fcfb-4cae-87bc-2840f81734fc] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 24 09:57:12 compute-1 nova_compute[230010]: 2025-11-24 09:57:12.943 230014 DEBUG nova.virt.libvirt.driver [None req-b6e43b8b-33fa-493d-9a63-307e139453a0 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 9558b085-fcfb-4cae-87bc-2840f81734fc] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 24 09:57:12 compute-1 nova_compute[230010]: 2025-11-24 09:57:12.943 230014 DEBUG nova.virt.libvirt.driver [None req-b6e43b8b-33fa-493d-9a63-307e139453a0 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 9558b085-fcfb-4cae-87bc-2840f81734fc] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 24 09:57:12 compute-1 nova_compute[230010]: 2025-11-24 09:57:12.950 230014 DEBUG nova.compute.manager [None req-7781f489-90a3-45f2-98f2-bf7ccb5c1239 - - - - - -] [instance: 9558b085-fcfb-4cae-87bc-2840f81734fc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 24 09:57:12 compute-1 nova_compute[230010]: 2025-11-24 09:57:12.953 230014 DEBUG nova.compute.manager [None req-7781f489-90a3-45f2-98f2-bf7ccb5c1239 - - - - - -] [instance: 9558b085-fcfb-4cae-87bc-2840f81734fc] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 24 09:57:12 compute-1 nova_compute[230010]: 2025-11-24 09:57:12.982 230014 INFO nova.compute.manager [None req-7781f489-90a3-45f2-98f2-bf7ccb5c1239 - - - - - -] [instance: 9558b085-fcfb-4cae-87bc-2840f81734fc] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 24 09:57:13 compute-1 nova_compute[230010]: 2025-11-24 09:57:13.078 230014 INFO nova.compute.manager [None req-b6e43b8b-33fa-493d-9a63-307e139453a0 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 9558b085-fcfb-4cae-87bc-2840f81734fc] Took 9.71 seconds to spawn the instance on the hypervisor.
Nov 24 09:57:13 compute-1 nova_compute[230010]: 2025-11-24 09:57:13.078 230014 DEBUG nova.compute.manager [None req-b6e43b8b-33fa-493d-9a63-307e139453a0 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 9558b085-fcfb-4cae-87bc-2840f81734fc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 24 09:57:13 compute-1 nova_compute[230010]: 2025-11-24 09:57:13.191 230014 INFO nova.compute.manager [None req-b6e43b8b-33fa-493d-9a63-307e139453a0 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 9558b085-fcfb-4cae-87bc-2840f81734fc] Took 10.63 seconds to build instance.
Nov 24 09:57:13 compute-1 nova_compute[230010]: 2025-11-24 09:57:13.207 230014 DEBUG oslo_concurrency.lockutils [None req-b6e43b8b-33fa-493d-9a63-307e139453a0 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lock "9558b085-fcfb-4cae-87bc-2840f81734fc" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.704s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 09:57:13 compute-1 nova_compute[230010]: 2025-11-24 09:57:13.774 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 09:57:14 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:57:14 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:57:14 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:57:14.193 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:57:14 compute-1 ceph-mon[80009]: pgmap v911: 353 pgs: 353 active+clean; 167 MiB data, 324 MiB used, 60 GiB / 60 GiB avail; 24 KiB/s rd, 2.3 MiB/s wr, 38 op/s
Nov 24 09:57:14 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:57:14 compute-1 nova_compute[230010]: 2025-11-24 09:57:14.765 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 09:57:14 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:57:14 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:57:14 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:57:14.767 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:57:15 compute-1 nova_compute[230010]: 2025-11-24 09:57:15.026 230014 DEBUG nova.compute.manager [req-5fcc3e46-a99d-4916-af34-b810bf7f28a8 req-dfd22fe3-b300-4f62-8276-18acb303d475 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 9558b085-fcfb-4cae-87bc-2840f81734fc] Received event network-vif-plugged-f43553d8-3872-4217-8259-57949e64eab2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 24 09:57:15 compute-1 nova_compute[230010]: 2025-11-24 09:57:15.026 230014 DEBUG oslo_concurrency.lockutils [req-5fcc3e46-a99d-4916-af34-b810bf7f28a8 req-dfd22fe3-b300-4f62-8276-18acb303d475 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Acquiring lock "9558b085-fcfb-4cae-87bc-2840f81734fc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 09:57:15 compute-1 nova_compute[230010]: 2025-11-24 09:57:15.026 230014 DEBUG oslo_concurrency.lockutils [req-5fcc3e46-a99d-4916-af34-b810bf7f28a8 req-dfd22fe3-b300-4f62-8276-18acb303d475 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Lock "9558b085-fcfb-4cae-87bc-2840f81734fc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 09:57:15 compute-1 nova_compute[230010]: 2025-11-24 09:57:15.027 230014 DEBUG oslo_concurrency.lockutils [req-5fcc3e46-a99d-4916-af34-b810bf7f28a8 req-dfd22fe3-b300-4f62-8276-18acb303d475 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Lock "9558b085-fcfb-4cae-87bc-2840f81734fc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 09:57:15 compute-1 nova_compute[230010]: 2025-11-24 09:57:15.027 230014 DEBUG nova.compute.manager [req-5fcc3e46-a99d-4916-af34-b810bf7f28a8 req-dfd22fe3-b300-4f62-8276-18acb303d475 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 9558b085-fcfb-4cae-87bc-2840f81734fc] No waiting events found dispatching network-vif-plugged-f43553d8-3872-4217-8259-57949e64eab2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 24 09:57:15 compute-1 nova_compute[230010]: 2025-11-24 09:57:15.027 230014 WARNING nova.compute.manager [req-5fcc3e46-a99d-4916-af34-b810bf7f28a8 req-dfd22fe3-b300-4f62-8276-18acb303d475 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 9558b085-fcfb-4cae-87bc-2840f81734fc] Received unexpected event network-vif-plugged-f43553d8-3872-4217-8259-57949e64eab2 for instance with vm_state active and task_state None.
Nov 24 09:57:15 compute-1 podman[238731]: 2025-11-24 09:57:15.340323356 +0000 UTC m=+0.077010594 container health_status 6794d9d2f16f865643977633ea2bfec0506af6c4f15262c278b71a4c0754ea0b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 24 09:57:15 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 09:57:15 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:57:15 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 24 09:57:15 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 24 09:57:15 compute-1 sudo[238751]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 24 09:57:15 compute-1 sudo[238751]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:57:15 compute-1 sudo[238751]: pam_unix(sudo:session): session closed for user root
Nov 24 09:57:15 compute-1 nova_compute[230010]: 2025-11-24 09:57:15.764 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 09:57:15 compute-1 nova_compute[230010]: 2025-11-24 09:57:15.765 230014 DEBUG nova.compute.manager [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 24 09:57:15 compute-1 nova_compute[230010]: 2025-11-24 09:57:15.814 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:57:15 compute-1 nova_compute[230010]: 2025-11-24 09:57:15.837 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:57:16 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:57:16 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:57:16 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:57:16.195 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:57:16 compute-1 ceph-mon[80009]: pgmap v912: 353 pgs: 353 active+clean; 167 MiB data, 324 MiB used, 60 GiB / 60 GiB avail; 24 KiB/s rd, 2.3 MiB/s wr, 38 op/s
Nov 24 09:57:16 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:57:16 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:57:16 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:57:16 compute-1 nova_compute[230010]: 2025-11-24 09:57:16.765 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 09:57:16 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:57:16 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:57:16 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:57:16.770 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:57:17 compute-1 nova_compute[230010]: 2025-11-24 09:57:17.496 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:57:17 compute-1 ovn_controller[132966]: 2025-11-24T09:57:17Z|00061|binding|INFO|Releasing lport 825c51a9-1ab7-4d33-9d7f-c9278b05a734 from this chassis (sb_readonly=0)
Nov 24 09:57:17 compute-1 NetworkManager[48870]: <info>  [1763978237.4997] manager: (patch-br-int-to-provnet-aec09a4d-39ae-42d2-80ba-0cd5b53fed5d): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/45)
Nov 24 09:57:17 compute-1 NetworkManager[48870]: <info>  [1763978237.5007] manager: (patch-provnet-aec09a4d-39ae-42d2-80ba-0cd5b53fed5d-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/46)
Nov 24 09:57:17 compute-1 nova_compute[230010]: 2025-11-24 09:57:17.527 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:57:17 compute-1 ovn_controller[132966]: 2025-11-24T09:57:17Z|00062|binding|INFO|Releasing lport 825c51a9-1ab7-4d33-9d7f-c9278b05a734 from this chassis (sb_readonly=0)
Nov 24 09:57:17 compute-1 nova_compute[230010]: 2025-11-24 09:57:17.531 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:57:17 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:57:17.694 142336 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=7, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '86:13:51', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '4e:f0:a8:6f:5e:1b'}, ipsec=False) old=SB_Global(nb_cfg=6) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 24 09:57:17 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:57:17.695 142336 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 24 09:57:17 compute-1 nova_compute[230010]: 2025-11-24 09:57:17.696 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:57:17 compute-1 nova_compute[230010]: 2025-11-24 09:57:17.764 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 09:57:17 compute-1 nova_compute[230010]: 2025-11-24 09:57:17.789 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 09:57:17 compute-1 nova_compute[230010]: 2025-11-24 09:57:17.790 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 09:57:17 compute-1 nova_compute[230010]: 2025-11-24 09:57:17.790 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 09:57:17 compute-1 nova_compute[230010]: 2025-11-24 09:57:17.791 230014 DEBUG nova.compute.resource_tracker [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 24 09:57:17 compute-1 nova_compute[230010]: 2025-11-24 09:57:17.791 230014 DEBUG oslo_concurrency.processutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 09:57:17 compute-1 nova_compute[230010]: 2025-11-24 09:57:17.846 230014 DEBUG nova.compute.manager [req-72a3d723-7f40-4da5-bac9-336921243068 req-47187a4d-8e7b-4cf1-8a44-6198243b3467 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 9558b085-fcfb-4cae-87bc-2840f81734fc] Received event network-changed-f43553d8-3872-4217-8259-57949e64eab2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 24 09:57:17 compute-1 nova_compute[230010]: 2025-11-24 09:57:17.847 230014 DEBUG nova.compute.manager [req-72a3d723-7f40-4da5-bac9-336921243068 req-47187a4d-8e7b-4cf1-8a44-6198243b3467 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 9558b085-fcfb-4cae-87bc-2840f81734fc] Refreshing instance network info cache due to event network-changed-f43553d8-3872-4217-8259-57949e64eab2. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 24 09:57:17 compute-1 nova_compute[230010]: 2025-11-24 09:57:17.847 230014 DEBUG oslo_concurrency.lockutils [req-72a3d723-7f40-4da5-bac9-336921243068 req-47187a4d-8e7b-4cf1-8a44-6198243b3467 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Acquiring lock "refresh_cache-9558b085-fcfb-4cae-87bc-2840f81734fc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 24 09:57:17 compute-1 nova_compute[230010]: 2025-11-24 09:57:17.847 230014 DEBUG oslo_concurrency.lockutils [req-72a3d723-7f40-4da5-bac9-336921243068 req-47187a4d-8e7b-4cf1-8a44-6198243b3467 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Acquired lock "refresh_cache-9558b085-fcfb-4cae-87bc-2840f81734fc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 24 09:57:17 compute-1 nova_compute[230010]: 2025-11-24 09:57:17.848 230014 DEBUG nova.network.neutron [req-72a3d723-7f40-4da5-bac9-336921243068 req-47187a4d-8e7b-4cf1-8a44-6198243b3467 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 9558b085-fcfb-4cae-87bc-2840f81734fc] Refreshing network info cache for port f43553d8-3872-4217-8259-57949e64eab2 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 24 09:57:18 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:57:18 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:57:18 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:57:18.197 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:57:18 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 24 09:57:18 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1778182928' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 09:57:18 compute-1 nova_compute[230010]: 2025-11-24 09:57:18.231 230014 DEBUG oslo_concurrency.processutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.440s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
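
[Annotation] During update_available_resource the resource tracker sizes Ceph-backed storage by shelling out to ceph df, as logged above. A hedged sketch of the same call; the top-level JSON keys shown ('stats', 'total_bytes', 'total_avail_bytes') are what recent Ceph releases emit, so verify against your cluster:

    import json, subprocess

    out = subprocess.check_output([
        'ceph', 'df', '--format=json',
        '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf',
    ])
    stats = json.loads(out)['stats']
    print(stats['total_bytes'], stats['total_avail_bytes'])
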
Nov 24 09:57:18 compute-1 nova_compute[230010]: 2025-11-24 09:57:18.301 230014 DEBUG nova.virt.libvirt.driver [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] skipping disk for instance-00000005 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 24 09:57:18 compute-1 nova_compute[230010]: 2025-11-24 09:57:18.303 230014 DEBUG nova.virt.libvirt.driver [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] skipping disk for instance-00000005 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 24 09:57:18 compute-1 ceph-mon[80009]: pgmap v913: 353 pgs: 353 active+clean; 167 MiB data, 324 MiB used, 60 GiB / 60 GiB avail; 2.5 MiB/s rd, 16 KiB/s wr, 96 op/s
Nov 24 09:57:18 compute-1 ceph-mon[80009]: from='client.? 192.168.122.101:0/1778182928' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 09:57:18 compute-1 nova_compute[230010]: 2025-11-24 09:57:18.496 230014 WARNING nova.virt.libvirt.driver [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 24 09:57:18 compute-1 nova_compute[230010]: 2025-11-24 09:57:18.497 230014 DEBUG nova.compute.resource_tracker [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4792MB free_disk=59.92180252075195GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 24 09:57:18 compute-1 nova_compute[230010]: 2025-11-24 09:57:18.498 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 09:57:18 compute-1 nova_compute[230010]: 2025-11-24 09:57:18.499 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 09:57:18 compute-1 nova_compute[230010]: 2025-11-24 09:57:18.689 230014 DEBUG nova.compute.resource_tracker [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Instance 9558b085-fcfb-4cae-87bc-2840f81734fc actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 24 09:57:18 compute-1 nova_compute[230010]: 2025-11-24 09:57:18.690 230014 DEBUG nova.compute.resource_tracker [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 24 09:57:18 compute-1 nova_compute[230010]: 2025-11-24 09:57:18.691 230014 DEBUG nova.compute.resource_tracker [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 24 09:57:18 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:57:18.697 142336 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=803b139a-7fca-4549-8597-645cf677225d, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '7'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 24 09:57:18 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:57:18 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:57:18 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:57:18.772 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:57:19 compute-1 nova_compute[230010]: 2025-11-24 09:57:19.179 230014 DEBUG oslo_concurrency.processutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 09:57:19 compute-1 ceph-mon[80009]: from='client.? 192.168.122.102:0/1551701770' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 09:57:19 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:57:19 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 24 09:57:19 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/65500581' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 09:57:19 compute-1 nova_compute[230010]: 2025-11-24 09:57:19.664 230014 DEBUG oslo_concurrency.processutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.486s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 09:57:19 compute-1 nova_compute[230010]: 2025-11-24 09:57:19.672 230014 DEBUG nova.compute.provider_tree [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Inventory has not changed in ProviderTree for provider: 1b7b0f22-dba8-42a8-9de3-763c9152946e update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 24 09:57:19 compute-1 nova_compute[230010]: 2025-11-24 09:57:19.695 230014 DEBUG nova.scheduler.client.report [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Inventory has not changed for provider 1b7b0f22-dba8-42a8-9de3-763c9152946e based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 24 09:57:19 compute-1 nova_compute[230010]: 2025-11-24 09:57:19.719 230014 DEBUG nova.compute.resource_tracker [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 24 09:57:19 compute-1 nova_compute[230010]: 2025-11-24 09:57:19.720 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.221s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 09:57:19 compute-1 nova_compute[230010]: 2025-11-24 09:57:19.765 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 09:57:19 compute-1 nova_compute[230010]: 2025-11-24 09:57:19.782 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 09:57:19 compute-1 nova_compute[230010]: 2025-11-24 09:57:19.783 230014 DEBUG nova.compute.manager [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 24 09:57:19 compute-1 nova_compute[230010]: 2025-11-24 09:57:19.783 230014 DEBUG nova.compute.manager [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 24 09:57:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:57:20.058 142336 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 09:57:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:57:20.059 142336 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 09:57:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:57:20.059 142336 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 09:57:20 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:57:20 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:57:20 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:57:20.200 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:57:20 compute-1 nova_compute[230010]: 2025-11-24 09:57:20.300 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Acquiring lock "refresh_cache-9558b085-fcfb-4cae-87bc-2840f81734fc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 24 09:57:20 compute-1 ceph-mon[80009]: pgmap v914: 353 pgs: 353 active+clean; 167 MiB data, 324 MiB used, 60 GiB / 60 GiB avail; 2.5 MiB/s rd, 16 KiB/s wr, 96 op/s
Nov 24 09:57:20 compute-1 ceph-mon[80009]: from='client.? 192.168.122.101:0/65500581' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 09:57:20 compute-1 ceph-mon[80009]: from='client.? 192.168.122.102:0/2192205247' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 09:57:20 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:57:20 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:57:20 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:57:20.774 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:57:20 compute-1 nova_compute[230010]: 2025-11-24 09:57:20.817 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:57:20 compute-1 nova_compute[230010]: 2025-11-24 09:57:20.840 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:57:21 compute-1 nova_compute[230010]: 2025-11-24 09:57:21.254 230014 DEBUG nova.network.neutron [req-72a3d723-7f40-4da5-bac9-336921243068 req-47187a4d-8e7b-4cf1-8a44-6198243b3467 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 9558b085-fcfb-4cae-87bc-2840f81734fc] Updated VIF entry in instance network info cache for port f43553d8-3872-4217-8259-57949e64eab2. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 24 09:57:21 compute-1 nova_compute[230010]: 2025-11-24 09:57:21.255 230014 DEBUG nova.network.neutron [req-72a3d723-7f40-4da5-bac9-336921243068 req-47187a4d-8e7b-4cf1-8a44-6198243b3467 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 9558b085-fcfb-4cae-87bc-2840f81734fc] Updating instance_info_cache with network_info: [{"id": "f43553d8-3872-4217-8259-57949e64eab2", "address": "fa:16:3e:58:35:61", "network": {"id": "4a54e00b-2ddf-4829-be22-9a556b586781", "bridge": "br-int", "label": "tempest-network-smoke--280510625", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.199", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94d069fc040647d5a6e54894eec915fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf43553d8-38", "ovs_interfaceid": "f43553d8-3872-4217-8259-57949e64eab2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 24 09:57:21 compute-1 nova_compute[230010]: 2025-11-24 09:57:21.295 230014 DEBUG oslo_concurrency.lockutils [req-72a3d723-7f40-4da5-bac9-336921243068 req-47187a4d-8e7b-4cf1-8a44-6198243b3467 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Releasing lock "refresh_cache-9558b085-fcfb-4cae-87bc-2840f81734fc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 24 09:57:21 compute-1 nova_compute[230010]: 2025-11-24 09:57:21.296 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Acquired lock "refresh_cache-9558b085-fcfb-4cae-87bc-2840f81734fc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 24 09:57:21 compute-1 nova_compute[230010]: 2025-11-24 09:57:21.296 230014 DEBUG nova.network.neutron [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] [instance: 9558b085-fcfb-4cae-87bc-2840f81734fc] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 24 09:57:21 compute-1 nova_compute[230010]: 2025-11-24 09:57:21.297 230014 DEBUG nova.objects.instance [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 9558b085-fcfb-4cae-87bc-2840f81734fc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 24 09:57:22 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:57:22 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:57:22 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:57:22.202 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:57:22 compute-1 ceph-mon[80009]: pgmap v915: 353 pgs: 353 active+clean; 167 MiB data, 324 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 15 KiB/s wr, 89 op/s
Nov 24 09:57:22 compute-1 ceph-mon[80009]: from='client.? 192.168.122.100:0/1259732043' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 09:57:22 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:57:22 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:57:22 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:57:22.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:57:23 compute-1 ceph-mon[80009]: from='client.? 192.168.122.100:0/3264970087' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 09:57:23 compute-1 nova_compute[230010]: 2025-11-24 09:57:23.451 230014 DEBUG nova.network.neutron [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] [instance: 9558b085-fcfb-4cae-87bc-2840f81734fc] Updating instance_info_cache with network_info: [{"id": "f43553d8-3872-4217-8259-57949e64eab2", "address": "fa:16:3e:58:35:61", "network": {"id": "4a54e00b-2ddf-4829-be22-9a556b586781", "bridge": "br-int", "label": "tempest-network-smoke--280510625", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.199", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94d069fc040647d5a6e54894eec915fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf43553d8-38", "ovs_interfaceid": "f43553d8-3872-4217-8259-57949e64eab2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 24 09:57:23 compute-1 nova_compute[230010]: 2025-11-24 09:57:23.463 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Releasing lock "refresh_cache-9558b085-fcfb-4cae-87bc-2840f81734fc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 24 09:57:23 compute-1 nova_compute[230010]: 2025-11-24 09:57:23.463 230014 DEBUG nova.compute.manager [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] [instance: 9558b085-fcfb-4cae-87bc-2840f81734fc] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 24 09:57:23 compute-1 nova_compute[230010]: 2025-11-24 09:57:23.464 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 09:57:23 compute-1 nova_compute[230010]: 2025-11-24 09:57:23.464 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 09:57:23 compute-1 nova_compute[230010]: 2025-11-24 09:57:23.464 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 09:57:23 compute-1 nova_compute[230010]: 2025-11-24 09:57:23.465 230014 DEBUG nova.compute.manager [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Nov 24 09:57:23 compute-1 nova_compute[230010]: 2025-11-24 09:57:23.475 230014 DEBUG nova.compute.manager [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Nov 24 09:57:23 compute-1 nova_compute[230010]: 2025-11-24 09:57:23.476 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 09:57:23 compute-1 nova_compute[230010]: 2025-11-24 09:57:23.476 230014 DEBUG nova.compute.manager [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Nov 24 09:57:24 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:57:24 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:57:24 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:57:24.205 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:57:24 compute-1 ceph-mon[80009]: pgmap v916: 353 pgs: 353 active+clean; 167 MiB data, 324 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 75 op/s
Nov 24 09:57:24 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:57:24 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:57:24 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:57:24 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:57:24.781 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:57:25 compute-1 ovn_controller[132966]: 2025-11-24T09:57:25Z|00008|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:58:35:61 10.100.0.9
Nov 24 09:57:25 compute-1 ovn_controller[132966]: 2025-11-24T09:57:25Z|00009|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:58:35:61 10.100.0.9
Nov 24 09:57:25 compute-1 nova_compute[230010]: 2025-11-24 09:57:25.842 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 24 09:57:25 compute-1 nova_compute[230010]: 2025-11-24 09:57:25.845 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 24 09:57:25 compute-1 nova_compute[230010]: 2025-11-24 09:57:25.845 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Nov 24 09:57:25 compute-1 nova_compute[230010]: 2025-11-24 09:57:25.845 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Nov 24 09:57:25 compute-1 nova_compute[230010]: 2025-11-24 09:57:25.867 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:57:25 compute-1 nova_compute[230010]: 2025-11-24 09:57:25.868 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Nov 24 09:57:26 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:57:26 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:57:26 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:57:26.208 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:57:26 compute-1 ceph-mon[80009]: pgmap v917: 353 pgs: 353 active+clean; 167 MiB data, 324 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 73 op/s
Nov 24 09:57:26 compute-1 nova_compute[230010]: 2025-11-24 09:57:26.478 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 09:57:26 compute-1 sudo[238828]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 09:57:26 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:57:26 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:57:26 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:57:26.783 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:57:26 compute-1 sudo[238828]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:57:26 compute-1 sudo[238828]: pam_unix(sudo:session): session closed for user root
Nov 24 09:57:28 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:57:28 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:57:28 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:57:28.212 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:57:28 compute-1 ceph-mon[80009]: pgmap v918: 353 pgs: 353 active+clean; 200 MiB data, 348 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 136 op/s
Nov 24 09:57:28 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:57:28 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:57:28 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:57:28.787 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:57:29 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:57:30 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:57:30 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:57:30 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:57:30.215 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:57:30 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 09:57:30 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:57:30 compute-1 ceph-mon[80009]: pgmap v919: 353 pgs: 353 active+clean; 200 MiB data, 348 MiB used, 60 GiB / 60 GiB avail; 306 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Nov 24 09:57:30 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:57:30 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:57:30 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:57:30.789 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:57:30 compute-1 nova_compute[230010]: 2025-11-24 09:57:30.868 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 24 09:57:30 compute-1 nova_compute[230010]: 2025-11-24 09:57:30.870 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:57:30 compute-1 nova_compute[230010]: 2025-11-24 09:57:30.870 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Nov 24 09:57:30 compute-1 nova_compute[230010]: 2025-11-24 09:57:30.870 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Nov 24 09:57:30 compute-1 nova_compute[230010]: 2025-11-24 09:57:30.870 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Nov 24 09:57:30 compute-1 nova_compute[230010]: 2025-11-24 09:57:30.872 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:57:31 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:57:32 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:57:32 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:57:32 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:57:32.217 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:57:32 compute-1 nova_compute[230010]: 2025-11-24 09:57:32.310 230014 INFO nova.compute.manager [None req-267457df-5643-4c7d-a289-82a0bd31e250 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 9558b085-fcfb-4cae-87bc-2840f81734fc] Get console output
Nov 24 09:57:32 compute-1 nova_compute[230010]: 2025-11-24 09:57:32.316 236028 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Nov 24 09:57:32 compute-1 ceph-mon[80009]: pgmap v920: 353 pgs: 353 active+clean; 200 MiB data, 348 MiB used, 60 GiB / 60 GiB avail; 306 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Nov 24 09:57:32 compute-1 nova_compute[230010]: 2025-11-24 09:57:32.562 230014 DEBUG oslo_concurrency.lockutils [None req-52b7c15e-27ac-4807-a58c-45eab80a1709 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Acquiring lock "9558b085-fcfb-4cae-87bc-2840f81734fc" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 09:57:32 compute-1 nova_compute[230010]: 2025-11-24 09:57:32.562 230014 DEBUG oslo_concurrency.lockutils [None req-52b7c15e-27ac-4807-a58c-45eab80a1709 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lock "9558b085-fcfb-4cae-87bc-2840f81734fc" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 09:57:32 compute-1 nova_compute[230010]: 2025-11-24 09:57:32.562 230014 DEBUG oslo_concurrency.lockutils [None req-52b7c15e-27ac-4807-a58c-45eab80a1709 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Acquiring lock "9558b085-fcfb-4cae-87bc-2840f81734fc-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 09:57:32 compute-1 nova_compute[230010]: 2025-11-24 09:57:32.563 230014 DEBUG oslo_concurrency.lockutils [None req-52b7c15e-27ac-4807-a58c-45eab80a1709 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lock "9558b085-fcfb-4cae-87bc-2840f81734fc-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 09:57:32 compute-1 nova_compute[230010]: 2025-11-24 09:57:32.563 230014 DEBUG oslo_concurrency.lockutils [None req-52b7c15e-27ac-4807-a58c-45eab80a1709 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lock "9558b085-fcfb-4cae-87bc-2840f81734fc-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 09:57:32 compute-1 nova_compute[230010]: 2025-11-24 09:57:32.564 230014 INFO nova.compute.manager [None req-52b7c15e-27ac-4807-a58c-45eab80a1709 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 9558b085-fcfb-4cae-87bc-2840f81734fc] Terminating instance
Nov 24 09:57:32 compute-1 nova_compute[230010]: 2025-11-24 09:57:32.565 230014 DEBUG nova.compute.manager [None req-52b7c15e-27ac-4807-a58c-45eab80a1709 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 9558b085-fcfb-4cae-87bc-2840f81734fc] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 24 09:57:32 compute-1 kernel: tapf43553d8-38 (unregistering): left promiscuous mode
Nov 24 09:57:32 compute-1 NetworkManager[48870]: <info>  [1763978252.6208] device (tapf43553d8-38): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 24 09:57:32 compute-1 ovn_controller[132966]: 2025-11-24T09:57:32Z|00063|binding|INFO|Releasing lport f43553d8-3872-4217-8259-57949e64eab2 from this chassis (sb_readonly=0)
Nov 24 09:57:32 compute-1 ovn_controller[132966]: 2025-11-24T09:57:32Z|00064|binding|INFO|Setting lport f43553d8-3872-4217-8259-57949e64eab2 down in Southbound
Nov 24 09:57:32 compute-1 nova_compute[230010]: 2025-11-24 09:57:32.629 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:57:32 compute-1 ovn_controller[132966]: 2025-11-24T09:57:32Z|00065|binding|INFO|Removing iface tapf43553d8-38 ovn-installed in OVS
Nov 24 09:57:32 compute-1 nova_compute[230010]: 2025-11-24 09:57:32.632 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:57:32 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:57:32.637 142336 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:58:35:61 10.100.0.9'], port_security=['fa:16:3e:58:35:61 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '9558b085-fcfb-4cae-87bc-2840f81734fc', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4a54e00b-2ddf-4829-be22-9a556b586781', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '94d069fc040647d5a6e54894eec915fe', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'e9b8d67b-4e9e-4fdc-b23f-05b645f04725', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com', 'neutron:port_fip': '192.168.122.199'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=cefc33a4-ddb4-430f-bd3b-965ffc7d2eca, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5c78678ac0>], logical_port=f43553d8-3872-4217-8259-57949e64eab2) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5c78678ac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 24 09:57:32 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:57:32.638 142336 INFO neutron.agent.ovn.metadata.agent [-] Port f43553d8-3872-4217-8259-57949e64eab2 in datapath 4a54e00b-2ddf-4829-be22-9a556b586781 unbound from our chassis
Nov 24 09:57:32 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:57:32.639 142336 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 4a54e00b-2ddf-4829-be22-9a556b586781, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 24 09:57:32 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:57:32.641 234803 DEBUG oslo.privsep.daemon [-] privsep: reply[35fc0bcf-b5d4-43ce-b91a-5d6c0999a640]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 09:57:32 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:57:32.641 142336 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-4a54e00b-2ddf-4829-be22-9a556b586781 namespace which is not needed anymore
Nov 24 09:57:32 compute-1 nova_compute[230010]: 2025-11-24 09:57:32.659 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:57:32 compute-1 systemd[1]: machine-qemu\x2d3\x2dinstance\x2d00000005.scope: Deactivated successfully.
Nov 24 09:57:32 compute-1 systemd[1]: machine-qemu\x2d3\x2dinstance\x2d00000005.scope: Consumed 13.852s CPU time.
Nov 24 09:57:32 compute-1 systemd-machined[193537]: Machine qemu-3-instance-00000005 terminated.
Nov 24 09:57:32 compute-1 neutron-haproxy-ovnmeta-4a54e00b-2ddf-4829-be22-9a556b586781[238715]: [NOTICE]   (238719) : haproxy version is 2.8.14-c23fe91
Nov 24 09:57:32 compute-1 neutron-haproxy-ovnmeta-4a54e00b-2ddf-4829-be22-9a556b586781[238715]: [NOTICE]   (238719) : path to executable is /usr/sbin/haproxy
Nov 24 09:57:32 compute-1 neutron-haproxy-ovnmeta-4a54e00b-2ddf-4829-be22-9a556b586781[238715]: [WARNING]  (238719) : Exiting Master process...
Nov 24 09:57:32 compute-1 neutron-haproxy-ovnmeta-4a54e00b-2ddf-4829-be22-9a556b586781[238715]: [ALERT]    (238719) : Current worker (238721) exited with code 143 (Terminated)
Nov 24 09:57:32 compute-1 neutron-haproxy-ovnmeta-4a54e00b-2ddf-4829-be22-9a556b586781[238715]: [WARNING]  (238719) : All workers exited. Exiting... (0)
Nov 24 09:57:32 compute-1 systemd[1]: libpod-55916cd0a69dc07af9d16de5c9afdb86b7d0fe881080057550e3c49be4fd83d8.scope: Deactivated successfully.
Nov 24 09:57:32 compute-1 podman[238878]: 2025-11-24 09:57:32.786137162 +0000 UTC m=+0.047402869 container died 55916cd0a69dc07af9d16de5c9afdb86b7d0fe881080057550e3c49be4fd83d8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4a54e00b-2ddf-4829-be22-9a556b586781, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2)
Nov 24 09:57:32 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:57:32 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:57:32 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:57:32.792 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:57:32 compute-1 nova_compute[230010]: 2025-11-24 09:57:32.800 230014 INFO nova.virt.libvirt.driver [-] [instance: 9558b085-fcfb-4cae-87bc-2840f81734fc] Instance destroyed successfully.
Nov 24 09:57:32 compute-1 nova_compute[230010]: 2025-11-24 09:57:32.802 230014 DEBUG nova.objects.instance [None req-52b7c15e-27ac-4807-a58c-45eab80a1709 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lazy-loading 'resources' on Instance uuid 9558b085-fcfb-4cae-87bc-2840f81734fc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 24 09:57:32 compute-1 nova_compute[230010]: 2025-11-24 09:57:32.814 230014 DEBUG nova.virt.libvirt.vif [None req-52b7c15e-27ac-4807-a58c-45eab80a1709 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-24T09:57:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-826525135',display_name='tempest-TestNetworkBasicOps-server-826525135',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-826525135',id=5,image_ref='6ef14bdf-4f04-4400-8040-4409d9d5271e',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBC35slzFXIscG7yI0ldCNK4vlvp0/JkuMYp+G9aKEuW9NB0+nlUoAY9//FD0F8qY2c6aehGz4dqJCwd0w9isq9P1Emwaoz7MA2BbTfYqIAVwl0HpYimM2CBxhvzKgVHsXQ==',key_name='tempest-TestNetworkBasicOps-569358808',keypairs=<?>,launch_index=0,launched_at=2025-11-24T09:57:13Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='94d069fc040647d5a6e54894eec915fe',ramdisk_id='',reservation_id='r-epqclak3',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6ef14bdf-4f04-4400-8040-4409d9d5271e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1844071378',owner_user_name='tempest-TestNetworkBasicOps-1844071378-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-24T09:57:13Z,user_data=None,user_id='43f79ff3105e4372a3c095e8057d4f1f',uuid=9558b085-fcfb-4cae-87bc-2840f81734fc,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "f43553d8-3872-4217-8259-57949e64eab2", "address": "fa:16:3e:58:35:61", "network": {"id": "4a54e00b-2ddf-4829-be22-9a556b586781", "bridge": "br-int", "label": "tempest-network-smoke--280510625", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.199", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94d069fc040647d5a6e54894eec915fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf43553d8-38", "ovs_interfaceid": "f43553d8-3872-4217-8259-57949e64eab2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 24 09:57:32 compute-1 nova_compute[230010]: 2025-11-24 09:57:32.814 230014 DEBUG nova.network.os_vif_util [None req-52b7c15e-27ac-4807-a58c-45eab80a1709 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Converting VIF {"id": "f43553d8-3872-4217-8259-57949e64eab2", "address": "fa:16:3e:58:35:61", "network": {"id": "4a54e00b-2ddf-4829-be22-9a556b586781", "bridge": "br-int", "label": "tempest-network-smoke--280510625", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.199", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94d069fc040647d5a6e54894eec915fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf43553d8-38", "ovs_interfaceid": "f43553d8-3872-4217-8259-57949e64eab2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 24 09:57:32 compute-1 nova_compute[230010]: 2025-11-24 09:57:32.815 230014 DEBUG nova.network.os_vif_util [None req-52b7c15e-27ac-4807-a58c-45eab80a1709 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:58:35:61,bridge_name='br-int',has_traffic_filtering=True,id=f43553d8-3872-4217-8259-57949e64eab2,network=Network(4a54e00b-2ddf-4829-be22-9a556b586781),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf43553d8-38') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 24 09:57:32 compute-1 nova_compute[230010]: 2025-11-24 09:57:32.815 230014 DEBUG os_vif [None req-52b7c15e-27ac-4807-a58c-45eab80a1709 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:58:35:61,bridge_name='br-int',has_traffic_filtering=True,id=f43553d8-3872-4217-8259-57949e64eab2,network=Network(4a54e00b-2ddf-4829-be22-9a556b586781),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf43553d8-38') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 24 09:57:32 compute-1 nova_compute[230010]: 2025-11-24 09:57:32.819 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:57:32 compute-1 nova_compute[230010]: 2025-11-24 09:57:32.819 230014 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf43553d8-38, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 24 09:57:32 compute-1 nova_compute[230010]: 2025-11-24 09:57:32.820 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:57:32 compute-1 nova_compute[230010]: 2025-11-24 09:57:32.822 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:57:32 compute-1 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-55916cd0a69dc07af9d16de5c9afdb86b7d0fe881080057550e3c49be4fd83d8-userdata-shm.mount: Deactivated successfully.
Nov 24 09:57:32 compute-1 nova_compute[230010]: 2025-11-24 09:57:32.825 230014 INFO os_vif [None req-52b7c15e-27ac-4807-a58c-45eab80a1709 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:58:35:61,bridge_name='br-int',has_traffic_filtering=True,id=f43553d8-3872-4217-8259-57949e64eab2,network=Network(4a54e00b-2ddf-4829-be22-9a556b586781),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf43553d8-38')
Nov 24 09:57:32 compute-1 systemd[1]: var-lib-containers-storage-overlay-cdb21f40013aa0967f3b24322d4423a75e6cb1995a2ccd777bff1c3622c2fa39-merged.mount: Deactivated successfully.
Nov 24 09:57:32 compute-1 podman[238878]: 2025-11-24 09:57:32.841158078 +0000 UTC m=+0.102423765 container cleanup 55916cd0a69dc07af9d16de5c9afdb86b7d0fe881080057550e3c49be4fd83d8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4a54e00b-2ddf-4829-be22-9a556b586781, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Nov 24 09:57:32 compute-1 systemd[1]: libpod-conmon-55916cd0a69dc07af9d16de5c9afdb86b7d0fe881080057550e3c49be4fd83d8.scope: Deactivated successfully.
Nov 24 09:57:32 compute-1 podman[238933]: 2025-11-24 09:57:32.905350788 +0000 UTC m=+0.041102517 container remove 55916cd0a69dc07af9d16de5c9afdb86b7d0fe881080057550e3c49be4fd83d8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4a54e00b-2ddf-4829-be22-9a556b586781, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.license=GPLv2)
Nov 24 09:57:32 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:57:32.911 234803 DEBUG oslo.privsep.daemon [-] privsep: reply[454b0868-3e98-4eb3-813d-a36b96988214]: (4, ('Mon Nov 24 09:57:32 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-4a54e00b-2ddf-4829-be22-9a556b586781 (55916cd0a69dc07af9d16de5c9afdb86b7d0fe881080057550e3c49be4fd83d8)\n55916cd0a69dc07af9d16de5c9afdb86b7d0fe881080057550e3c49be4fd83d8\nMon Nov 24 09:57:32 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-4a54e00b-2ddf-4829-be22-9a556b586781 (55916cd0a69dc07af9d16de5c9afdb86b7d0fe881080057550e3c49be4fd83d8)\n55916cd0a69dc07af9d16de5c9afdb86b7d0fe881080057550e3c49be4fd83d8\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 09:57:32 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:57:32.913 234803 DEBUG oslo.privsep.daemon [-] privsep: reply[9f2d118c-f3b0-4fb9-a089-1e259df4dfaf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 09:57:32 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:57:32.914 142336 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4a54e00b-20, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 24 09:57:32 compute-1 nova_compute[230010]: 2025-11-24 09:57:32.916 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:57:32 compute-1 kernel: tap4a54e00b-20: left promiscuous mode
Nov 24 09:57:32 compute-1 nova_compute[230010]: 2025-11-24 09:57:32.928 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:57:32 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:57:32.930 234803 DEBUG oslo.privsep.daemon [-] privsep: reply[abd002cd-1e6d-4621-a994-26bcd449a770]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 09:57:32 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:57:32.946 234803 DEBUG oslo.privsep.daemon [-] privsep: reply[1048c602-e26a-4233-a62e-63badce1a238]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 09:57:32 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:57:32.948 234803 DEBUG oslo.privsep.daemon [-] privsep: reply[65b2ea96-30a3-441c-92e2-f09da9bb2c42]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 09:57:32 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:57:32.965 234803 DEBUG oslo.privsep.daemon [-] privsep: reply[ad3a54b2-6987-478b-a58e-de8976c8900d]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 417556, 'reachable_time': 27452, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 238949, 'error': None, 'target': 'ovnmeta-4a54e00b-2ddf-4829-be22-9a556b586781', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 09:57:32 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:57:32.969 142476 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-4a54e00b-2ddf-4829-be22-9a556b586781 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 24 09:57:32 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:57:32.969 142476 DEBUG oslo.privsep.daemon [-] privsep: reply[e2a8fbca-85de-4927-8975-64c1dd1ca98b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 09:57:32 compute-1 systemd[1]: run-netns-ovnmeta\x2d4a54e00b\x2d2ddf\x2d4829\x2dbe22\x2d9a556b586781.mount: Deactivated successfully.
Nov 24 09:57:33 compute-1 nova_compute[230010]: 2025-11-24 09:57:33.225 230014 INFO nova.virt.libvirt.driver [None req-52b7c15e-27ac-4807-a58c-45eab80a1709 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 9558b085-fcfb-4cae-87bc-2840f81734fc] Deleting instance files /var/lib/nova/instances/9558b085-fcfb-4cae-87bc-2840f81734fc_del
Nov 24 09:57:33 compute-1 nova_compute[230010]: 2025-11-24 09:57:33.226 230014 INFO nova.virt.libvirt.driver [None req-52b7c15e-27ac-4807-a58c-45eab80a1709 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 9558b085-fcfb-4cae-87bc-2840f81734fc] Deletion of /var/lib/nova/instances/9558b085-fcfb-4cae-87bc-2840f81734fc_del complete
Nov 24 09:57:33 compute-1 nova_compute[230010]: 2025-11-24 09:57:33.291 230014 INFO nova.compute.manager [None req-52b7c15e-27ac-4807-a58c-45eab80a1709 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 9558b085-fcfb-4cae-87bc-2840f81734fc] Took 0.73 seconds to destroy the instance on the hypervisor.
Nov 24 09:57:33 compute-1 nova_compute[230010]: 2025-11-24 09:57:33.292 230014 DEBUG oslo.service.loopingcall [None req-52b7c15e-27ac-4807-a58c-45eab80a1709 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 24 09:57:33 compute-1 nova_compute[230010]: 2025-11-24 09:57:33.292 230014 DEBUG nova.compute.manager [-] [instance: 9558b085-fcfb-4cae-87bc-2840f81734fc] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 24 09:57:33 compute-1 nova_compute[230010]: 2025-11-24 09:57:33.292 230014 DEBUG nova.network.neutron [-] [instance: 9558b085-fcfb-4cae-87bc-2840f81734fc] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 24 09:57:33 compute-1 nova_compute[230010]: 2025-11-24 09:57:33.560 230014 DEBUG nova.compute.manager [req-f0f76990-9ff0-4d31-b21e-0bae51786a40 req-4a7cf2c3-f231-4ef0-bcb5-b466ec780f05 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 9558b085-fcfb-4cae-87bc-2840f81734fc] Received event network-vif-unplugged-f43553d8-3872-4217-8259-57949e64eab2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 24 09:57:33 compute-1 nova_compute[230010]: 2025-11-24 09:57:33.561 230014 DEBUG oslo_concurrency.lockutils [req-f0f76990-9ff0-4d31-b21e-0bae51786a40 req-4a7cf2c3-f231-4ef0-bcb5-b466ec780f05 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Acquiring lock "9558b085-fcfb-4cae-87bc-2840f81734fc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 09:57:33 compute-1 nova_compute[230010]: 2025-11-24 09:57:33.562 230014 DEBUG oslo_concurrency.lockutils [req-f0f76990-9ff0-4d31-b21e-0bae51786a40 req-4a7cf2c3-f231-4ef0-bcb5-b466ec780f05 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Lock "9558b085-fcfb-4cae-87bc-2840f81734fc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 09:57:33 compute-1 nova_compute[230010]: 2025-11-24 09:57:33.562 230014 DEBUG oslo_concurrency.lockutils [req-f0f76990-9ff0-4d31-b21e-0bae51786a40 req-4a7cf2c3-f231-4ef0-bcb5-b466ec780f05 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Lock "9558b085-fcfb-4cae-87bc-2840f81734fc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 09:57:33 compute-1 nova_compute[230010]: 2025-11-24 09:57:33.563 230014 DEBUG nova.compute.manager [req-f0f76990-9ff0-4d31-b21e-0bae51786a40 req-4a7cf2c3-f231-4ef0-bcb5-b466ec780f05 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 9558b085-fcfb-4cae-87bc-2840f81734fc] No waiting events found dispatching network-vif-unplugged-f43553d8-3872-4217-8259-57949e64eab2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 24 09:57:33 compute-1 nova_compute[230010]: 2025-11-24 09:57:33.563 230014 DEBUG nova.compute.manager [req-f0f76990-9ff0-4d31-b21e-0bae51786a40 req-4a7cf2c3-f231-4ef0-bcb5-b466ec780f05 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 9558b085-fcfb-4cae-87bc-2840f81734fc] Received event network-vif-unplugged-f43553d8-3872-4217-8259-57949e64eab2 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
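The Acquiring/acquired/released triplets around the event pop are emitted by oslo.concurrency's lock helper; nova guards the per-instance event table with a named lock so externally delivered events and waiters cannot race. A minimal usage sketch of the same primitive:

```python
# The "Acquiring lock ... / Lock ... acquired / released" lines above are
# logged by oslo_concurrency.lockutils; minimal sketch of the pattern.
from oslo_concurrency import lockutils

events = {}

@lockutils.synchronized('9558b085-fcfb-4cae-87bc-2840f81734fc-events')
def pop_event(name):
    # only one thread may mutate the shared event table at a time
    return events.pop(name, None)

pop_event('network-vif-unplugged-f43553d8-3872-4217-8259-57949e64eab2')
```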
Nov 24 09:57:34 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:57:34 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:57:34 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:57:34.220 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
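The anonymous "HEAD / HTTP/1.0" entries that recur every two seconds from 192.168.122.100 and .102 are load-balancer style health probes against radosgw's beast frontend. They can be reproduced with the standard library; the target host and port are assumptions, since the frontend address is not shown in this log:

```python
# Reproduce the health probe behind the beast access-log line above
# (host and port 8080 are assumptions, not taken from this log).
import http.client

conn = http.client.HTTPConnection('compute-1', 8080)
conn.request('HEAD', '/')
resp = conn.getresponse()
print(resp.status)    # radosgw answers 200 with an empty body
conn.close()
```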
Nov 24 09:57:34 compute-1 podman[238952]: 2025-11-24 09:57:34.346315895 +0000 UTC m=+0.074568984 container health_status 16798e42088c21440960ac0ba4f86339f9edab5c078277a1dcf4fb01c220329c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, container_name=multipathd, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3)
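Note that the config_data field in the podman health_status record above is a Python literal (single quotes, bare True), not JSON; if you post-process these records, ast.literal_eval parses it where json.loads would fail. A short sketch:

```python
# config_data in the podman record above is a Python literal, not JSON.
import ast

blob = "{'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'net': 'host', 'privileged': True}"
cfg = ast.literal_eval(blob)           # json.loads() would reject the single quotes
print(cfg['net'], cfg['privileged'])   # -> host True
```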
Nov 24 09:57:34 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:57:34 compute-1 ceph-mon[80009]: pgmap v921: 353 pgs: 353 active+clean; 200 MiB data, 349 MiB used, 60 GiB / 60 GiB avail; 307 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Nov 24 09:57:34 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:57:34 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:57:34 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:57:34.796 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:57:34 compute-1 nova_compute[230010]: 2025-11-24 09:57:34.889 230014 DEBUG nova.network.neutron [-] [instance: 9558b085-fcfb-4cae-87bc-2840f81734fc] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 24 09:57:34 compute-1 nova_compute[230010]: 2025-11-24 09:57:34.911 230014 INFO nova.compute.manager [-] [instance: 9558b085-fcfb-4cae-87bc-2840f81734fc] Took 1.62 seconds to deallocate network for instance.
Nov 24 09:57:34 compute-1 nova_compute[230010]: 2025-11-24 09:57:34.961 230014 DEBUG nova.compute.manager [req-52808065-8a09-495a-aee6-673c221df7cb req-4db2183a-538a-4da4-9e2b-205aa7f43c38 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 9558b085-fcfb-4cae-87bc-2840f81734fc] Received event network-vif-deleted-f43553d8-3872-4217-8259-57949e64eab2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 24 09:57:34 compute-1 nova_compute[230010]: 2025-11-24 09:57:34.987 230014 DEBUG oslo_concurrency.lockutils [None req-52b7c15e-27ac-4807-a58c-45eab80a1709 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 09:57:34 compute-1 nova_compute[230010]: 2025-11-24 09:57:34.987 230014 DEBUG oslo_concurrency.lockutils [None req-52b7c15e-27ac-4807-a58c-45eab80a1709 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 09:57:35 compute-1 nova_compute[230010]: 2025-11-24 09:57:35.318 230014 DEBUG oslo_concurrency.processutils [None req-52b7c15e-27ac-4807-a58c-45eab80a1709 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 09:57:35 compute-1 nova_compute[230010]: 2025-11-24 09:57:35.636 230014 DEBUG nova.compute.manager [req-1527bd92-d70f-41b5-ad20-afbaaaff21d5 req-701e958c-38a3-4309-bdee-f53328c6d99b 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 9558b085-fcfb-4cae-87bc-2840f81734fc] Received event network-vif-plugged-f43553d8-3872-4217-8259-57949e64eab2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 24 09:57:35 compute-1 nova_compute[230010]: 2025-11-24 09:57:35.636 230014 DEBUG oslo_concurrency.lockutils [req-1527bd92-d70f-41b5-ad20-afbaaaff21d5 req-701e958c-38a3-4309-bdee-f53328c6d99b 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Acquiring lock "9558b085-fcfb-4cae-87bc-2840f81734fc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 09:57:35 compute-1 nova_compute[230010]: 2025-11-24 09:57:35.637 230014 DEBUG oslo_concurrency.lockutils [req-1527bd92-d70f-41b5-ad20-afbaaaff21d5 req-701e958c-38a3-4309-bdee-f53328c6d99b 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Lock "9558b085-fcfb-4cae-87bc-2840f81734fc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 09:57:35 compute-1 nova_compute[230010]: 2025-11-24 09:57:35.637 230014 DEBUG oslo_concurrency.lockutils [req-1527bd92-d70f-41b5-ad20-afbaaaff21d5 req-701e958c-38a3-4309-bdee-f53328c6d99b 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Lock "9558b085-fcfb-4cae-87bc-2840f81734fc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 09:57:35 compute-1 nova_compute[230010]: 2025-11-24 09:57:35.637 230014 DEBUG nova.compute.manager [req-1527bd92-d70f-41b5-ad20-afbaaaff21d5 req-701e958c-38a3-4309-bdee-f53328c6d99b 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 9558b085-fcfb-4cae-87bc-2840f81734fc] No waiting events found dispatching network-vif-plugged-f43553d8-3872-4217-8259-57949e64eab2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 24 09:57:35 compute-1 nova_compute[230010]: 2025-11-24 09:57:35.637 230014 WARNING nova.compute.manager [req-1527bd92-d70f-41b5-ad20-afbaaaff21d5 req-701e958c-38a3-4309-bdee-f53328c6d99b 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 9558b085-fcfb-4cae-87bc-2840f81734fc] Received unexpected event network-vif-plugged-f43553d8-3872-4217-8259-57949e64eab2 for instance with vm_state deleted and task_state None.
Nov 24 09:57:35 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 24 09:57:35 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3443165567' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 09:57:35 compute-1 nova_compute[230010]: 2025-11-24 09:57:35.746 230014 DEBUG oslo_concurrency.processutils [None req-52b7c15e-27ac-4807-a58c-45eab80a1709 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.428s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
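ceph df --format=json is how nova's RBD image backend sizes the vms pool; the 0.43 s round trip above also accounts for the audit dispatch lines on the monitor. The same probe with the stdlib instead of oslo_concurrency.processutils (top-level field names as in recent Ceph releases):

```python
# Run the same capacity probe nova executes above and read the totals.
import json
import subprocess

out = subprocess.check_output([
    'ceph', 'df', '--format=json',
    '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf',
])
stats = json.loads(out)
print(stats['stats']['total_bytes'],
      stats['stats']['total_avail_bytes'])   # cluster-wide raw capacity
```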
Nov 24 09:57:35 compute-1 nova_compute[230010]: 2025-11-24 09:57:35.751 230014 DEBUG nova.compute.provider_tree [None req-52b7c15e-27ac-4807-a58c-45eab80a1709 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Inventory has not changed in ProviderTree for provider: 1b7b0f22-dba8-42a8-9de3-763c9152946e update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 24 09:57:35 compute-1 nova_compute[230010]: 2025-11-24 09:57:35.767 230014 DEBUG nova.scheduler.client.report [None req-52b7c15e-27ac-4807-a58c-45eab80a1709 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Inventory has not changed for provider 1b7b0f22-dba8-42a8-9de3-763c9152946e based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
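The inventory dictionary above is what placement turns into schedulable capacity via capacity = (total - reserved) * allocation_ratio, so this host advertises 7168 MB of RAM, 32 vCPUs and 52.2 GB of disk:

```python
# Schedulable capacity implied by the inventory record above.
inventory = {
    'MEMORY_MB': {'total': 7680, 'reserved': 512, 'allocation_ratio': 1.0},
    'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
    'DISK_GB':   {'total': 59,   'reserved': 1,   'allocation_ratio': 0.9},
}
for rc, inv in inventory.items():
    capacity = (inv['total'] - inv['reserved']) * inv['allocation_ratio']
    print(rc, capacity)   # MEMORY_MB 7168.0, VCPU 32.0, DISK_GB 52.2
```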
Nov 24 09:57:35 compute-1 nova_compute[230010]: 2025-11-24 09:57:35.791 230014 DEBUG oslo_concurrency.lockutils [None req-52b7c15e-27ac-4807-a58c-45eab80a1709 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.804s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 09:57:35 compute-1 nova_compute[230010]: 2025-11-24 09:57:35.822 230014 INFO nova.scheduler.client.report [None req-52b7c15e-27ac-4807-a58c-45eab80a1709 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Deleted allocations for instance 9558b085-fcfb-4cae-87bc-2840f81734fc
Nov 24 09:57:35 compute-1 nova_compute[230010]: 2025-11-24 09:57:35.872 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:57:35 compute-1 nova_compute[230010]: 2025-11-24 09:57:35.893 230014 DEBUG oslo_concurrency.lockutils [None req-52b7c15e-27ac-4807-a58c-45eab80a1709 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lock "9558b085-fcfb-4cae-87bc-2840f81734fc" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.331s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
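The whole teardown, from do_terminate_instance taking the lock to releasing it 3.331 s later, shares the request id req-52b7c15e-27ac-4807-a58c-45eab80a1709; filtering a mixed journal like this one on that id is the quickest way to isolate one operation:

```python
# Follow a single nova request id through a mixed journal dump:
#   journalctl -o short | python3 follow_req.py
import sys

REQ = 'req-52b7c15e-27ac-4807-a58c-45eab80a1709'
for line in sys.stdin:
    if REQ in line:
        sys.stdout.write(line)
```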
Nov 24 09:57:36 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:57:36 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:57:36 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:57:36.223 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:57:36 compute-1 ceph-mon[80009]: pgmap v922: 353 pgs: 353 active+clean; 200 MiB data, 349 MiB used, 60 GiB / 60 GiB avail; 306 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Nov 24 09:57:36 compute-1 ceph-mon[80009]: from='client.? 192.168.122.101:0/3443165567' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 09:57:36 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:57:36 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:57:36 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:57:36.798 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:57:37 compute-1 nova_compute[230010]: 2025-11-24 09:57:37.875 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:57:38 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:57:38 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:57:38 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:57:38.226 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:57:38 compute-1 podman[238997]: 2025-11-24 09:57:38.356146326 +0000 UTC m=+0.098821958 container health_status c4a7fece2a8abbd773f184380ebf0443df5298e1c3e7a580495db5c46749acd2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118)
Nov 24 09:57:38 compute-1 ceph-mon[80009]: pgmap v923: 353 pgs: 353 active+clean; 121 MiB data, 315 MiB used, 60 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 91 op/s
Nov 24 09:57:38 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:57:38 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:57:38 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:57:38.802 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:57:39 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:57:39 compute-1 ceph-mon[80009]: from='client.? 192.168.122.100:0/2735909461' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 09:57:40 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:57:40 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:57:40 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:57:40.229 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:57:40 compute-1 ceph-mon[80009]: pgmap v924: 353 pgs: 353 active+clean; 121 MiB data, 315 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 13 KiB/s wr, 29 op/s
Nov 24 09:57:40 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:57:40 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:57:40 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:57:40.804 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:57:40 compute-1 nova_compute[230010]: 2025-11-24 09:57:40.875 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:57:41 compute-1 ceph-mon[80009]: pgmap v925: 353 pgs: 353 active+clean; 121 MiB data, 315 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 13 KiB/s wr, 29 op/s
Nov 24 09:57:42 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:57:42 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:57:42 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:57:42.232 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:57:42 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:57:42 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:57:42 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:57:42.807 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:57:42 compute-1 nova_compute[230010]: 2025-11-24 09:57:42.878 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:57:42 compute-1 nova_compute[230010]: 2025-11-24 09:57:42.959 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:57:43 compute-1 nova_compute[230010]: 2025-11-24 09:57:43.032 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:57:44 compute-1 ceph-mon[80009]: pgmap v926: 353 pgs: 353 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 39 KiB/s rd, 14 KiB/s wr, 57 op/s
Nov 24 09:57:44 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:57:44 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 09:57:44 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:57:44.235 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 09:57:44 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:57:44 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:57:44 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:57:44 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:57:44.810 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:57:45 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 09:57:45 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:57:45 compute-1 nova_compute[230010]: 2025-11-24 09:57:45.878 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:57:46 compute-1 ceph-mon[80009]: pgmap v927: 353 pgs: 353 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 39 KiB/s rd, 14 KiB/s wr, 56 op/s
Nov 24 09:57:46 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:57:46 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:57:46 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:57:46 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:57:46.238 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:57:46 compute-1 podman[239029]: 2025-11-24 09:57:46.353443878 +0000 UTC m=+0.082162129 container health_status 6794d9d2f16f865643977633ea2bfec0506af6c4f15262c278b71a4c0754ea0b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 24 09:57:46 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:57:46 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:57:46 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:57:46.813 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:57:46 compute-1 sudo[239048]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 09:57:46 compute-1 sudo[239048]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:57:46 compute-1 sudo[239048]: pam_unix(sudo:session): session closed for user root
Nov 24 09:57:47 compute-1 nova_compute[230010]: 2025-11-24 09:57:47.798 230014 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763978252.7965646, 9558b085-fcfb-4cae-87bc-2840f81734fc => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 24 09:57:47 compute-1 nova_compute[230010]: 2025-11-24 09:57:47.799 230014 INFO nova.compute.manager [-] [instance: 9558b085-fcfb-4cae-87bc-2840f81734fc] VM Stopped (Lifecycle Event)
Nov 24 09:57:47 compute-1 nova_compute[230010]: 2025-11-24 09:57:47.817 230014 DEBUG nova.compute.manager [None req-9c4d198b-61bb-4163-9c0a-72987176e301 - - - - - -] [instance: 9558b085-fcfb-4cae-87bc-2840f81734fc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
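The Stopped lifecycle event arrives from libvirt well after nova finished the delete, which is why the handler re-checks the power state. A sketch of that check with libvirt-python (illustrative only: by this point the domain is already undefined, so the lookup raises libvirtError):

```python
# Sketch of the power-state check behind "_get_power_state" above,
# using libvirt-python with nova's usual qemu system URI.
import libvirt

conn = libvirt.open('qemu:///system')
try:
    dom = conn.lookupByUUIDString('9558b085-fcfb-4cae-87bc-2840f81734fc')
    state, _reason = dom.state()
    print(state == libvirt.VIR_DOMAIN_SHUTOFF)
except libvirt.libvirtError:
    print('domain no longer defined')   # expected after a completed delete
finally:
    conn.close()
```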
Nov 24 09:57:47 compute-1 nova_compute[230010]: 2025-11-24 09:57:47.881 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:57:48 compute-1 ceph-mon[80009]: pgmap v928: 353 pgs: 353 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 39 KiB/s rd, 14 KiB/s wr, 57 op/s
Nov 24 09:57:48 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:57:48 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:57:48 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:57:48.241 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:57:48 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:57:48 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:57:48 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:57:48.816 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:57:49 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:57:50 compute-1 ceph-mon[80009]: pgmap v929: 353 pgs: 353 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Nov 24 09:57:50 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:57:50 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:57:50 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:57:50.245 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:57:50 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:57:50 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:57:50 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:57:50.820 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:57:50 compute-1 nova_compute[230010]: 2025-11-24 09:57:50.904 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:57:52 compute-1 ceph-mon[80009]: pgmap v930: 353 pgs: 353 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Nov 24 09:57:52 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:57:52 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:57:52 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:57:52.247 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:57:52 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:57:52 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:57:52 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:57:52.822 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:57:52 compute-1 nova_compute[230010]: 2025-11-24 09:57:52.885 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:57:54 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:57:54 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:57:54 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:57:54.250 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:57:54 compute-1 ceph-mon[80009]: pgmap v931: 353 pgs: 353 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 1.2 KiB/s wr, 30 op/s
Nov 24 09:57:54 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:57:54 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:57:54 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:57:54 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:57:54.826 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:57:55 compute-1 nova_compute[230010]: 2025-11-24 09:57:55.908 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:57:56 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:57:56 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:57:56 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:57:56.253 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:57:56 compute-1 ceph-mon[80009]: pgmap v932: 353 pgs: 353 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 0 B/s wr, 1 op/s
Nov 24 09:57:56 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:57:56 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:57:56 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:57:56.828 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:57:57 compute-1 nova_compute[230010]: 2025-11-24 09:57:57.183 230014 DEBUG oslo_concurrency.lockutils [None req-5b0369d5-0306-4be2-b5b8-b16b5d898411 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Acquiring lock "62465e3c-a372-4121-8a2e-5e10d1c3faf6" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 09:57:57 compute-1 nova_compute[230010]: 2025-11-24 09:57:57.184 230014 DEBUG oslo_concurrency.lockutils [None req-5b0369d5-0306-4be2-b5b8-b16b5d898411 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lock "62465e3c-a372-4121-8a2e-5e10d1c3faf6" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 09:57:57 compute-1 nova_compute[230010]: 2025-11-24 09:57:57.196 230014 DEBUG nova.compute.manager [None req-5b0369d5-0306-4be2-b5b8-b16b5d898411 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 62465e3c-a372-4121-8a2e-5e10d1c3faf6] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 24 09:57:57 compute-1 nova_compute[230010]: 2025-11-24 09:57:57.255 230014 DEBUG oslo_concurrency.lockutils [None req-5b0369d5-0306-4be2-b5b8-b16b5d898411 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 09:57:57 compute-1 nova_compute[230010]: 2025-11-24 09:57:57.255 230014 DEBUG oslo_concurrency.lockutils [None req-5b0369d5-0306-4be2-b5b8-b16b5d898411 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 09:57:57 compute-1 nova_compute[230010]: 2025-11-24 09:57:57.263 230014 DEBUG nova.virt.hardware [None req-5b0369d5-0306-4be2-b5b8-b16b5d898411 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 24 09:57:57 compute-1 nova_compute[230010]: 2025-11-24 09:57:57.263 230014 INFO nova.compute.claims [None req-5b0369d5-0306-4be2-b5b8-b16b5d898411 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 62465e3c-a372-4121-8a2e-5e10d1c3faf6] Claim successful on node compute-1.ctlplane.example.com
Nov 24 09:57:57 compute-1 nova_compute[230010]: 2025-11-24 09:57:57.350 230014 DEBUG oslo_concurrency.processutils [None req-5b0369d5-0306-4be2-b5b8-b16b5d898411 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 09:57:57 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 24 09:57:57 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4049579656' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 09:57:57 compute-1 nova_compute[230010]: 2025-11-24 09:57:57.818 230014 DEBUG oslo_concurrency.processutils [None req-5b0369d5-0306-4be2-b5b8-b16b5d898411 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.467s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 09:57:57 compute-1 nova_compute[230010]: 2025-11-24 09:57:57.823 230014 DEBUG nova.compute.provider_tree [None req-5b0369d5-0306-4be2-b5b8-b16b5d898411 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Inventory has not changed in ProviderTree for provider: 1b7b0f22-dba8-42a8-9de3-763c9152946e update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 24 09:57:57 compute-1 nova_compute[230010]: 2025-11-24 09:57:57.836 230014 DEBUG nova.scheduler.client.report [None req-5b0369d5-0306-4be2-b5b8-b16b5d898411 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Inventory has not changed for provider 1b7b0f22-dba8-42a8-9de3-763c9152946e based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 24 09:57:57 compute-1 nova_compute[230010]: 2025-11-24 09:57:57.852 230014 DEBUG oslo_concurrency.lockutils [None req-5b0369d5-0306-4be2-b5b8-b16b5d898411 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.597s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 09:57:57 compute-1 nova_compute[230010]: 2025-11-24 09:57:57.853 230014 DEBUG nova.compute.manager [None req-5b0369d5-0306-4be2-b5b8-b16b5d898411 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 62465e3c-a372-4121-8a2e-5e10d1c3faf6] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 24 09:57:57 compute-1 nova_compute[230010]: 2025-11-24 09:57:57.888 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:57:57 compute-1 nova_compute[230010]: 2025-11-24 09:57:57.900 230014 DEBUG nova.compute.manager [None req-5b0369d5-0306-4be2-b5b8-b16b5d898411 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 62465e3c-a372-4121-8a2e-5e10d1c3faf6] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 24 09:57:57 compute-1 nova_compute[230010]: 2025-11-24 09:57:57.900 230014 DEBUG nova.network.neutron [None req-5b0369d5-0306-4be2-b5b8-b16b5d898411 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 62465e3c-a372-4121-8a2e-5e10d1c3faf6] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 24 09:57:57 compute-1 nova_compute[230010]: 2025-11-24 09:57:57.918 230014 INFO nova.virt.libvirt.driver [None req-5b0369d5-0306-4be2-b5b8-b16b5d898411 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 62465e3c-a372-4121-8a2e-5e10d1c3faf6] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 24 09:57:57 compute-1 nova_compute[230010]: 2025-11-24 09:57:57.939 230014 DEBUG nova.compute.manager [None req-5b0369d5-0306-4be2-b5b8-b16b5d898411 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 62465e3c-a372-4121-8a2e-5e10d1c3faf6] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 24 09:57:58 compute-1 nova_compute[230010]: 2025-11-24 09:57:58.063 230014 DEBUG nova.compute.manager [None req-5b0369d5-0306-4be2-b5b8-b16b5d898411 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 62465e3c-a372-4121-8a2e-5e10d1c3faf6] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 24 09:57:58 compute-1 nova_compute[230010]: 2025-11-24 09:57:58.066 230014 DEBUG nova.virt.libvirt.driver [None req-5b0369d5-0306-4be2-b5b8-b16b5d898411 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 62465e3c-a372-4121-8a2e-5e10d1c3faf6] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 24 09:57:58 compute-1 nova_compute[230010]: 2025-11-24 09:57:58.066 230014 INFO nova.virt.libvirt.driver [None req-5b0369d5-0306-4be2-b5b8-b16b5d898411 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 62465e3c-a372-4121-8a2e-5e10d1c3faf6] Creating image(s)
Nov 24 09:57:58 compute-1 nova_compute[230010]: 2025-11-24 09:57:58.091 230014 DEBUG nova.storage.rbd_utils [None req-5b0369d5-0306-4be2-b5b8-b16b5d898411 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] rbd image 62465e3c-a372-4121-8a2e-5e10d1c3faf6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 24 09:57:58 compute-1 nova_compute[230010]: 2025-11-24 09:57:58.117 230014 DEBUG nova.storage.rbd_utils [None req-5b0369d5-0306-4be2-b5b8-b16b5d898411 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] rbd image 62465e3c-a372-4121-8a2e-5e10d1c3faf6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 24 09:57:58 compute-1 nova_compute[230010]: 2025-11-24 09:57:58.143 230014 DEBUG nova.storage.rbd_utils [None req-5b0369d5-0306-4be2-b5b8-b16b5d898411 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] rbd image 62465e3c-a372-4121-8a2e-5e10d1c3faf6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 24 09:57:58 compute-1 nova_compute[230010]: 2025-11-24 09:57:58.147 230014 DEBUG oslo_concurrency.processutils [None req-5b0369d5-0306-4be2-b5b8-b16b5d898411 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/2ed5c667523487159c4c4503c82babbc95dbae40 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 09:57:58 compute-1 nova_compute[230010]: 2025-11-24 09:57:58.234 230014 DEBUG oslo_concurrency.processutils [None req-5b0369d5-0306-4be2-b5b8-b16b5d898411 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/2ed5c667523487159c4c4503c82babbc95dbae40 --force-share --output=json" returned: 0 in 0.087s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 09:57:58 compute-1 nova_compute[230010]: 2025-11-24 09:57:58.235 230014 DEBUG oslo_concurrency.lockutils [None req-5b0369d5-0306-4be2-b5b8-b16b5d898411 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Acquiring lock "2ed5c667523487159c4c4503c82babbc95dbae40" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 09:57:58 compute-1 nova_compute[230010]: 2025-11-24 09:57:58.236 230014 DEBUG oslo_concurrency.lockutils [None req-5b0369d5-0306-4be2-b5b8-b16b5d898411 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lock "2ed5c667523487159c4c4503c82babbc95dbae40" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 09:57:58 compute-1 nova_compute[230010]: 2025-11-24 09:57:58.236 230014 DEBUG oslo_concurrency.lockutils [None req-5b0369d5-0306-4be2-b5b8-b16b5d898411 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lock "2ed5c667523487159c4c4503c82babbc95dbae40" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
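Before populating the image cache, nova probes the base file with qemu-img, wrapped in oslo_concurrency.prlimit to cap the helper at 1 GiB of address space and 30 s of CPU (the --as/--cpu flags in the command above). The unwrapped equivalent:

```python
# The qemu-img probe above, without the prlimit wrapper.
import json
import subprocess

BASE = '/var/lib/nova/instances/_base/2ed5c667523487159c4c4503c82babbc95dbae40'
info = json.loads(subprocess.check_output(
    ['qemu-img', 'info', '--force-share', '--output=json', BASE]))
print(info['format'], info['virtual-size'])   # image format and size in bytes
```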
Nov 24 09:57:58 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:57:58 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:57:58 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:57:58.256 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:57:58 compute-1 nova_compute[230010]: 2025-11-24 09:57:58.259 230014 DEBUG nova.storage.rbd_utils [None req-5b0369d5-0306-4be2-b5b8-b16b5d898411 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] rbd image 62465e3c-a372-4121-8a2e-5e10d1c3faf6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 24 09:57:58 compute-1 nova_compute[230010]: 2025-11-24 09:57:58.263 230014 DEBUG oslo_concurrency.processutils [None req-5b0369d5-0306-4be2-b5b8-b16b5d898411 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/2ed5c667523487159c4c4503c82babbc95dbae40 62465e3c-a372-4121-8a2e-5e10d1c3faf6_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 09:57:58 compute-1 ceph-mon[80009]: pgmap v933: 353 pgs: 353 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 103 KiB/s rd, 0 B/s wr, 171 op/s
Nov 24 09:57:58 compute-1 ceph-mon[80009]: from='client.? 192.168.122.101:0/4049579656' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 09:57:58 compute-1 nova_compute[230010]: 2025-11-24 09:57:58.384 230014 DEBUG nova.policy [None req-5b0369d5-0306-4be2-b5b8-b16b5d898411 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '43f79ff3105e4372a3c095e8057d4f1f', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '94d069fc040647d5a6e54894eec915fe', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
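The policy failure above is benign: the caller only holds reader/member roles, so nova's oslo.policy check for network:attach_external_network denies it and the build proceeds without an external network. A minimal enforcement sketch (the 'role:admin' rule string is a stand-in for illustration, not nova's exact default):

```python
# Minimal oslo.policy check mirroring the denied rule above.
from oslo_config import cfg
from oslo_policy import policy

enforcer = policy.Enforcer(cfg.CONF)
enforcer.register_default(policy.RuleDefault(
    'network:attach_external_network', 'role:admin'))   # stand-in rule string
creds = {'roles': ['reader', 'member'],
         'project_id': '94d069fc040647d5a6e54894eec915fe'}
print(enforcer.enforce('network:attach_external_network', {}, creds))  # False
```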
Nov 24 09:57:58 compute-1 nova_compute[230010]: 2025-11-24 09:57:58.519 230014 DEBUG oslo_concurrency.processutils [None req-5b0369d5-0306-4be2-b5b8-b16b5d898411 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/2ed5c667523487159c4c4503c82babbc95dbae40 62465e3c-a372-4121-8a2e-5e10d1c3faf6_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.256s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 09:57:58 compute-1 nova_compute[230010]: 2025-11-24 09:57:58.589 230014 DEBUG nova.storage.rbd_utils [None req-5b0369d5-0306-4be2-b5b8-b16b5d898411 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] resizing rbd image 62465e3c-a372-4121-8a2e-5e10d1c3faf6_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 24 09:57:58 compute-1 nova_compute[230010]: 2025-11-24 09:57:58.684 230014 DEBUG nova.objects.instance [None req-5b0369d5-0306-4be2-b5b8-b16b5d898411 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lazy-loading 'migration_context' on Instance uuid 62465e3c-a372-4121-8a2e-5e10d1c3faf6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 24 09:57:58 compute-1 nova_compute[230010]: 2025-11-24 09:57:58.695 230014 DEBUG nova.virt.libvirt.driver [None req-5b0369d5-0306-4be2-b5b8-b16b5d898411 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 62465e3c-a372-4121-8a2e-5e10d1c3faf6] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
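The flavor's 1 GiB root disk materializes as an rbd import of the cached base image followed by a resize to 1073741824 bytes; the import runs as the CLI command logged above, while the resize goes through the librbd Python binding. CLI equivalents of the pair:

```python
# CLI equivalents of the import + resize pair logged above.
import subprocess

BASE = '/var/lib/nova/instances/_base/2ed5c667523487159c4c4503c82babbc95dbae40'
DISK = '62465e3c-a372-4121-8a2e-5e10d1c3faf6_disk'
CEPH = ['--id', 'openstack', '--conf', '/etc/ceph/ceph.conf']

subprocess.check_call(['rbd', 'import', '--pool', 'vms', BASE, DISK,
                       '--image-format=2'] + CEPH)
subprocess.check_call(['rbd', 'resize', '--pool', 'vms', DISK,
                       '--size', '1024'] + CEPH)   # 1024 MiB = 1073741824 B
```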
Nov 24 09:57:58 compute-1 nova_compute[230010]: 2025-11-24 09:57:58.696 230014 DEBUG nova.virt.libvirt.driver [None req-5b0369d5-0306-4be2-b5b8-b16b5d898411 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 62465e3c-a372-4121-8a2e-5e10d1c3faf6] Ensure instance console log exists: /var/lib/nova/instances/62465e3c-a372-4121-8a2e-5e10d1c3faf6/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 24 09:57:58 compute-1 nova_compute[230010]: 2025-11-24 09:57:58.696 230014 DEBUG oslo_concurrency.lockutils [None req-5b0369d5-0306-4be2-b5b8-b16b5d898411 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 09:57:58 compute-1 nova_compute[230010]: 2025-11-24 09:57:58.696 230014 DEBUG oslo_concurrency.lockutils [None req-5b0369d5-0306-4be2-b5b8-b16b5d898411 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 09:57:58 compute-1 nova_compute[230010]: 2025-11-24 09:57:58.697 230014 DEBUG oslo_concurrency.lockutils [None req-5b0369d5-0306-4be2-b5b8-b16b5d898411 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 09:57:58 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:57:58 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:57:58 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:57:58.831 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:57:58 compute-1 nova_compute[230010]: 2025-11-24 09:57:58.940 230014 DEBUG nova.network.neutron [None req-5b0369d5-0306-4be2-b5b8-b16b5d898411 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 62465e3c-a372-4121-8a2e-5e10d1c3faf6] Successfully created port: bf41c673-482b-42e3-ac98-475b716fa0e9 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 24 09:57:59 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:57:59 compute-1 ceph-mon[80009]: pgmap v934: 353 pgs: 353 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 103 KiB/s rd, 0 B/s wr, 170 op/s
Nov 24 09:57:59 compute-1 nova_compute[230010]: 2025-11-24 09:57:59.875 230014 DEBUG nova.network.neutron [None req-5b0369d5-0306-4be2-b5b8-b16b5d898411 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 62465e3c-a372-4121-8a2e-5e10d1c3faf6] Successfully updated port: bf41c673-482b-42e3-ac98-475b716fa0e9 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
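Port bf41c673-482b-42e3-ac98-475b716fa0e9 is created first with minimal attributes and then updated once nova knows the binding details, which is why a network-changed event races in from neutron almost immediately below. On the API side the first step corresponds roughly to an openstacksdk call like this (cloud name and network UUID are assumptions):

```python
# Hedged sketch of the initial port create behind the log line above.
import openstack

conn = openstack.connect(cloud='overcloud')          # assumed clouds.yaml entry
port = conn.network.create_port(
    network_id='NETWORK_UUID',                       # placeholder, not in this log
    device_id='62465e3c-a372-4121-8a2e-5e10d1c3faf6',
    device_owner='compute:nova')
print(port.id)
```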
Nov 24 09:57:59 compute-1 nova_compute[230010]: 2025-11-24 09:57:59.888 230014 DEBUG oslo_concurrency.lockutils [None req-5b0369d5-0306-4be2-b5b8-b16b5d898411 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Acquiring lock "refresh_cache-62465e3c-a372-4121-8a2e-5e10d1c3faf6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 24 09:57:59 compute-1 nova_compute[230010]: 2025-11-24 09:57:59.888 230014 DEBUG oslo_concurrency.lockutils [None req-5b0369d5-0306-4be2-b5b8-b16b5d898411 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Acquired lock "refresh_cache-62465e3c-a372-4121-8a2e-5e10d1c3faf6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 24 09:57:59 compute-1 nova_compute[230010]: 2025-11-24 09:57:59.889 230014 DEBUG nova.network.neutron [None req-5b0369d5-0306-4be2-b5b8-b16b5d898411 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 62465e3c-a372-4121-8a2e-5e10d1c3faf6] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 24 09:57:59 compute-1 nova_compute[230010]: 2025-11-24 09:57:59.952 230014 DEBUG nova.compute.manager [req-d08486ba-a9f0-42e9-96e5-b658263b9f30 req-ea013464-d611-4964-b567-14d3c19cb126 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 62465e3c-a372-4121-8a2e-5e10d1c3faf6] Received event network-changed-bf41c673-482b-42e3-ac98-475b716fa0e9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 24 09:57:59 compute-1 nova_compute[230010]: 2025-11-24 09:57:59.952 230014 DEBUG nova.compute.manager [req-d08486ba-a9f0-42e9-96e5-b658263b9f30 req-ea013464-d611-4964-b567-14d3c19cb126 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 62465e3c-a372-4121-8a2e-5e10d1c3faf6] Refreshing instance network info cache due to event network-changed-bf41c673-482b-42e3-ac98-475b716fa0e9. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 24 09:57:59 compute-1 nova_compute[230010]: 2025-11-24 09:57:59.953 230014 DEBUG oslo_concurrency.lockutils [req-d08486ba-a9f0-42e9-96e5-b658263b9f30 req-ea013464-d611-4964-b567-14d3c19cb126 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Acquiring lock "refresh_cache-62465e3c-a372-4121-8a2e-5e10d1c3faf6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 24 09:58:00 compute-1 nova_compute[230010]: 2025-11-24 09:58:00.010 230014 DEBUG nova.network.neutron [None req-5b0369d5-0306-4be2-b5b8-b16b5d898411 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 62465e3c-a372-4121-8a2e-5e10d1c3faf6] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 24 09:58:00 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:58:00 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:58:00 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:58:00.259 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:58:00 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 09:58:00 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:58:00 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:58:00 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:58:00 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:58:00 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:58:00.834 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:58:00 compute-1 nova_compute[230010]: 2025-11-24 09:58:00.945 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:58:01 compute-1 ceph-mon[80009]: pgmap v935: 353 pgs: 353 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 103 KiB/s rd, 0 B/s wr, 170 op/s
Nov 24 09:58:02 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:58:02 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:58:02 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:58:02.262 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:58:02 compute-1 nova_compute[230010]: 2025-11-24 09:58:02.612 230014 DEBUG nova.network.neutron [None req-5b0369d5-0306-4be2-b5b8-b16b5d898411 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 62465e3c-a372-4121-8a2e-5e10d1c3faf6] Updating instance_info_cache with network_info: [{"id": "bf41c673-482b-42e3-ac98-475b716fa0e9", "address": "fa:16:3e:99:a7:ce", "network": {"id": "81f18750-9169-4587-b6ca-88a2bbc58afc", "bridge": "br-int", "label": "tempest-network-smoke--1543163911", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94d069fc040647d5a6e54894eec915fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf41c673-48", "ovs_interfaceid": "bf41c673-482b-42e3-ac98-475b716fa0e9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
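
The network_info blob in the cache-update line above is plain JSON once the log prefix is stripped. A small sketch of pulling the fixed IP and MTU out of it; nw_json below is a trimmed excerpt of the logged blob, kept only to make the sketch runnable:

    import json

    nw_json = '''[{"id": "bf41c673-482b-42e3-ac98-475b716fa0e9",
                   "network": {"subnets": [{"ips": [{"address": "10.100.0.8"}]}],
                               "meta": {"mtu": 1442}}}]'''
    port = json.loads(nw_json)[0]
    fixed_ip = port["network"]["subnets"][0]["ips"][0]["address"]  # "10.100.0.8"
    mtu = port["network"]["meta"]["mtu"]                           # 1442
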
Nov 24 09:58:02 compute-1 nova_compute[230010]: 2025-11-24 09:58:02.630 230014 DEBUG oslo_concurrency.lockutils [None req-5b0369d5-0306-4be2-b5b8-b16b5d898411 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Releasing lock "refresh_cache-62465e3c-a372-4121-8a2e-5e10d1c3faf6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 24 09:58:02 compute-1 nova_compute[230010]: 2025-11-24 09:58:02.631 230014 DEBUG nova.compute.manager [None req-5b0369d5-0306-4be2-b5b8-b16b5d898411 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 62465e3c-a372-4121-8a2e-5e10d1c3faf6] Instance network_info: |[{"id": "bf41c673-482b-42e3-ac98-475b716fa0e9", "address": "fa:16:3e:99:a7:ce", "network": {"id": "81f18750-9169-4587-b6ca-88a2bbc58afc", "bridge": "br-int", "label": "tempest-network-smoke--1543163911", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94d069fc040647d5a6e54894eec915fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf41c673-48", "ovs_interfaceid": "bf41c673-482b-42e3-ac98-475b716fa0e9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 24 09:58:02 compute-1 nova_compute[230010]: 2025-11-24 09:58:02.631 230014 DEBUG oslo_concurrency.lockutils [req-d08486ba-a9f0-42e9-96e5-b658263b9f30 req-ea013464-d611-4964-b567-14d3c19cb126 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Acquired lock "refresh_cache-62465e3c-a372-4121-8a2e-5e10d1c3faf6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 24 09:58:02 compute-1 nova_compute[230010]: 2025-11-24 09:58:02.632 230014 DEBUG nova.network.neutron [req-d08486ba-a9f0-42e9-96e5-b658263b9f30 req-ea013464-d611-4964-b567-14d3c19cb126 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 62465e3c-a372-4121-8a2e-5e10d1c3faf6] Refreshing network info cache for port bf41c673-482b-42e3-ac98-475b716fa0e9 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 24 09:58:02 compute-1 nova_compute[230010]: 2025-11-24 09:58:02.635 230014 DEBUG nova.virt.libvirt.driver [None req-5b0369d5-0306-4be2-b5b8-b16b5d898411 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 62465e3c-a372-4121-8a2e-5e10d1c3faf6] Start _get_guest_xml network_info=[{"id": "bf41c673-482b-42e3-ac98-475b716fa0e9", "address": "fa:16:3e:99:a7:ce", "network": {"id": "81f18750-9169-4587-b6ca-88a2bbc58afc", "bridge": "br-int", "label": "tempest-network-smoke--1543163911", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94d069fc040647d5a6e54894eec915fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf41c673-48", "ovs_interfaceid": "bf41c673-482b-42e3-ac98-475b716fa0e9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-24T09:52:37Z,direct_url=<?>,disk_format='qcow2',id=6ef14bdf-4f04-4400-8040-4409d9d5271e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='cf636babb68a4ebe9bf137d3fe0e4c0c',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-24T09:52:41Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'encryption_options': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'boot_index': 0, 'size': 0, 'disk_bus': 'virtio', 'encrypted': False, 'encryption_secret_uuid': None, 'image_id': '6ef14bdf-4f04-4400-8040-4409d9d5271e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 24 09:58:02 compute-1 nova_compute[230010]: 2025-11-24 09:58:02.641 230014 WARNING nova.virt.libvirt.driver [None req-5b0369d5-0306-4be2-b5b8-b16b5d898411 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 24 09:58:02 compute-1 ceph-mon[80009]: from='client.? 192.168.122.10:0/1301144539' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 24 09:58:02 compute-1 ceph-mon[80009]: from='client.? 192.168.122.10:0/1301144539' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 24 09:58:02 compute-1 nova_compute[230010]: 2025-11-24 09:58:02.646 230014 DEBUG nova.virt.libvirt.host [None req-5b0369d5-0306-4be2-b5b8-b16b5d898411 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 24 09:58:02 compute-1 nova_compute[230010]: 2025-11-24 09:58:02.647 230014 DEBUG nova.virt.libvirt.host [None req-5b0369d5-0306-4be2-b5b8-b16b5d898411 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 24 09:58:02 compute-1 nova_compute[230010]: 2025-11-24 09:58:02.652 230014 DEBUG nova.virt.libvirt.host [None req-5b0369d5-0306-4be2-b5b8-b16b5d898411 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 24 09:58:02 compute-1 nova_compute[230010]: 2025-11-24 09:58:02.653 230014 DEBUG nova.virt.libvirt.host [None req-5b0369d5-0306-4be2-b5b8-b16b5d898411 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
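
The two host probes above look for a CPU controller first under cgroups v1 (missing) and then under cgroups v2 (found). A hedged sketch of a v2 check, assuming the standard unified-hierarchy mount point rather than nova's exact implementation:

    def has_cgroupsv2_cpu_controller(path="/sys/fs/cgroup/cgroup.controllers"):
        # Under cgroups v2 the root hierarchy lists its available controllers
        # in one file; "cpu" being present is what "CPU controller found on
        # host." reports above.
        try:
            with open(path) as f:
                return "cpu" in f.read().split()
        except FileNotFoundError:
            return False  # no unified hierarchy mounted
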
Nov 24 09:58:02 compute-1 nova_compute[230010]: 2025-11-24 09:58:02.653 230014 DEBUG nova.virt.libvirt.driver [None req-5b0369d5-0306-4be2-b5b8-b16b5d898411 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 24 09:58:02 compute-1 nova_compute[230010]: 2025-11-24 09:58:02.653 230014 DEBUG nova.virt.hardware [None req-5b0369d5-0306-4be2-b5b8-b16b5d898411 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-24T09:52:36Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='4a5d03ad-925b-45f1-89bd-f1325f9f3292',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-24T09:52:37Z,direct_url=<?>,disk_format='qcow2',id=6ef14bdf-4f04-4400-8040-4409d9d5271e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='cf636babb68a4ebe9bf137d3fe0e4c0c',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-24T09:52:41Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 24 09:58:02 compute-1 nova_compute[230010]: 2025-11-24 09:58:02.654 230014 DEBUG nova.virt.hardware [None req-5b0369d5-0306-4be2-b5b8-b16b5d898411 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 24 09:58:02 compute-1 nova_compute[230010]: 2025-11-24 09:58:02.654 230014 DEBUG nova.virt.hardware [None req-5b0369d5-0306-4be2-b5b8-b16b5d898411 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 24 09:58:02 compute-1 nova_compute[230010]: 2025-11-24 09:58:02.654 230014 DEBUG nova.virt.hardware [None req-5b0369d5-0306-4be2-b5b8-b16b5d898411 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 24 09:58:02 compute-1 nova_compute[230010]: 2025-11-24 09:58:02.654 230014 DEBUG nova.virt.hardware [None req-5b0369d5-0306-4be2-b5b8-b16b5d898411 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 24 09:58:02 compute-1 nova_compute[230010]: 2025-11-24 09:58:02.655 230014 DEBUG nova.virt.hardware [None req-5b0369d5-0306-4be2-b5b8-b16b5d898411 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 24 09:58:02 compute-1 nova_compute[230010]: 2025-11-24 09:58:02.655 230014 DEBUG nova.virt.hardware [None req-5b0369d5-0306-4be2-b5b8-b16b5d898411 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 24 09:58:02 compute-1 nova_compute[230010]: 2025-11-24 09:58:02.655 230014 DEBUG nova.virt.hardware [None req-5b0369d5-0306-4be2-b5b8-b16b5d898411 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 24 09:58:02 compute-1 nova_compute[230010]: 2025-11-24 09:58:02.655 230014 DEBUG nova.virt.hardware [None req-5b0369d5-0306-4be2-b5b8-b16b5d898411 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 24 09:58:02 compute-1 nova_compute[230010]: 2025-11-24 09:58:02.656 230014 DEBUG nova.virt.hardware [None req-5b0369d5-0306-4be2-b5b8-b16b5d898411 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 24 09:58:02 compute-1 nova_compute[230010]: 2025-11-24 09:58:02.656 230014 DEBUG nova.virt.hardware [None req-5b0369d5-0306-4be2-b5b8-b16b5d898411 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
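
The hardware.py walk above (limits 0:0:0, maxima 65536:65536:65536, one candidate 1:1:1) reduces to enumerating factorizations sockets*cores*threads == vcpus. A self-contained sketch that reproduces the "Got 1 possible topologies" result for this 1-vCPU flavor:

    def possible_topologies(vcpus):
        # Enumerate every (sockets, cores, threads) whose product equals vcpus.
        topos = []
        for sockets in range(1, vcpus + 1):
            if vcpus % sockets:
                continue
            per_socket = vcpus // sockets
            for cores in range(1, per_socket + 1):
                if per_socket % cores:
                    continue
                topos.append((sockets, cores, per_socket // cores))
        return topos

    print(possible_topologies(1))  # [(1, 1, 1)] -- one possible topology
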
Nov 24 09:58:02 compute-1 nova_compute[230010]: 2025-11-24 09:58:02.658 230014 DEBUG oslo_concurrency.processutils [None req-5b0369d5-0306-4be2-b5b8-b16b5d898411 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 09:58:02 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:58:02 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:58:02 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:58:02.837 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:58:02 compute-1 nova_compute[230010]: 2025-11-24 09:58:02.892 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:58:03 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Nov 24 09:58:03 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4014279327' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 24 09:58:03 compute-1 nova_compute[230010]: 2025-11-24 09:58:03.102 230014 DEBUG oslo_concurrency.processutils [None req-5b0369d5-0306-4be2-b5b8-b16b5d898411 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.443s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 09:58:03 compute-1 nova_compute[230010]: 2025-11-24 09:58:03.130 230014 DEBUG nova.storage.rbd_utils [None req-5b0369d5-0306-4be2-b5b8-b16b5d898411 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] rbd image 62465e3c-a372-4121-8a2e-5e10d1c3faf6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 24 09:58:03 compute-1 nova_compute[230010]: 2025-11-24 09:58:03.134 230014 DEBUG oslo_concurrency.processutils [None req-5b0369d5-0306-4be2-b5b8-b16b5d898411 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 09:58:03 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Nov 24 09:58:03 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/625293257' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 24 09:58:03 compute-1 nova_compute[230010]: 2025-11-24 09:58:03.561 230014 DEBUG oslo_concurrency.processutils [None req-5b0369d5-0306-4be2-b5b8-b16b5d898411 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.426s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
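
Both subprocess calls above fetch the monitor map so the driver can embed monitor endpoints in the guest's RBD disk definitions. A sketch using the same oslo processutils API the log cites; the JSON keys are those of ceph's "mon dump" output and are an assumption of this sketch:

    import json
    from oslo_concurrency import processutils

    out, _err = processutils.execute(
        "ceph", "mon", "dump", "--format=json",
        "--id", "openstack", "--conf", "/etc/ceph/ceph.conf")
    monmap = json.loads(out)
    addrs = [m["public_addr"] for m in monmap["mons"]]
    # -> the three 192.168.122.10x:6789 endpoints that appear as <host>
    #    entries in the guest XML dumped below.
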
Nov 24 09:58:03 compute-1 nova_compute[230010]: 2025-11-24 09:58:03.563 230014 DEBUG nova.virt.libvirt.vif [None req-5b0369d5-0306-4be2-b5b8-b16b5d898411 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-24T09:57:56Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1468987490',display_name='tempest-TestNetworkBasicOps-server-1468987490',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1468987490',id=6,image_ref='6ef14bdf-4f04-4400-8040-4409d9d5271e',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJeLeNtgMDECCA396nl5/z6TsnAPH3kX9ECWzaWuLvptXvMaJaj/WlHKUFyFRR30PurvGrDvNN2g1Ij1pTu0Su2H0Am0Z6Y5TdOjAAQXOQr2HISwvDDFzD9t0aaelZEbhw==',key_name='tempest-TestNetworkBasicOps-1307688110',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='94d069fc040647d5a6e54894eec915fe',ramdisk_id='',reservation_id='r-sy1yuug7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6ef14bdf-4f04-4400-8040-4409d9d5271e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1844071378',owner_user_name='tempest-TestNetworkBasicOps-1844071378-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-24T09:57:57Z,user_data=None,user_id='43f79ff3105e4372a3c095e8057d4f1f',uuid=62465e3c-a372-4121-8a2e-5e10d1c3faf6,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "bf41c673-482b-42e3-ac98-475b716fa0e9", "address": "fa:16:3e:99:a7:ce", "network": {"id": "81f18750-9169-4587-b6ca-88a2bbc58afc", "bridge": "br-int", "label": "tempest-network-smoke--1543163911", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94d069fc040647d5a6e54894eec915fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf41c673-48", "ovs_interfaceid": "bf41c673-482b-42e3-ac98-475b716fa0e9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 24 09:58:03 compute-1 nova_compute[230010]: 2025-11-24 09:58:03.563 230014 DEBUG nova.network.os_vif_util [None req-5b0369d5-0306-4be2-b5b8-b16b5d898411 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Converting VIF {"id": "bf41c673-482b-42e3-ac98-475b716fa0e9", "address": "fa:16:3e:99:a7:ce", "network": {"id": "81f18750-9169-4587-b6ca-88a2bbc58afc", "bridge": "br-int", "label": "tempest-network-smoke--1543163911", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94d069fc040647d5a6e54894eec915fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf41c673-48", "ovs_interfaceid": "bf41c673-482b-42e3-ac98-475b716fa0e9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 24 09:58:03 compute-1 nova_compute[230010]: 2025-11-24 09:58:03.564 230014 DEBUG nova.network.os_vif_util [None req-5b0369d5-0306-4be2-b5b8-b16b5d898411 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:99:a7:ce,bridge_name='br-int',has_traffic_filtering=True,id=bf41c673-482b-42e3-ac98-475b716fa0e9,network=Network(81f18750-9169-4587-b6ca-88a2bbc58afc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbf41c673-48') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 24 09:58:03 compute-1 nova_compute[230010]: 2025-11-24 09:58:03.565 230014 DEBUG nova.objects.instance [None req-5b0369d5-0306-4be2-b5b8-b16b5d898411 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lazy-loading 'pci_devices' on Instance uuid 62465e3c-a372-4121-8a2e-5e10d1c3faf6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 24 09:58:03 compute-1 nova_compute[230010]: 2025-11-24 09:58:03.581 230014 DEBUG nova.virt.libvirt.driver [None req-5b0369d5-0306-4be2-b5b8-b16b5d898411 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 62465e3c-a372-4121-8a2e-5e10d1c3faf6] End _get_guest_xml xml=<domain type="kvm">
Nov 24 09:58:03 compute-1 nova_compute[230010]:   <uuid>62465e3c-a372-4121-8a2e-5e10d1c3faf6</uuid>
Nov 24 09:58:03 compute-1 nova_compute[230010]:   <name>instance-00000006</name>
Nov 24 09:58:03 compute-1 nova_compute[230010]:   <memory>131072</memory>
Nov 24 09:58:03 compute-1 nova_compute[230010]:   <vcpu>1</vcpu>
Nov 24 09:58:03 compute-1 nova_compute[230010]:   <metadata>
Nov 24 09:58:03 compute-1 nova_compute[230010]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 24 09:58:03 compute-1 nova_compute[230010]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 24 09:58:03 compute-1 nova_compute[230010]:       <nova:name>tempest-TestNetworkBasicOps-server-1468987490</nova:name>
Nov 24 09:58:03 compute-1 nova_compute[230010]:       <nova:creationTime>2025-11-24 09:58:02</nova:creationTime>
Nov 24 09:58:03 compute-1 nova_compute[230010]:       <nova:flavor name="m1.nano">
Nov 24 09:58:03 compute-1 nova_compute[230010]:         <nova:memory>128</nova:memory>
Nov 24 09:58:03 compute-1 nova_compute[230010]:         <nova:disk>1</nova:disk>
Nov 24 09:58:03 compute-1 nova_compute[230010]:         <nova:swap>0</nova:swap>
Nov 24 09:58:03 compute-1 nova_compute[230010]:         <nova:ephemeral>0</nova:ephemeral>
Nov 24 09:58:03 compute-1 nova_compute[230010]:         <nova:vcpus>1</nova:vcpus>
Nov 24 09:58:03 compute-1 nova_compute[230010]:       </nova:flavor>
Nov 24 09:58:03 compute-1 nova_compute[230010]:       <nova:owner>
Nov 24 09:58:03 compute-1 nova_compute[230010]:         <nova:user uuid="43f79ff3105e4372a3c095e8057d4f1f">tempest-TestNetworkBasicOps-1844071378-project-member</nova:user>
Nov 24 09:58:03 compute-1 nova_compute[230010]:         <nova:project uuid="94d069fc040647d5a6e54894eec915fe">tempest-TestNetworkBasicOps-1844071378</nova:project>
Nov 24 09:58:03 compute-1 nova_compute[230010]:       </nova:owner>
Nov 24 09:58:03 compute-1 nova_compute[230010]:       <nova:root type="image" uuid="6ef14bdf-4f04-4400-8040-4409d9d5271e"/>
Nov 24 09:58:03 compute-1 nova_compute[230010]:       <nova:ports>
Nov 24 09:58:03 compute-1 nova_compute[230010]:         <nova:port uuid="bf41c673-482b-42e3-ac98-475b716fa0e9">
Nov 24 09:58:03 compute-1 nova_compute[230010]:           <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Nov 24 09:58:03 compute-1 nova_compute[230010]:         </nova:port>
Nov 24 09:58:03 compute-1 nova_compute[230010]:       </nova:ports>
Nov 24 09:58:03 compute-1 nova_compute[230010]:     </nova:instance>
Nov 24 09:58:03 compute-1 nova_compute[230010]:   </metadata>
Nov 24 09:58:03 compute-1 nova_compute[230010]:   <sysinfo type="smbios">
Nov 24 09:58:03 compute-1 nova_compute[230010]:     <system>
Nov 24 09:58:03 compute-1 nova_compute[230010]:       <entry name="manufacturer">RDO</entry>
Nov 24 09:58:03 compute-1 nova_compute[230010]:       <entry name="product">OpenStack Compute</entry>
Nov 24 09:58:03 compute-1 nova_compute[230010]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 24 09:58:03 compute-1 nova_compute[230010]:       <entry name="serial">62465e3c-a372-4121-8a2e-5e10d1c3faf6</entry>
Nov 24 09:58:03 compute-1 nova_compute[230010]:       <entry name="uuid">62465e3c-a372-4121-8a2e-5e10d1c3faf6</entry>
Nov 24 09:58:03 compute-1 nova_compute[230010]:       <entry name="family">Virtual Machine</entry>
Nov 24 09:58:03 compute-1 nova_compute[230010]:     </system>
Nov 24 09:58:03 compute-1 nova_compute[230010]:   </sysinfo>
Nov 24 09:58:03 compute-1 nova_compute[230010]:   <os>
Nov 24 09:58:03 compute-1 nova_compute[230010]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 24 09:58:03 compute-1 nova_compute[230010]:     <boot dev="hd"/>
Nov 24 09:58:03 compute-1 nova_compute[230010]:     <smbios mode="sysinfo"/>
Nov 24 09:58:03 compute-1 nova_compute[230010]:   </os>
Nov 24 09:58:03 compute-1 nova_compute[230010]:   <features>
Nov 24 09:58:03 compute-1 nova_compute[230010]:     <acpi/>
Nov 24 09:58:03 compute-1 nova_compute[230010]:     <apic/>
Nov 24 09:58:03 compute-1 nova_compute[230010]:     <vmcoreinfo/>
Nov 24 09:58:03 compute-1 nova_compute[230010]:   </features>
Nov 24 09:58:03 compute-1 nova_compute[230010]:   <clock offset="utc">
Nov 24 09:58:03 compute-1 nova_compute[230010]:     <timer name="pit" tickpolicy="delay"/>
Nov 24 09:58:03 compute-1 nova_compute[230010]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 24 09:58:03 compute-1 nova_compute[230010]:     <timer name="hpet" present="no"/>
Nov 24 09:58:03 compute-1 nova_compute[230010]:   </clock>
Nov 24 09:58:03 compute-1 nova_compute[230010]:   <cpu mode="host-model" match="exact">
Nov 24 09:58:03 compute-1 nova_compute[230010]:     <topology sockets="1" cores="1" threads="1"/>
Nov 24 09:58:03 compute-1 nova_compute[230010]:   </cpu>
Nov 24 09:58:03 compute-1 nova_compute[230010]:   <devices>
Nov 24 09:58:03 compute-1 nova_compute[230010]:     <disk type="network" device="disk">
Nov 24 09:58:03 compute-1 nova_compute[230010]:       <driver type="raw" cache="none"/>
Nov 24 09:58:03 compute-1 nova_compute[230010]:       <source protocol="rbd" name="vms/62465e3c-a372-4121-8a2e-5e10d1c3faf6_disk">
Nov 24 09:58:03 compute-1 nova_compute[230010]:         <host name="192.168.122.100" port="6789"/>
Nov 24 09:58:03 compute-1 nova_compute[230010]:         <host name="192.168.122.102" port="6789"/>
Nov 24 09:58:03 compute-1 nova_compute[230010]:         <host name="192.168.122.101" port="6789"/>
Nov 24 09:58:03 compute-1 nova_compute[230010]:       </source>
Nov 24 09:58:03 compute-1 nova_compute[230010]:       <auth username="openstack">
Nov 24 09:58:03 compute-1 nova_compute[230010]:         <secret type="ceph" uuid="84a084c3-61a7-5de7-8207-1f88efa59a64"/>
Nov 24 09:58:03 compute-1 nova_compute[230010]:       </auth>
Nov 24 09:58:03 compute-1 nova_compute[230010]:       <target dev="vda" bus="virtio"/>
Nov 24 09:58:03 compute-1 nova_compute[230010]:     </disk>
Nov 24 09:58:03 compute-1 nova_compute[230010]:     <disk type="network" device="cdrom">
Nov 24 09:58:03 compute-1 nova_compute[230010]:       <driver type="raw" cache="none"/>
Nov 24 09:58:03 compute-1 nova_compute[230010]:       <source protocol="rbd" name="vms/62465e3c-a372-4121-8a2e-5e10d1c3faf6_disk.config">
Nov 24 09:58:03 compute-1 nova_compute[230010]:         <host name="192.168.122.100" port="6789"/>
Nov 24 09:58:03 compute-1 nova_compute[230010]:         <host name="192.168.122.102" port="6789"/>
Nov 24 09:58:03 compute-1 nova_compute[230010]:         <host name="192.168.122.101" port="6789"/>
Nov 24 09:58:03 compute-1 nova_compute[230010]:       </source>
Nov 24 09:58:03 compute-1 nova_compute[230010]:       <auth username="openstack">
Nov 24 09:58:03 compute-1 nova_compute[230010]:         <secret type="ceph" uuid="84a084c3-61a7-5de7-8207-1f88efa59a64"/>
Nov 24 09:58:03 compute-1 nova_compute[230010]:       </auth>
Nov 24 09:58:03 compute-1 nova_compute[230010]:       <target dev="sda" bus="sata"/>
Nov 24 09:58:03 compute-1 nova_compute[230010]:     </disk>
Nov 24 09:58:03 compute-1 nova_compute[230010]:     <interface type="ethernet">
Nov 24 09:58:03 compute-1 nova_compute[230010]:       <mac address="fa:16:3e:99:a7:ce"/>
Nov 24 09:58:03 compute-1 nova_compute[230010]:       <model type="virtio"/>
Nov 24 09:58:03 compute-1 nova_compute[230010]:       <driver name="vhost" rx_queue_size="512"/>
Nov 24 09:58:03 compute-1 nova_compute[230010]:       <mtu size="1442"/>
Nov 24 09:58:03 compute-1 nova_compute[230010]:       <target dev="tapbf41c673-48"/>
Nov 24 09:58:03 compute-1 nova_compute[230010]:     </interface>
Nov 24 09:58:03 compute-1 nova_compute[230010]:     <serial type="pty">
Nov 24 09:58:03 compute-1 nova_compute[230010]:       <log file="/var/lib/nova/instances/62465e3c-a372-4121-8a2e-5e10d1c3faf6/console.log" append="off"/>
Nov 24 09:58:03 compute-1 nova_compute[230010]:     </serial>
Nov 24 09:58:03 compute-1 nova_compute[230010]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 24 09:58:03 compute-1 nova_compute[230010]:     <video>
Nov 24 09:58:03 compute-1 nova_compute[230010]:       <model type="virtio"/>
Nov 24 09:58:03 compute-1 nova_compute[230010]:     </video>
Nov 24 09:58:03 compute-1 nova_compute[230010]:     <input type="tablet" bus="usb"/>
Nov 24 09:58:03 compute-1 nova_compute[230010]:     <rng model="virtio">
Nov 24 09:58:03 compute-1 nova_compute[230010]:       <backend model="random">/dev/urandom</backend>
Nov 24 09:58:03 compute-1 nova_compute[230010]:     </rng>
Nov 24 09:58:03 compute-1 nova_compute[230010]:     <controller type="pci" model="pcie-root"/>
Nov 24 09:58:03 compute-1 nova_compute[230010]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 09:58:03 compute-1 nova_compute[230010]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 09:58:03 compute-1 nova_compute[230010]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 09:58:03 compute-1 nova_compute[230010]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 09:58:03 compute-1 nova_compute[230010]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 09:58:03 compute-1 nova_compute[230010]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 09:58:03 compute-1 nova_compute[230010]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 09:58:03 compute-1 nova_compute[230010]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 09:58:03 compute-1 nova_compute[230010]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 09:58:03 compute-1 nova_compute[230010]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 09:58:03 compute-1 nova_compute[230010]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 09:58:03 compute-1 nova_compute[230010]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 09:58:03 compute-1 nova_compute[230010]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 09:58:03 compute-1 nova_compute[230010]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 09:58:03 compute-1 nova_compute[230010]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 09:58:03 compute-1 nova_compute[230010]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 09:58:03 compute-1 nova_compute[230010]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 09:58:03 compute-1 nova_compute[230010]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 09:58:03 compute-1 nova_compute[230010]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 09:58:03 compute-1 nova_compute[230010]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 09:58:03 compute-1 nova_compute[230010]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 09:58:03 compute-1 nova_compute[230010]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 09:58:03 compute-1 nova_compute[230010]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 09:58:03 compute-1 nova_compute[230010]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 09:58:03 compute-1 nova_compute[230010]:     <controller type="usb" index="0"/>
Nov 24 09:58:03 compute-1 nova_compute[230010]:     <memballoon model="virtio">
Nov 24 09:58:03 compute-1 nova_compute[230010]:       <stats period="10"/>
Nov 24 09:58:03 compute-1 nova_compute[230010]:     </memballoon>
Nov 24 09:58:03 compute-1 nova_compute[230010]:   </devices>
Nov 24 09:58:03 compute-1 nova_compute[230010]: </domain>
Nov 24 09:58:03 compute-1 nova_compute[230010]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
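
Once _get_guest_xml returns, the driver hands that <domain> document to libvirt. A minimal libvirt-python sketch, assuming the XML above was saved to a local file; the connection URI and the define-then-start sequence are illustrative, not taken from the log:

    import libvirt

    guest_xml = open("instance-00000006.xml").read()  # the <domain> dump above
    conn = libvirt.open("qemu:///system")
    dom = conn.defineXML(guest_xml)   # persist the domain definition
    dom.create()                      # then start the guest
    print(dom.UUIDString())           # 62465e3c-a372-4121-8a2e-5e10d1c3faf6
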
Nov 24 09:58:03 compute-1 nova_compute[230010]: 2025-11-24 09:58:03.583 230014 DEBUG nova.compute.manager [None req-5b0369d5-0306-4be2-b5b8-b16b5d898411 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 62465e3c-a372-4121-8a2e-5e10d1c3faf6] Preparing to wait for external event network-vif-plugged-bf41c673-482b-42e3-ac98-475b716fa0e9 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 24 09:58:03 compute-1 nova_compute[230010]: 2025-11-24 09:58:03.583 230014 DEBUG oslo_concurrency.lockutils [None req-5b0369d5-0306-4be2-b5b8-b16b5d898411 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Acquiring lock "62465e3c-a372-4121-8a2e-5e10d1c3faf6-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 09:58:03 compute-1 nova_compute[230010]: 2025-11-24 09:58:03.583 230014 DEBUG oslo_concurrency.lockutils [None req-5b0369d5-0306-4be2-b5b8-b16b5d898411 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lock "62465e3c-a372-4121-8a2e-5e10d1c3faf6-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 09:58:03 compute-1 nova_compute[230010]: 2025-11-24 09:58:03.583 230014 DEBUG oslo_concurrency.lockutils [None req-5b0369d5-0306-4be2-b5b8-b16b5d898411 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lock "62465e3c-a372-4121-8a2e-5e10d1c3faf6-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 09:58:03 compute-1 nova_compute[230010]: 2025-11-24 09:58:03.584 230014 DEBUG nova.virt.libvirt.vif [None req-5b0369d5-0306-4be2-b5b8-b16b5d898411 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-24T09:57:56Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1468987490',display_name='tempest-TestNetworkBasicOps-server-1468987490',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1468987490',id=6,image_ref='6ef14bdf-4f04-4400-8040-4409d9d5271e',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJeLeNtgMDECCA396nl5/z6TsnAPH3kX9ECWzaWuLvptXvMaJaj/WlHKUFyFRR30PurvGrDvNN2g1Ij1pTu0Su2H0Am0Z6Y5TdOjAAQXOQr2HISwvDDFzD9t0aaelZEbhw==',key_name='tempest-TestNetworkBasicOps-1307688110',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='94d069fc040647d5a6e54894eec915fe',ramdisk_id='',reservation_id='r-sy1yuug7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6ef14bdf-4f04-4400-8040-4409d9d5271e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1844071378',owner_user_name='tempest-TestNetworkBasicOps-1844071378-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-24T09:57:57Z,user_data=None,user_id='43f79ff3105e4372a3c095e8057d4f1f',uuid=62465e3c-a372-4121-8a2e-5e10d1c3faf6,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "bf41c673-482b-42e3-ac98-475b716fa0e9", "address": "fa:16:3e:99:a7:ce", "network": {"id": "81f18750-9169-4587-b6ca-88a2bbc58afc", "bridge": "br-int", "label": "tempest-network-smoke--1543163911", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94d069fc040647d5a6e54894eec915fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf41c673-48", "ovs_interfaceid": "bf41c673-482b-42e3-ac98-475b716fa0e9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 24 09:58:03 compute-1 nova_compute[230010]: 2025-11-24 09:58:03.584 230014 DEBUG nova.network.os_vif_util [None req-5b0369d5-0306-4be2-b5b8-b16b5d898411 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Converting VIF {"id": "bf41c673-482b-42e3-ac98-475b716fa0e9", "address": "fa:16:3e:99:a7:ce", "network": {"id": "81f18750-9169-4587-b6ca-88a2bbc58afc", "bridge": "br-int", "label": "tempest-network-smoke--1543163911", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94d069fc040647d5a6e54894eec915fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf41c673-48", "ovs_interfaceid": "bf41c673-482b-42e3-ac98-475b716fa0e9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 24 09:58:03 compute-1 nova_compute[230010]: 2025-11-24 09:58:03.585 230014 DEBUG nova.network.os_vif_util [None req-5b0369d5-0306-4be2-b5b8-b16b5d898411 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:99:a7:ce,bridge_name='br-int',has_traffic_filtering=True,id=bf41c673-482b-42e3-ac98-475b716fa0e9,network=Network(81f18750-9169-4587-b6ca-88a2bbc58afc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbf41c673-48') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 24 09:58:03 compute-1 nova_compute[230010]: 2025-11-24 09:58:03.585 230014 DEBUG os_vif [None req-5b0369d5-0306-4be2-b5b8-b16b5d898411 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:99:a7:ce,bridge_name='br-int',has_traffic_filtering=True,id=bf41c673-482b-42e3-ac98-475b716fa0e9,network=Network(81f18750-9169-4587-b6ca-88a2bbc58afc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbf41c673-48') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 24 09:58:03 compute-1 nova_compute[230010]: 2025-11-24 09:58:03.586 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:58:03 compute-1 nova_compute[230010]: 2025-11-24 09:58:03.586 230014 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 24 09:58:03 compute-1 nova_compute[230010]: 2025-11-24 09:58:03.586 230014 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 24 09:58:03 compute-1 nova_compute[230010]: 2025-11-24 09:58:03.588 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:58:03 compute-1 nova_compute[230010]: 2025-11-24 09:58:03.589 230014 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapbf41c673-48, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 24 09:58:03 compute-1 nova_compute[230010]: 2025-11-24 09:58:03.589 230014 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapbf41c673-48, col_values=(('external_ids', {'iface-id': 'bf41c673-482b-42e3-ac98-475b716fa0e9', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:99:a7:ce', 'vm-uuid': '62465e3c-a372-4121-8a2e-5e10d1c3faf6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 24 09:58:03 compute-1 nova_compute[230010]: 2025-11-24 09:58:03.590 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:58:03 compute-1 NetworkManager[48870]: <info>  [1763978283.5917] manager: (tapbf41c673-48): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/47)
Nov 24 09:58:03 compute-1 nova_compute[230010]: 2025-11-24 09:58:03.592 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 24 09:58:03 compute-1 nova_compute[230010]: 2025-11-24 09:58:03.596 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:58:03 compute-1 nova_compute[230010]: 2025-11-24 09:58:03.597 230014 INFO os_vif [None req-5b0369d5-0306-4be2-b5b8-b16b5d898411 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:99:a7:ce,bridge_name='br-int',has_traffic_filtering=True,id=bf41c673-482b-42e3-ac98-475b716fa0e9,network=Network(81f18750-9169-4587-b6ca-88a2bbc58afc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbf41c673-48')
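
The AddBridgeCommand/AddPortCommand/DbSetCommand lines above are one ovsdbapp transaction against the local OVSDB. A condensed sketch of the same two-command port plug, written against ovsdbapp's idl API; the database socket path is an assumption, while the bridge, port, and external_ids come from the log:

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server(
        "unix:/run/openvswitch/db.sock", "Open_vSwitch")
    api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=5))
    with api.transaction(check_error=True) as txn:
        # AddPortCommand(bridge=br-int, port=tapbf41c673-48, may_exist=True)
        txn.add(api.add_port("br-int", "tapbf41c673-48", may_exist=True))
        # DbSetCommand(table=Interface, record=tapbf41c673-48, external_ids=...)
        txn.add(api.db_set(
            "Interface", "tapbf41c673-48",
            ("external_ids", {"iface-id": "bf41c673-482b-42e3-ac98-475b716fa0e9",
                              "iface-status": "active",
                              "attached-mac": "fa:16:3e:99:a7:ce",
                              "vm-uuid": "62465e3c-a372-4121-8a2e-5e10d1c3faf6"})))
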
Nov 24 09:58:03 compute-1 nova_compute[230010]: 2025-11-24 09:58:03.651 230014 DEBUG nova.virt.libvirt.driver [None req-5b0369d5-0306-4be2-b5b8-b16b5d898411 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 24 09:58:03 compute-1 nova_compute[230010]: 2025-11-24 09:58:03.652 230014 DEBUG nova.virt.libvirt.driver [None req-5b0369d5-0306-4be2-b5b8-b16b5d898411 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 24 09:58:03 compute-1 nova_compute[230010]: 2025-11-24 09:58:03.652 230014 DEBUG nova.virt.libvirt.driver [None req-5b0369d5-0306-4be2-b5b8-b16b5d898411 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] No VIF found with MAC fa:16:3e:99:a7:ce, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 24 09:58:03 compute-1 nova_compute[230010]: 2025-11-24 09:58:03.653 230014 INFO nova.virt.libvirt.driver [None req-5b0369d5-0306-4be2-b5b8-b16b5d898411 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 62465e3c-a372-4121-8a2e-5e10d1c3faf6] Using config drive
Nov 24 09:58:03 compute-1 ceph-mon[80009]: pgmap v936: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 121 KiB/s rd, 1.8 MiB/s wr, 198 op/s
Nov 24 09:58:03 compute-1 ceph-mon[80009]: from='client.? 192.168.122.101:0/4014279327' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 24 09:58:03 compute-1 ceph-mon[80009]: from='client.? 192.168.122.101:0/625293257' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 24 09:58:03 compute-1 nova_compute[230010]: 2025-11-24 09:58:03.680 230014 DEBUG nova.storage.rbd_utils [None req-5b0369d5-0306-4be2-b5b8-b16b5d898411 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] rbd image 62465e3c-a372-4121-8a2e-5e10d1c3faf6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 24 09:58:03 compute-1 nova_compute[230010]: 2025-11-24 09:58:03.896 230014 DEBUG nova.network.neutron [req-d08486ba-a9f0-42e9-96e5-b658263b9f30 req-ea013464-d611-4964-b567-14d3c19cb126 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 62465e3c-a372-4121-8a2e-5e10d1c3faf6] Updated VIF entry in instance network info cache for port bf41c673-482b-42e3-ac98-475b716fa0e9. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 24 09:58:03 compute-1 nova_compute[230010]: 2025-11-24 09:58:03.897 230014 DEBUG nova.network.neutron [req-d08486ba-a9f0-42e9-96e5-b658263b9f30 req-ea013464-d611-4964-b567-14d3c19cb126 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 62465e3c-a372-4121-8a2e-5e10d1c3faf6] Updating instance_info_cache with network_info: [{"id": "bf41c673-482b-42e3-ac98-475b716fa0e9", "address": "fa:16:3e:99:a7:ce", "network": {"id": "81f18750-9169-4587-b6ca-88a2bbc58afc", "bridge": "br-int", "label": "tempest-network-smoke--1543163911", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94d069fc040647d5a6e54894eec915fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf41c673-48", "ovs_interfaceid": "bf41c673-482b-42e3-ac98-475b716fa0e9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 24 09:58:03 compute-1 nova_compute[230010]: 2025-11-24 09:58:03.912 230014 DEBUG oslo_concurrency.lockutils [req-d08486ba-a9f0-42e9-96e5-b658263b9f30 req-ea013464-d611-4964-b567-14d3c19cb126 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Releasing lock "refresh_cache-62465e3c-a372-4121-8a2e-5e10d1c3faf6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 24 09:58:03 compute-1 nova_compute[230010]: 2025-11-24 09:58:03.987 230014 INFO nova.virt.libvirt.driver [None req-5b0369d5-0306-4be2-b5b8-b16b5d898411 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 62465e3c-a372-4121-8a2e-5e10d1c3faf6] Creating config drive at /var/lib/nova/instances/62465e3c-a372-4121-8a2e-5e10d1c3faf6/disk.config
Nov 24 09:58:03 compute-1 nova_compute[230010]: 2025-11-24 09:58:03.993 230014 DEBUG oslo_concurrency.processutils [None req-5b0369d5-0306-4be2-b5b8-b16b5d898411 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/62465e3c-a372-4121-8a2e-5e10d1c3faf6/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpn6pcbvzx execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 09:58:04 compute-1 nova_compute[230010]: 2025-11-24 09:58:04.127 230014 DEBUG oslo_concurrency.processutils [None req-5b0369d5-0306-4be2-b5b8-b16b5d898411 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/62465e3c-a372-4121-8a2e-5e10d1c3faf6/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpn6pcbvzx" returned: 0 in 0.133s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 09:58:04 compute-1 nova_compute[230010]: 2025-11-24 09:58:04.159 230014 DEBUG nova.storage.rbd_utils [None req-5b0369d5-0306-4be2-b5b8-b16b5d898411 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] rbd image 62465e3c-a372-4121-8a2e-5e10d1c3faf6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 24 09:58:04 compute-1 nova_compute[230010]: 2025-11-24 09:58:04.164 230014 DEBUG oslo_concurrency.processutils [None req-5b0369d5-0306-4be2-b5b8-b16b5d898411 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/62465e3c-a372-4121-8a2e-5e10d1c3faf6/disk.config 62465e3c-a372-4121-8a2e-5e10d1c3faf6_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 09:58:04 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:58:04 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:58:04 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:58:04.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:58:04 compute-1 nova_compute[230010]: 2025-11-24 09:58:04.340 230014 DEBUG oslo_concurrency.processutils [None req-5b0369d5-0306-4be2-b5b8-b16b5d898411 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/62465e3c-a372-4121-8a2e-5e10d1c3faf6/disk.config 62465e3c-a372-4121-8a2e-5e10d1c3faf6_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.176s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 09:58:04 compute-1 nova_compute[230010]: 2025-11-24 09:58:04.341 230014 INFO nova.virt.libvirt.driver [None req-5b0369d5-0306-4be2-b5b8-b16b5d898411 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 62465e3c-a372-4121-8a2e-5e10d1c3faf6] Deleting local config drive /var/lib/nova/instances/62465e3c-a372-4121-8a2e-5e10d1c3faf6/disk.config because it was imported into RBD.
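[annotation] The config drive was generated locally with mkisofs, imported into the Ceph "vms" pool as 62465e3c-a372-4121-8a2e-5e10d1c3faf6_disk.config, and the local copy deleted. A sketch of how the import could be confirmed, reusing the same cephx user and conf file the service passed on the command line above:

    # List config-drive images in the vms pool (run on the compute host):
    rbd ls --pool vms --id openstack --conf /etc/ceph/ceph.conf | grep disk.config
    # Inspect the image nova just imported:
    rbd info vms/62465e3c-a372-4121-8a2e-5e10d1c3faf6_disk.config --id openstack --conf /etc/ceph/ceph.conf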
Nov 24 09:58:04 compute-1 kernel: tapbf41c673-48: entered promiscuous mode
Nov 24 09:58:04 compute-1 NetworkManager[48870]: <info>  [1763978284.3822] manager: (tapbf41c673-48): new Tun device (/org/freedesktop/NetworkManager/Devices/48)
Nov 24 09:58:04 compute-1 ovn_controller[132966]: 2025-11-24T09:58:04Z|00066|binding|INFO|Claiming lport bf41c673-482b-42e3-ac98-475b716fa0e9 for this chassis.
Nov 24 09:58:04 compute-1 ovn_controller[132966]: 2025-11-24T09:58:04Z|00067|binding|INFO|bf41c673-482b-42e3-ac98-475b716fa0e9: Claiming fa:16:3e:99:a7:ce 10.100.0.8
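[annotation] ovn-controller has claimed logical port bf41c673-482b-42e3-ac98-475b716fa0e9 (fa:16:3e:99:a7:ce / 10.100.0.8) for this chassis. A sketch for inspecting the binding, assuming ovn-sbctl and ovs-vsctl are reachable from this node (e.g. inside the ovn_controller container):

    # Southbound binding for the lport named in the log above:
    ovn-sbctl find Port_Binding logical_port=bf41c673-482b-42e3-ac98-475b716fa0e9
    # Local OVS interface carrying that iface-id (should print the tap device):
    ovs-vsctl --bare --columns=name find Interface external_ids:iface-id=bf41c673-482b-42e3-ac98-475b716fa0e9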
Nov 24 09:58:04 compute-1 nova_compute[230010]: 2025-11-24 09:58:04.436 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:58:04 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:58:04 compute-1 nova_compute[230010]: 2025-11-24 09:58:04.442 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:58:04 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:58:04.451 142336 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:99:a7:ce 10.100.0.8'], port_security=['fa:16:3e:99:a7:ce 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '62465e3c-a372-4121-8a2e-5e10d1c3faf6', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-81f18750-9169-4587-b6ca-88a2bbc58afc', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '94d069fc040647d5a6e54894eec915fe', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'ebde3e26-b896-444f-b8ef-f2f39010ba47', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=42f28b30-955e-4ea5-b415-d62763a6e220, chassis=[<ovs.db.idl.Row object at 0x7f5c78678ac0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5c78678ac0>], logical_port=bf41c673-482b-42e3-ac98-475b716fa0e9) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 24 09:58:04 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:58:04.452 142336 INFO neutron.agent.ovn.metadata.agent [-] Port bf41c673-482b-42e3-ac98-475b716fa0e9 in datapath 81f18750-9169-4587-b6ca-88a2bbc58afc bound to our chassis
Nov 24 09:58:04 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:58:04.453 142336 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 81f18750-9169-4587-b6ca-88a2bbc58afc
Nov 24 09:58:04 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:58:04.465 234803 DEBUG oslo.privsep.daemon [-] privsep: reply[487e462d-dc3f-473b-91b9-e5580eacdad0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 09:58:04 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:58:04.466 142336 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap81f18750-91 in ovnmeta-81f18750-9169-4587-b6ca-88a2bbc58afc namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 24 09:58:04 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:58:04.468 234803 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap81f18750-90 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 24 09:58:04 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:58:04.468 234803 DEBUG oslo.privsep.daemon [-] privsep: reply[609a74b1-009a-41b5-8cc0-2d7d6c72985c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 09:58:04 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:58:04.469 234803 DEBUG oslo.privsep.daemon [-] privsep: reply[98e90ae9-e8f2-48c0-a394-91b4584b3700]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 09:58:04 compute-1 systemd-machined[193537]: New machine qemu-4-instance-00000006.
Nov 24 09:58:04 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:58:04.482 142476 DEBUG oslo.privsep.daemon [-] privsep: reply[8b4001d2-1a23-432d-8a9a-fb685f9cb416]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 09:58:04 compute-1 ovn_controller[132966]: 2025-11-24T09:58:04Z|00068|binding|INFO|Setting lport bf41c673-482b-42e3-ac98-475b716fa0e9 ovn-installed in OVS
Nov 24 09:58:04 compute-1 ovn_controller[132966]: 2025-11-24T09:58:04Z|00069|binding|INFO|Setting lport bf41c673-482b-42e3-ac98-475b716fa0e9 up in Southbound
Nov 24 09:58:04 compute-1 systemd[1]: Started Virtual Machine qemu-4-instance-00000006.
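[annotation] systemd-machined has registered the guest as machine qemu-4-instance-00000006 and its scope unit is running. A sketch for cross-checking it against libvirt (on an EDPM node, virsh may only be available inside the libvirt container):

    machinectl status qemu-4-instance-00000006
    virsh list --all    # should show instance-00000006 as running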
Nov 24 09:58:04 compute-1 nova_compute[230010]: 2025-11-24 09:58:04.501 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:58:04 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:58:04.507 234803 DEBUG oslo.privsep.daemon [-] privsep: reply[dfc58bdc-5c92-48d7-8d34-365bb9efb86c]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 09:58:04 compute-1 systemd-udevd[239423]: Network interface NamePolicy= disabled on kernel command line.
Nov 24 09:58:04 compute-1 NetworkManager[48870]: <info>  [1763978284.5264] device (tapbf41c673-48): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 24 09:58:04 compute-1 NetworkManager[48870]: <info>  [1763978284.5277] device (tapbf41c673-48): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 24 09:58:04 compute-1 podman[239402]: 2025-11-24 09:58:04.538345294 +0000 UTC m=+0.068067315 container health_status 16798e42088c21440960ac0ba4f86339f9edab5c078277a1dcf4fb01c220329c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=multipathd, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0)
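[annotation] podman is logging periodic health checks for the multipathd container (health_status=healthy, failing streak 0, probe /openstack/healthcheck per the config_data above). The same probe can be run on demand; a sketch:

    # Exits 0 when the container's configured healthcheck passes:
    sudo podman healthcheck run multipathd && echo 'multipathd: healthy'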
Nov 24 09:58:04 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:58:04.538 234819 DEBUG oslo.privsep.daemon [-] privsep: reply[e815618a-94be-4d9a-865c-dc35136217a5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 09:58:04 compute-1 NetworkManager[48870]: <info>  [1763978284.5441] manager: (tap81f18750-90): new Veth device (/org/freedesktop/NetworkManager/Devices/49)
Nov 24 09:58:04 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:58:04.545 234803 DEBUG oslo.privsep.daemon [-] privsep: reply[1a38550e-f539-49e2-ad08-d4f17679f96e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 09:58:04 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:58:04.576 234819 DEBUG oslo.privsep.daemon [-] privsep: reply[92efd3df-26e2-4b77-96c7-a3c14b510e04]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 09:58:04 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:58:04.579 234819 DEBUG oslo.privsep.daemon [-] privsep: reply[5039a9d5-d26e-48c5-b339-789d203d9b8d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 09:58:04 compute-1 NetworkManager[48870]: <info>  [1763978284.5996] device (tap81f18750-90): carrier: link connected
Nov 24 09:58:04 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:58:04.605 234819 DEBUG oslo.privsep.daemon [-] privsep: reply[d7c6b9fd-5f16-4e54-bfb7-09701a2263b8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 09:58:04 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:58:04.620 234803 DEBUG oslo.privsep.daemon [-] privsep: reply[94038882-12a9-4f97-99c7-6d8da8f42dea]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap81f18750-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1f:24:db'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 24], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 422815, 'reachable_time': 18725, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 239456, 'error': None, 'target': 'ovnmeta-81f18750-9169-4587-b6ca-88a2bbc58afc', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 09:58:04 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:58:04.633 234803 DEBUG oslo.privsep.daemon [-] privsep: reply[f00552f6-1557-449e-bd18-340712e4ab53]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe1f:24db'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 422815, 'tstamp': 422815}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 239457, 'error': None, 'target': 'ovnmeta-81f18750-9169-4587-b6ca-88a2bbc58afc', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 09:58:04 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:58:04.647 234803 DEBUG oslo.privsep.daemon [-] privsep: reply[93726689-2844-472d-95ee-b26495b13a99]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap81f18750-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1f:24:db'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 24], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 422815, 'reachable_time': 18725, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 239458, 'error': None, 'target': 'ovnmeta-81f18750-9169-4587-b6ca-88a2bbc58afc', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 09:58:04 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:58:04.670 234803 DEBUG oslo.privsep.daemon [-] privsep: reply[483b5e09-0540-4318-b200-c11bc847f1bb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 09:58:04 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:58:04.728 234803 DEBUG oslo.privsep.daemon [-] privsep: reply[01d26195-b418-46ea-b93f-fe2d3813f9be]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 09:58:04 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:58:04.729 142336 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap81f18750-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 24 09:58:04 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:58:04.730 142336 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 24 09:58:04 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:58:04.730 142336 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap81f18750-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 24 09:58:04 compute-1 kernel: tap81f18750-90: entered promiscuous mode
Nov 24 09:58:04 compute-1 NetworkManager[48870]: <info>  [1763978284.7328] manager: (tap81f18750-90): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/50)
Nov 24 09:58:04 compute-1 nova_compute[230010]: 2025-11-24 09:58:04.732 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:58:04 compute-1 nova_compute[230010]: 2025-11-24 09:58:04.734 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:58:04 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:58:04.737 142336 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap81f18750-90, col_values=(('external_ids', {'iface-id': '51ab5aa5-77bf-4bb7-993e-d15c7b4540ff'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 24 09:58:04 compute-1 ovn_controller[132966]: 2025-11-24T09:58:04Z|00070|binding|INFO|Releasing lport 51ab5aa5-77bf-4bb7-993e-d15c7b4540ff from this chassis (sb_readonly=0)
Nov 24 09:58:04 compute-1 nova_compute[230010]: 2025-11-24 09:58:04.738 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:58:04 compute-1 nova_compute[230010]: 2025-11-24 09:58:04.763 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:58:04 compute-1 nova_compute[230010]: 2025-11-24 09:58:04.767 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:58:04 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:58:04.767 142336 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/81f18750-9169-4587-b6ca-88a2bbc58afc.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/81f18750-9169-4587-b6ca-88a2bbc58afc.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 24 09:58:04 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:58:04.768 234803 DEBUG oslo.privsep.daemon [-] privsep: reply[a1b74c9a-3f22-4456-ac9c-605f898b2304]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 09:58:04 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:58:04.769 142336 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 24 09:58:04 compute-1 ovn_metadata_agent[142331]: global
Nov 24 09:58:04 compute-1 ovn_metadata_agent[142331]:     log         /dev/log local0 debug
Nov 24 09:58:04 compute-1 ovn_metadata_agent[142331]:     log-tag     haproxy-metadata-proxy-81f18750-9169-4587-b6ca-88a2bbc58afc
Nov 24 09:58:04 compute-1 ovn_metadata_agent[142331]:     user        root
Nov 24 09:58:04 compute-1 ovn_metadata_agent[142331]:     group       root
Nov 24 09:58:04 compute-1 ovn_metadata_agent[142331]:     maxconn     1024
Nov 24 09:58:04 compute-1 ovn_metadata_agent[142331]:     pidfile     /var/lib/neutron/external/pids/81f18750-9169-4587-b6ca-88a2bbc58afc.pid.haproxy
Nov 24 09:58:04 compute-1 ovn_metadata_agent[142331]:     daemon
Nov 24 09:58:04 compute-1 ovn_metadata_agent[142331]: 
Nov 24 09:58:04 compute-1 ovn_metadata_agent[142331]: defaults
Nov 24 09:58:04 compute-1 ovn_metadata_agent[142331]:     log global
Nov 24 09:58:04 compute-1 ovn_metadata_agent[142331]:     mode http
Nov 24 09:58:04 compute-1 ovn_metadata_agent[142331]:     option httplog
Nov 24 09:58:04 compute-1 ovn_metadata_agent[142331]:     option dontlognull
Nov 24 09:58:04 compute-1 ovn_metadata_agent[142331]:     option http-server-close
Nov 24 09:58:04 compute-1 ovn_metadata_agent[142331]:     option forwardfor
Nov 24 09:58:04 compute-1 ovn_metadata_agent[142331]:     retries                 3
Nov 24 09:58:04 compute-1 ovn_metadata_agent[142331]:     timeout http-request    30s
Nov 24 09:58:04 compute-1 ovn_metadata_agent[142331]:     timeout connect         30s
Nov 24 09:58:04 compute-1 ovn_metadata_agent[142331]:     timeout client          32s
Nov 24 09:58:04 compute-1 ovn_metadata_agent[142331]:     timeout server          32s
Nov 24 09:58:04 compute-1 ovn_metadata_agent[142331]:     timeout http-keep-alive 30s
Nov 24 09:58:04 compute-1 ovn_metadata_agent[142331]: 
Nov 24 09:58:04 compute-1 ovn_metadata_agent[142331]: 
Nov 24 09:58:04 compute-1 ovn_metadata_agent[142331]: listen listener
Nov 24 09:58:04 compute-1 ovn_metadata_agent[142331]:     bind 169.254.169.254:80
Nov 24 09:58:04 compute-1 ovn_metadata_agent[142331]:     server metadata /var/lib/neutron/metadata_proxy
Nov 24 09:58:04 compute-1 ovn_metadata_agent[142331]:     http-request add-header X-OVN-Network-ID 81f18750-9169-4587-b6ca-88a2bbc58afc
Nov 24 09:58:04 compute-1 ovn_metadata_agent[142331]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 24 09:58:04 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:58:04.770 142336 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-81f18750-9169-4587-b6ca-88a2bbc58afc', 'env', 'PROCESS_TAG=haproxy-81f18750-9169-4587-b6ca-88a2bbc58afc', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/81f18750-9169-4587-b6ca-88a2bbc58afc.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
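[annotation] The rendered configuration above makes haproxy listen on 169.254.169.254:80 inside the ovnmeta-81f18750-9169-4587-b6ca-88a2bbc58afc namespace, forwarding requests to the Unix socket /var/lib/neutron/metadata_proxy and adding the X-OVN-Network-ID header on the way. A sketch for probing it from the host (the exact HTTP status depends on whether the requesting IP maps to a known Neutron port):

    sudo ip netns list | grep ovnmeta
    sudo ip netns exec ovnmeta-81f18750-9169-4587-b6ca-88a2bbc58afc \
        curl -s -o /dev/null -w '%{http_code}\n' http://169.254.169.254/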
Nov 24 09:58:04 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:58:04 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:58:04 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:58:04.840 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:58:05 compute-1 nova_compute[230010]: 2025-11-24 09:58:05.068 230014 DEBUG nova.virt.driver [None req-7781f489-90a3-45f2-98f2-bf7ccb5c1239 - - - - - -] Emitting event <LifecycleEvent: 1763978285.067941, 62465e3c-a372-4121-8a2e-5e10d1c3faf6 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 24 09:58:05 compute-1 nova_compute[230010]: 2025-11-24 09:58:05.069 230014 INFO nova.compute.manager [None req-7781f489-90a3-45f2-98f2-bf7ccb5c1239 - - - - - -] [instance: 62465e3c-a372-4121-8a2e-5e10d1c3faf6] VM Started (Lifecycle Event)
Nov 24 09:58:05 compute-1 nova_compute[230010]: 2025-11-24 09:58:05.095 230014 DEBUG nova.compute.manager [None req-7781f489-90a3-45f2-98f2-bf7ccb5c1239 - - - - - -] [instance: 62465e3c-a372-4121-8a2e-5e10d1c3faf6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 24 09:58:05 compute-1 nova_compute[230010]: 2025-11-24 09:58:05.099 230014 DEBUG nova.virt.driver [None req-7781f489-90a3-45f2-98f2-bf7ccb5c1239 - - - - - -] Emitting event <LifecycleEvent: 1763978285.068107, 62465e3c-a372-4121-8a2e-5e10d1c3faf6 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 24 09:58:05 compute-1 nova_compute[230010]: 2025-11-24 09:58:05.099 230014 INFO nova.compute.manager [None req-7781f489-90a3-45f2-98f2-bf7ccb5c1239 - - - - - -] [instance: 62465e3c-a372-4121-8a2e-5e10d1c3faf6] VM Paused (Lifecycle Event)
Nov 24 09:58:05 compute-1 nova_compute[230010]: 2025-11-24 09:58:05.114 230014 DEBUG nova.compute.manager [None req-7781f489-90a3-45f2-98f2-bf7ccb5c1239 - - - - - -] [instance: 62465e3c-a372-4121-8a2e-5e10d1c3faf6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 24 09:58:05 compute-1 nova_compute[230010]: 2025-11-24 09:58:05.117 230014 DEBUG nova.compute.manager [None req-7781f489-90a3-45f2-98f2-bf7ccb5c1239 - - - - - -] [instance: 62465e3c-a372-4121-8a2e-5e10d1c3faf6] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 24 09:58:05 compute-1 podman[239532]: 2025-11-24 09:58:05.137486691 +0000 UTC m=+0.050309831 container create 312d1c6778ee57afc3309ab922725b04a981e2b6ccef78fd9ebe4f51f074714e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-81f18750-9169-4587-b6ca-88a2bbc58afc, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118)
Nov 24 09:58:05 compute-1 nova_compute[230010]: 2025-11-24 09:58:05.139 230014 INFO nova.compute.manager [None req-7781f489-90a3-45f2-98f2-bf7ccb5c1239 - - - - - -] [instance: 62465e3c-a372-4121-8a2e-5e10d1c3faf6] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 24 09:58:05 compute-1 systemd[1]: Started libpod-conmon-312d1c6778ee57afc3309ab922725b04a981e2b6ccef78fd9ebe4f51f074714e.scope.
Nov 24 09:58:05 compute-1 podman[239532]: 2025-11-24 09:58:05.111222619 +0000 UTC m=+0.024045779 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 24 09:58:05 compute-1 systemd[1]: Started libcrun container.
Nov 24 09:58:05 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8077583c93206f4b50fb98a5f2ccb3fea2a970b30dff429250e8ff4a1f0a34dc/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 24 09:58:05 compute-1 podman[239532]: 2025-11-24 09:58:05.230301591 +0000 UTC m=+0.143124751 container init 312d1c6778ee57afc3309ab922725b04a981e2b6ccef78fd9ebe4f51f074714e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-81f18750-9169-4587-b6ca-88a2bbc58afc, tcib_managed=true, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 24 09:58:05 compute-1 podman[239532]: 2025-11-24 09:58:05.235362164 +0000 UTC m=+0.148185304 container start 312d1c6778ee57afc3309ab922725b04a981e2b6ccef78fd9ebe4f51f074714e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-81f18750-9169-4587-b6ca-88a2bbc58afc, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118)
Nov 24 09:58:05 compute-1 neutron-haproxy-ovnmeta-81f18750-9169-4587-b6ca-88a2bbc58afc[239548]: [NOTICE]   (239552) : New worker (239554) forked
Nov 24 09:58:05 compute-1 neutron-haproxy-ovnmeta-81f18750-9169-4587-b6ca-88a2bbc58afc[239548]: [NOTICE]   (239552) : Loading success.
Nov 24 09:58:05 compute-1 nova_compute[230010]: 2025-11-24 09:58:05.514 230014 DEBUG nova.compute.manager [req-1c718bb5-4cb2-4be2-9fe3-45fbcf486c3f req-d58c3083-3677-4362-b737-56f9f296dd16 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 62465e3c-a372-4121-8a2e-5e10d1c3faf6] Received event network-vif-plugged-bf41c673-482b-42e3-ac98-475b716fa0e9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 24 09:58:05 compute-1 nova_compute[230010]: 2025-11-24 09:58:05.515 230014 DEBUG oslo_concurrency.lockutils [req-1c718bb5-4cb2-4be2-9fe3-45fbcf486c3f req-d58c3083-3677-4362-b737-56f9f296dd16 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Acquiring lock "62465e3c-a372-4121-8a2e-5e10d1c3faf6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 09:58:05 compute-1 nova_compute[230010]: 2025-11-24 09:58:05.515 230014 DEBUG oslo_concurrency.lockutils [req-1c718bb5-4cb2-4be2-9fe3-45fbcf486c3f req-d58c3083-3677-4362-b737-56f9f296dd16 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Lock "62465e3c-a372-4121-8a2e-5e10d1c3faf6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 09:58:05 compute-1 nova_compute[230010]: 2025-11-24 09:58:05.515 230014 DEBUG oslo_concurrency.lockutils [req-1c718bb5-4cb2-4be2-9fe3-45fbcf486c3f req-d58c3083-3677-4362-b737-56f9f296dd16 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Lock "62465e3c-a372-4121-8a2e-5e10d1c3faf6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 09:58:05 compute-1 nova_compute[230010]: 2025-11-24 09:58:05.515 230014 DEBUG nova.compute.manager [req-1c718bb5-4cb2-4be2-9fe3-45fbcf486c3f req-d58c3083-3677-4362-b737-56f9f296dd16 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 62465e3c-a372-4121-8a2e-5e10d1c3faf6] Processing event network-vif-plugged-bf41c673-482b-42e3-ac98-475b716fa0e9 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 24 09:58:05 compute-1 nova_compute[230010]: 2025-11-24 09:58:05.516 230014 DEBUG nova.compute.manager [None req-5b0369d5-0306-4be2-b5b8-b16b5d898411 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 62465e3c-a372-4121-8a2e-5e10d1c3faf6] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 24 09:58:05 compute-1 nova_compute[230010]: 2025-11-24 09:58:05.520 230014 DEBUG nova.virt.driver [None req-7781f489-90a3-45f2-98f2-bf7ccb5c1239 - - - - - -] Emitting event <LifecycleEvent: 1763978285.519798, 62465e3c-a372-4121-8a2e-5e10d1c3faf6 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 24 09:58:05 compute-1 nova_compute[230010]: 2025-11-24 09:58:05.520 230014 INFO nova.compute.manager [None req-7781f489-90a3-45f2-98f2-bf7ccb5c1239 - - - - - -] [instance: 62465e3c-a372-4121-8a2e-5e10d1c3faf6] VM Resumed (Lifecycle Event)
Nov 24 09:58:05 compute-1 nova_compute[230010]: 2025-11-24 09:58:05.522 230014 DEBUG nova.virt.libvirt.driver [None req-5b0369d5-0306-4be2-b5b8-b16b5d898411 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 62465e3c-a372-4121-8a2e-5e10d1c3faf6] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 24 09:58:05 compute-1 nova_compute[230010]: 2025-11-24 09:58:05.526 230014 INFO nova.virt.libvirt.driver [-] [instance: 62465e3c-a372-4121-8a2e-5e10d1c3faf6] Instance spawned successfully.
Nov 24 09:58:05 compute-1 nova_compute[230010]: 2025-11-24 09:58:05.526 230014 DEBUG nova.virt.libvirt.driver [None req-5b0369d5-0306-4be2-b5b8-b16b5d898411 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 62465e3c-a372-4121-8a2e-5e10d1c3faf6] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 24 09:58:05 compute-1 nova_compute[230010]: 2025-11-24 09:58:05.541 230014 DEBUG nova.compute.manager [None req-7781f489-90a3-45f2-98f2-bf7ccb5c1239 - - - - - -] [instance: 62465e3c-a372-4121-8a2e-5e10d1c3faf6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 24 09:58:05 compute-1 nova_compute[230010]: 2025-11-24 09:58:05.546 230014 DEBUG nova.compute.manager [None req-7781f489-90a3-45f2-98f2-bf7ccb5c1239 - - - - - -] [instance: 62465e3c-a372-4121-8a2e-5e10d1c3faf6] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 24 09:58:05 compute-1 nova_compute[230010]: 2025-11-24 09:58:05.548 230014 DEBUG nova.virt.libvirt.driver [None req-5b0369d5-0306-4be2-b5b8-b16b5d898411 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 62465e3c-a372-4121-8a2e-5e10d1c3faf6] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 24 09:58:05 compute-1 nova_compute[230010]: 2025-11-24 09:58:05.549 230014 DEBUG nova.virt.libvirt.driver [None req-5b0369d5-0306-4be2-b5b8-b16b5d898411 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 62465e3c-a372-4121-8a2e-5e10d1c3faf6] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 24 09:58:05 compute-1 nova_compute[230010]: 2025-11-24 09:58:05.549 230014 DEBUG nova.virt.libvirt.driver [None req-5b0369d5-0306-4be2-b5b8-b16b5d898411 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 62465e3c-a372-4121-8a2e-5e10d1c3faf6] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 24 09:58:05 compute-1 nova_compute[230010]: 2025-11-24 09:58:05.550 230014 DEBUG nova.virt.libvirt.driver [None req-5b0369d5-0306-4be2-b5b8-b16b5d898411 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 62465e3c-a372-4121-8a2e-5e10d1c3faf6] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 24 09:58:05 compute-1 nova_compute[230010]: 2025-11-24 09:58:05.550 230014 DEBUG nova.virt.libvirt.driver [None req-5b0369d5-0306-4be2-b5b8-b16b5d898411 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 62465e3c-a372-4121-8a2e-5e10d1c3faf6] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 24 09:58:05 compute-1 nova_compute[230010]: 2025-11-24 09:58:05.550 230014 DEBUG nova.virt.libvirt.driver [None req-5b0369d5-0306-4be2-b5b8-b16b5d898411 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 62465e3c-a372-4121-8a2e-5e10d1c3faf6] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 24 09:58:05 compute-1 nova_compute[230010]: 2025-11-24 09:58:05.571 230014 INFO nova.compute.manager [None req-7781f489-90a3-45f2-98f2-bf7ccb5c1239 - - - - - -] [instance: 62465e3c-a372-4121-8a2e-5e10d1c3faf6] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 24 09:58:05 compute-1 nova_compute[230010]: 2025-11-24 09:58:05.597 230014 INFO nova.compute.manager [None req-5b0369d5-0306-4be2-b5b8-b16b5d898411 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 62465e3c-a372-4121-8a2e-5e10d1c3faf6] Took 7.53 seconds to spawn the instance on the hypervisor.
Nov 24 09:58:05 compute-1 nova_compute[230010]: 2025-11-24 09:58:05.597 230014 DEBUG nova.compute.manager [None req-5b0369d5-0306-4be2-b5b8-b16b5d898411 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 62465e3c-a372-4121-8a2e-5e10d1c3faf6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 24 09:58:05 compute-1 nova_compute[230010]: 2025-11-24 09:58:05.651 230014 INFO nova.compute.manager [None req-5b0369d5-0306-4be2-b5b8-b16b5d898411 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 62465e3c-a372-4121-8a2e-5e10d1c3faf6] Took 8.42 seconds to build instance.
Nov 24 09:58:05 compute-1 nova_compute[230010]: 2025-11-24 09:58:05.664 230014 DEBUG oslo_concurrency.lockutils [None req-5b0369d5-0306-4be2-b5b8-b16b5d898411 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lock "62465e3c-a372-4121-8a2e-5e10d1c3faf6" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.480s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
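[annotation] The build is complete: 7.53 s to spawn on the hypervisor, 8.42 s for the whole build, and the per-instance lock released after 8.480 s. From any host with OpenStack client credentials loaded, a sketch for confirming the result:

    openstack server show 62465e3c-a372-4121-8a2e-5e10d1c3faf6 -f value -c status -c 'OS-EXT-STS:power_state'
    # expected output (roughly): ACTIVE / Running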
Nov 24 09:58:05 compute-1 nova_compute[230010]: 2025-11-24 09:58:05.947 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:58:05 compute-1 ceph-mon[80009]: pgmap v937: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 120 KiB/s rd, 1.8 MiB/s wr, 197 op/s
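[annotation] The mon's pgmap summaries (353 PGs active+clean, 60 GiB total capacity) indicate the cluster stayed healthy while the config drive and root disk were written. A sketch for checking directly, assuming an admin keyring is reachable (e.g. via cephadm):

    sudo cephadm shell -- ceph -s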
Nov 24 09:58:06 compute-1 nova_compute[230010]: 2025-11-24 09:58:06.163 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 09:58:06 compute-1 nova_compute[230010]: 2025-11-24 09:58:06.188 230014 DEBUG nova.compute.manager [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Triggering sync for uuid 62465e3c-a372-4121-8a2e-5e10d1c3faf6 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Nov 24 09:58:06 compute-1 nova_compute[230010]: 2025-11-24 09:58:06.189 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Acquiring lock "62465e3c-a372-4121-8a2e-5e10d1c3faf6" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 09:58:06 compute-1 nova_compute[230010]: 2025-11-24 09:58:06.190 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Lock "62465e3c-a372-4121-8a2e-5e10d1c3faf6" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 09:58:06 compute-1 nova_compute[230010]: 2025-11-24 09:58:06.230 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Lock "62465e3c-a372-4121-8a2e-5e10d1c3faf6" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.040s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 09:58:06 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:58:06 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:58:06 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:58:06.267 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:58:06 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:58:06 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:58:06 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:58:06.844 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:58:06 compute-1 sudo[239564]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 09:58:06 compute-1 sudo[239564]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:58:06 compute-1 sudo[239564]: pam_unix(sudo:session): session closed for user root
Nov 24 09:58:07 compute-1 nova_compute[230010]: 2025-11-24 09:58:07.590 230014 DEBUG nova.compute.manager [req-14bf6bcd-acf3-446a-bd36-987f7a7b2276 req-32ca1e3b-127e-4c3a-9e58-985ba621bdec 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 62465e3c-a372-4121-8a2e-5e10d1c3faf6] Received event network-vif-plugged-bf41c673-482b-42e3-ac98-475b716fa0e9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 24 09:58:07 compute-1 nova_compute[230010]: 2025-11-24 09:58:07.591 230014 DEBUG oslo_concurrency.lockutils [req-14bf6bcd-acf3-446a-bd36-987f7a7b2276 req-32ca1e3b-127e-4c3a-9e58-985ba621bdec 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Acquiring lock "62465e3c-a372-4121-8a2e-5e10d1c3faf6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 09:58:07 compute-1 nova_compute[230010]: 2025-11-24 09:58:07.592 230014 DEBUG oslo_concurrency.lockutils [req-14bf6bcd-acf3-446a-bd36-987f7a7b2276 req-32ca1e3b-127e-4c3a-9e58-985ba621bdec 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Lock "62465e3c-a372-4121-8a2e-5e10d1c3faf6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 09:58:07 compute-1 nova_compute[230010]: 2025-11-24 09:58:07.592 230014 DEBUG oslo_concurrency.lockutils [req-14bf6bcd-acf3-446a-bd36-987f7a7b2276 req-32ca1e3b-127e-4c3a-9e58-985ba621bdec 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Lock "62465e3c-a372-4121-8a2e-5e10d1c3faf6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 09:58:07 compute-1 nova_compute[230010]: 2025-11-24 09:58:07.592 230014 DEBUG nova.compute.manager [req-14bf6bcd-acf3-446a-bd36-987f7a7b2276 req-32ca1e3b-127e-4c3a-9e58-985ba621bdec 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 62465e3c-a372-4121-8a2e-5e10d1c3faf6] No waiting events found dispatching network-vif-plugged-bf41c673-482b-42e3-ac98-475b716fa0e9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 24 09:58:07 compute-1 nova_compute[230010]: 2025-11-24 09:58:07.593 230014 WARNING nova.compute.manager [req-14bf6bcd-acf3-446a-bd36-987f7a7b2276 req-32ca1e3b-127e-4c3a-9e58-985ba621bdec 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 62465e3c-a372-4121-8a2e-5e10d1c3faf6] Received unexpected event network-vif-plugged-bf41c673-482b-42e3-ac98-475b716fa0e9 for instance with vm_state active and task_state None.
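[annotation] This WARNING is typically harmless: a second network-vif-plugged for port bf41c673-482b-42e3-ac98-475b716fa0e9 arrived (compare the one consumed at 09:58:05.514 above) after the instance had already reached vm_state active with task_state None, so no waiter remained to dispatch it to, as the "No waiting events found" line just above shows. A sketch for reviewing the event pairing from the journal, assuming the syslog identifier matches the tag in this log:

    journalctl -t nova_compute --since '09:58:00' | grep -E 'network-vif-(plugged|changed)'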
Nov 24 09:58:08 compute-1 ceph-mon[80009]: pgmap v938: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1014 KiB/s rd, 1.8 MiB/s wr, 237 op/s
Nov 24 09:58:08 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:58:08 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:58:08 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:58:08.270 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:58:08 compute-1 nova_compute[230010]: 2025-11-24 09:58:08.590 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:58:08 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:58:08 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:58:08 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:58:08.846 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:58:09 compute-1 podman[239590]: 2025-11-24 09:58:09.413794416 +0000 UTC m=+0.134128439 container health_status c4a7fece2a8abbd773f184380ebf0443df5298e1c3e7a580495db5c46749acd2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, container_name=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Nov 24 09:58:09 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:58:10 compute-1 ceph-mon[80009]: pgmap v939: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 912 KiB/s rd, 1.8 MiB/s wr, 67 op/s
Nov 24 09:58:10 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:58:10 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:58:10 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:58:10.274 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:58:10 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:58:10 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:58:10 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:58:10.849 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:58:10 compute-1 nova_compute[230010]: 2025-11-24 09:58:10.950 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:58:11 compute-1 NetworkManager[48870]: <info>  [1763978291.4006] manager: (patch-br-int-to-provnet-aec09a4d-39ae-42d2-80ba-0cd5b53fed5d): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/51)
Nov 24 09:58:11 compute-1 nova_compute[230010]: 2025-11-24 09:58:11.399 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:58:11 compute-1 NetworkManager[48870]: <info>  [1763978291.4016] manager: (patch-provnet-aec09a4d-39ae-42d2-80ba-0cd5b53fed5d-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/52)
Nov 24 09:58:11 compute-1 ovn_controller[132966]: 2025-11-24T09:58:11Z|00071|binding|INFO|Releasing lport 51ab5aa5-77bf-4bb7-993e-d15c7b4540ff from this chassis (sb_readonly=0)
Nov 24 09:58:11 compute-1 ovn_controller[132966]: 2025-11-24T09:58:11Z|00072|binding|INFO|Releasing lport 51ab5aa5-77bf-4bb7-993e-d15c7b4540ff from this chassis (sb_readonly=0)
Nov 24 09:58:11 compute-1 nova_compute[230010]: 2025-11-24 09:58:11.433 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:58:11 compute-1 nova_compute[230010]: 2025-11-24 09:58:11.438 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:58:11 compute-1 nova_compute[230010]: 2025-11-24 09:58:11.815 230014 DEBUG nova.compute.manager [req-09924f42-779e-4c6e-a5fd-6e6bdfdd7a36 req-eb0c8d47-516b-4ae9-a3d4-ab4367a2fc76 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 62465e3c-a372-4121-8a2e-5e10d1c3faf6] Received event network-changed-bf41c673-482b-42e3-ac98-475b716fa0e9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 24 09:58:11 compute-1 nova_compute[230010]: 2025-11-24 09:58:11.815 230014 DEBUG nova.compute.manager [req-09924f42-779e-4c6e-a5fd-6e6bdfdd7a36 req-eb0c8d47-516b-4ae9-a3d4-ab4367a2fc76 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 62465e3c-a372-4121-8a2e-5e10d1c3faf6] Refreshing instance network info cache due to event network-changed-bf41c673-482b-42e3-ac98-475b716fa0e9. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 24 09:58:11 compute-1 nova_compute[230010]: 2025-11-24 09:58:11.816 230014 DEBUG oslo_concurrency.lockutils [req-09924f42-779e-4c6e-a5fd-6e6bdfdd7a36 req-eb0c8d47-516b-4ae9-a3d4-ab4367a2fc76 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Acquiring lock "refresh_cache-62465e3c-a372-4121-8a2e-5e10d1c3faf6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 24 09:58:11 compute-1 nova_compute[230010]: 2025-11-24 09:58:11.816 230014 DEBUG oslo_concurrency.lockutils [req-09924f42-779e-4c6e-a5fd-6e6bdfdd7a36 req-eb0c8d47-516b-4ae9-a3d4-ab4367a2fc76 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Acquired lock "refresh_cache-62465e3c-a372-4121-8a2e-5e10d1c3faf6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 24 09:58:11 compute-1 nova_compute[230010]: 2025-11-24 09:58:11.816 230014 DEBUG nova.network.neutron [req-09924f42-779e-4c6e-a5fd-6e6bdfdd7a36 req-eb0c8d47-516b-4ae9-a3d4-ab4367a2fc76 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 62465e3c-a372-4121-8a2e-5e10d1c3faf6] Refreshing network info cache for port bf41c673-482b-42e3-ac98-475b716fa0e9 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 24 09:58:12 compute-1 ceph-mon[80009]: pgmap v940: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 912 KiB/s rd, 1.8 MiB/s wr, 67 op/s
Nov 24 09:58:12 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:58:12 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:58:12 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:58:12.277 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:58:12 compute-1 nova_compute[230010]: 2025-11-24 09:58:12.829 230014 DEBUG nova.network.neutron [req-09924f42-779e-4c6e-a5fd-6e6bdfdd7a36 req-eb0c8d47-516b-4ae9-a3d4-ab4367a2fc76 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 62465e3c-a372-4121-8a2e-5e10d1c3faf6] Updated VIF entry in instance network info cache for port bf41c673-482b-42e3-ac98-475b716fa0e9. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 24 09:58:12 compute-1 nova_compute[230010]: 2025-11-24 09:58:12.829 230014 DEBUG nova.network.neutron [req-09924f42-779e-4c6e-a5fd-6e6bdfdd7a36 req-eb0c8d47-516b-4ae9-a3d4-ab4367a2fc76 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 62465e3c-a372-4121-8a2e-5e10d1c3faf6] Updating instance_info_cache with network_info: [{"id": "bf41c673-482b-42e3-ac98-475b716fa0e9", "address": "fa:16:3e:99:a7:ce", "network": {"id": "81f18750-9169-4587-b6ca-88a2bbc58afc", "bridge": "br-int", "label": "tempest-network-smoke--1543163911", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.202", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94d069fc040647d5a6e54894eec915fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf41c673-48", "ovs_interfaceid": "bf41c673-482b-42e3-ac98-475b716fa0e9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 24 09:58:12 compute-1 nova_compute[230010]: 2025-11-24 09:58:12.846 230014 DEBUG oslo_concurrency.lockutils [req-09924f42-779e-4c6e-a5fd-6e6bdfdd7a36 req-eb0c8d47-516b-4ae9-a3d4-ab4367a2fc76 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Releasing lock "refresh_cache-62465e3c-a372-4121-8a2e-5e10d1c3faf6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 24 09:58:12 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:58:12 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:58:12 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:58:12.851 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:58:13 compute-1 nova_compute[230010]: 2025-11-24 09:58:13.592 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:58:13 compute-1 nova_compute[230010]: 2025-11-24 09:58:13.791 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 09:58:14 compute-1 ceph-mon[80009]: pgmap v941: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 102 op/s
Nov 24 09:58:14 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:58:14 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:58:14 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:58:14.280 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:58:14 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:58:14 compute-1 nova_compute[230010]: 2025-11-24 09:58:14.765 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 09:58:14 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:58:14 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:58:14 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:58:14.854 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:58:15 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 09:58:15 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:58:15 compute-1 nova_compute[230010]: 2025-11-24 09:58:15.764 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 09:58:15 compute-1 nova_compute[230010]: 2025-11-24 09:58:15.765 230014 DEBUG nova.compute.manager [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 24 09:58:15 compute-1 sudo[239620]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 09:58:15 compute-1 sudo[239620]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:58:15 compute-1 sudo[239620]: pam_unix(sudo:session): session closed for user root
Nov 24 09:58:15 compute-1 sudo[239645]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 check-host
Nov 24 09:58:15 compute-1 sudo[239645]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:58:15 compute-1 nova_compute[230010]: 2025-11-24 09:58:15.953 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:58:16 compute-1 ceph-mon[80009]: pgmap v942: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Nov 24 09:58:16 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:58:16 compute-1 sudo[239645]: pam_unix(sudo:session): session closed for user root
Nov 24 09:58:16 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Nov 24 09:58:16 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Nov 24 09:58:16 compute-1 sudo[239689]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 09:58:16 compute-1 sudo[239689]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:58:16 compute-1 sudo[239689]: pam_unix(sudo:session): session closed for user root
Nov 24 09:58:16 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:58:16 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:58:16 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:58:16.282 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:58:16 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Nov 24 09:58:16 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Nov 24 09:58:16 compute-1 sudo[239714]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Nov 24 09:58:16 compute-1 sudo[239714]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:58:16 compute-1 sudo[239714]: pam_unix(sudo:session): session closed for user root
Nov 24 09:58:16 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:58:16 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:58:16 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:58:16.856 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:58:17 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:58:17 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:58:17 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:58:17 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:58:17 compute-1 podman[239770]: 2025-11-24 09:58:17.353762338 +0000 UTC m=+0.090029772 container health_status 6794d9d2f16f865643977633ea2bfec0506af6c4f15262c278b71a4c0754ea0b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 09:58:18 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:58:18 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 09:58:18 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:58:18.285 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 09:58:18 compute-1 ceph-mon[80009]: pgmap v943: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 76 op/s
Nov 24 09:58:18 compute-1 nova_compute[230010]: 2025-11-24 09:58:18.593 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:58:18 compute-1 nova_compute[230010]: 2025-11-24 09:58:18.764 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 09:58:18 compute-1 nova_compute[230010]: 2025-11-24 09:58:18.765 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 09:58:18 compute-1 nova_compute[230010]: 2025-11-24 09:58:18.765 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 09:58:18 compute-1 nova_compute[230010]: 2025-11-24 09:58:18.783 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 09:58:18 compute-1 nova_compute[230010]: 2025-11-24 09:58:18.783 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 09:58:18 compute-1 nova_compute[230010]: 2025-11-24 09:58:18.783 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 09:58:18 compute-1 nova_compute[230010]: 2025-11-24 09:58:18.784 230014 DEBUG nova.compute.resource_tracker [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 24 09:58:18 compute-1 nova_compute[230010]: 2025-11-24 09:58:18.784 230014 DEBUG oslo_concurrency.processutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 09:58:18 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:58:18 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:58:18 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:58:18.858 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:58:19 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Nov 24 09:58:19 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Nov 24 09:58:19 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 24 09:58:19 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 09:58:19 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Nov 24 09:58:19 compute-1 ceph-mon[80009]: log_channel(audit) log [INF] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 09:58:19 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Nov 24 09:58:19 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Nov 24 09:58:19 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Nov 24 09:58:19 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 09:58:19 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Nov 24 09:58:19 compute-1 ceph-mon[80009]: log_channel(audit) log [INF] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 09:58:19 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 24 09:58:19 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 09:58:19 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 24 09:58:19 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2462380472' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 09:58:19 compute-1 nova_compute[230010]: 2025-11-24 09:58:19.238 230014 DEBUG oslo_concurrency.processutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.454s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 09:58:19 compute-1 nova_compute[230010]: 2025-11-24 09:58:19.301 230014 DEBUG nova.virt.libvirt.driver [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] skipping disk for instance-00000006 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 24 09:58:19 compute-1 nova_compute[230010]: 2025-11-24 09:58:19.302 230014 DEBUG nova.virt.libvirt.driver [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] skipping disk for instance-00000006 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 24 09:58:19 compute-1 nova_compute[230010]: 2025-11-24 09:58:19.435 230014 WARNING nova.virt.libvirt.driver [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 24 09:58:19 compute-1 nova_compute[230010]: 2025-11-24 09:58:19.436 230014 DEBUG nova.compute.resource_tracker [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4791MB free_disk=59.96738052368164GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 24 09:58:19 compute-1 nova_compute[230010]: 2025-11-24 09:58:19.436 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 09:58:19 compute-1 nova_compute[230010]: 2025-11-24 09:58:19.436 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 09:58:19 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:58:19 compute-1 ovn_controller[132966]: 2025-11-24T09:58:19Z|00010|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:99:a7:ce 10.100.0.8
Nov 24 09:58:19 compute-1 ovn_controller[132966]: 2025-11-24T09:58:19Z|00011|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:99:a7:ce 10.100.0.8
Nov 24 09:58:19 compute-1 nova_compute[230010]: 2025-11-24 09:58:19.499 230014 DEBUG nova.compute.resource_tracker [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Instance 62465e3c-a372-4121-8a2e-5e10d1c3faf6 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 24 09:58:19 compute-1 nova_compute[230010]: 2025-11-24 09:58:19.500 230014 DEBUG nova.compute.resource_tracker [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 24 09:58:19 compute-1 nova_compute[230010]: 2025-11-24 09:58:19.500 230014 DEBUG nova.compute.resource_tracker [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 24 09:58:19 compute-1 nova_compute[230010]: 2025-11-24 09:58:19.531 230014 DEBUG oslo_concurrency.processutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 09:58:19 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 24 09:58:19 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3874872579' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 09:58:19 compute-1 nova_compute[230010]: 2025-11-24 09:58:19.976 230014 DEBUG oslo_concurrency.processutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.445s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 09:58:19 compute-1 nova_compute[230010]: 2025-11-24 09:58:19.981 230014 DEBUG nova.compute.provider_tree [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Inventory has not changed in ProviderTree for provider: 1b7b0f22-dba8-42a8-9de3-763c9152946e update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 24 09:58:19 compute-1 nova_compute[230010]: 2025-11-24 09:58:19.994 230014 DEBUG nova.scheduler.client.report [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Inventory has not changed for provider 1b7b0f22-dba8-42a8-9de3-763c9152946e based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 24 09:58:20 compute-1 nova_compute[230010]: 2025-11-24 09:58:20.010 230014 DEBUG nova.compute.resource_tracker [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 24 09:58:20 compute-1 nova_compute[230010]: 2025-11-24 09:58:20.010 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.574s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 09:58:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:58:20.059 142336 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 09:58:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:58:20.060 142336 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 09:58:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:58:20.061 142336 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 09:58:20 compute-1 ceph-mon[80009]: pgmap v944: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.0 MiB/s rd, 35 op/s
Nov 24 09:58:20 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:58:20 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:58:20 compute-1 ceph-mon[80009]: from='client.? 192.168.122.102:0/829144257' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 09:58:20 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 09:58:20 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 09:58:20 compute-1 ceph-mon[80009]: pgmap v945: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.2 MiB/s rd, 41 op/s
Nov 24 09:58:20 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:58:20 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:58:20 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 09:58:20 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 09:58:20 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 09:58:20 compute-1 ceph-mon[80009]: from='client.? 192.168.122.101:0/2462380472' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 09:58:20 compute-1 ceph-mon[80009]: from='client.? 192.168.122.102:0/1162875952' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 09:58:20 compute-1 ceph-mon[80009]: from='client.? 192.168.122.101:0/3874872579' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 09:58:20 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:58:20 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:58:20 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:58:20.289 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:58:20 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:58:20 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:58:20 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:58:20.861 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:58:20 compute-1 nova_compute[230010]: 2025-11-24 09:58:20.954 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:58:21 compute-1 nova_compute[230010]: 2025-11-24 09:58:21.010 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 09:58:21 compute-1 nova_compute[230010]: 2025-11-24 09:58:21.011 230014 DEBUG nova.compute.manager [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 24 09:58:21 compute-1 nova_compute[230010]: 2025-11-24 09:58:21.011 230014 DEBUG nova.compute.manager [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 24 09:58:21 compute-1 ceph-mon[80009]: from='client.? 192.168.122.100:0/2425649229' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 09:58:21 compute-1 ceph-mon[80009]: from='client.? 192.168.122.100:0/2246084627' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 09:58:21 compute-1 nova_compute[230010]: 2025-11-24 09:58:21.177 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Acquiring lock "refresh_cache-62465e3c-a372-4121-8a2e-5e10d1c3faf6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 24 09:58:21 compute-1 nova_compute[230010]: 2025-11-24 09:58:21.177 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Acquired lock "refresh_cache-62465e3c-a372-4121-8a2e-5e10d1c3faf6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 24 09:58:21 compute-1 nova_compute[230010]: 2025-11-24 09:58:21.178 230014 DEBUG nova.network.neutron [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] [instance: 62465e3c-a372-4121-8a2e-5e10d1c3faf6] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 24 09:58:21 compute-1 nova_compute[230010]: 2025-11-24 09:58:21.178 230014 DEBUG nova.objects.instance [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 62465e3c-a372-4121-8a2e-5e10d1c3faf6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 24 09:58:22 compute-1 ceph-mon[80009]: pgmap v946: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.2 MiB/s rd, 41 op/s
Nov 24 09:58:22 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:58:22 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:58:22 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:58:22.291 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:58:22 compute-1 nova_compute[230010]: 2025-11-24 09:58:22.476 230014 DEBUG nova.network.neutron [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] [instance: 62465e3c-a372-4121-8a2e-5e10d1c3faf6] Updating instance_info_cache with network_info: [{"id": "bf41c673-482b-42e3-ac98-475b716fa0e9", "address": "fa:16:3e:99:a7:ce", "network": {"id": "81f18750-9169-4587-b6ca-88a2bbc58afc", "bridge": "br-int", "label": "tempest-network-smoke--1543163911", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.202", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94d069fc040647d5a6e54894eec915fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf41c673-48", "ovs_interfaceid": "bf41c673-482b-42e3-ac98-475b716fa0e9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 24 09:58:22 compute-1 nova_compute[230010]: 2025-11-24 09:58:22.495 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Releasing lock "refresh_cache-62465e3c-a372-4121-8a2e-5e10d1c3faf6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 24 09:58:22 compute-1 nova_compute[230010]: 2025-11-24 09:58:22.496 230014 DEBUG nova.compute.manager [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] [instance: 62465e3c-a372-4121-8a2e-5e10d1c3faf6] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 24 09:58:22 compute-1 nova_compute[230010]: 2025-11-24 09:58:22.497 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 09:58:22 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:58:22 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:58:22 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:58:22.864 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:58:23 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 24 09:58:23 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 24 09:58:23 compute-1 sudo[239837]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 24 09:58:23 compute-1 sudo[239837]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:58:23 compute-1 sudo[239837]: pam_unix(sudo:session): session closed for user root
Nov 24 09:58:23 compute-1 nova_compute[230010]: 2025-11-24 09:58:23.594 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:58:24 compute-1 nova_compute[230010]: 2025-11-24 09:58:24.247 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 09:58:24 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:58:24 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:58:24 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:58:24.294 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:58:24 compute-1 ceph-mon[80009]: pgmap v947: 353 pgs: 353 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 385 KiB/s rd, 2.5 MiB/s wr, 77 op/s
Nov 24 09:58:24 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:58:24 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:58:24 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:58:24 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:58:24 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:58:24 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:58:24.866 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:58:25 compute-1 nova_compute[230010]: 2025-11-24 09:58:25.956 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:58:26 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:58:26 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:58:26 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:58:26.296 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:58:26 compute-1 ceph-mon[80009]: pgmap v948: 353 pgs: 353 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 385 KiB/s rd, 2.5 MiB/s wr, 77 op/s
Nov 24 09:58:26 compute-1 nova_compute[230010]: 2025-11-24 09:58:26.408 230014 INFO nova.compute.manager [None req-c1f4acb6-62ec-48ba-8539-f5a89e8c8956 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 62465e3c-a372-4121-8a2e-5e10d1c3faf6] Get console output
Nov 24 09:58:26 compute-1 nova_compute[230010]: 2025-11-24 09:58:26.413 236028 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Nov 24 09:58:26 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:58:26 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:58:26 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:58:26.870 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:58:27 compute-1 sudo[239864]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 09:58:27 compute-1 sudo[239864]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:58:27 compute-1 sudo[239864]: pam_unix(sudo:session): session closed for user root
Nov 24 09:58:28 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:58:28 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:58:28 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:58:28.299 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:58:28 compute-1 ceph-mon[80009]: pgmap v949: 353 pgs: 353 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 382 KiB/s rd, 2.5 MiB/s wr, 77 op/s
Nov 24 09:58:28 compute-1 nova_compute[230010]: 2025-11-24 09:58:28.595 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:58:28 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:58:28 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:58:28 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:58:28.871 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:58:29 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:58:30 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:58:30 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:58:30 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:58:30.303 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:58:30 compute-1 ceph-mon[80009]: pgmap v950: 353 pgs: 353 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 382 KiB/s rd, 2.5 MiB/s wr, 77 op/s
Nov 24 09:58:30 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 09:58:30 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:58:30 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:58:30 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:58:30 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:58:30.875 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:58:30 compute-1 nova_compute[230010]: 2025-11-24 09:58:30.994 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:58:31 compute-1 nova_compute[230010]: 2025-11-24 09:58:31.306 230014 DEBUG oslo_concurrency.lockutils [None req-340367ba-9f04-4816-8f4a-00e1efdea268 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Acquiring lock "interface-62465e3c-a372-4121-8a2e-5e10d1c3faf6-None" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 09:58:31 compute-1 nova_compute[230010]: 2025-11-24 09:58:31.307 230014 DEBUG oslo_concurrency.lockutils [None req-340367ba-9f04-4816-8f4a-00e1efdea268 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lock "interface-62465e3c-a372-4121-8a2e-5e10d1c3faf6-None" acquired by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 09:58:31 compute-1 nova_compute[230010]: 2025-11-24 09:58:31.307 230014 DEBUG nova.objects.instance [None req-340367ba-9f04-4816-8f4a-00e1efdea268 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lazy-loading 'flavor' on Instance uuid 62465e3c-a372-4121-8a2e-5e10d1c3faf6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 24 09:58:31 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:58:31 compute-1 nova_compute[230010]: 2025-11-24 09:58:31.622 230014 DEBUG nova.objects.instance [None req-340367ba-9f04-4816-8f4a-00e1efdea268 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lazy-loading 'pci_requests' on Instance uuid 62465e3c-a372-4121-8a2e-5e10d1c3faf6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 24 09:58:31 compute-1 nova_compute[230010]: 2025-11-24 09:58:31.634 230014 DEBUG nova.network.neutron [None req-340367ba-9f04-4816-8f4a-00e1efdea268 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 62465e3c-a372-4121-8a2e-5e10d1c3faf6] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 24 09:58:31 compute-1 nova_compute[230010]: 2025-11-24 09:58:31.754 230014 DEBUG nova.policy [None req-340367ba-9f04-4816-8f4a-00e1efdea268 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '43f79ff3105e4372a3c095e8057d4f1f', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '94d069fc040647d5a6e54894eec915fe', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 24 09:58:32 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:58:32 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:58:32 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:58:32.305 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:58:32 compute-1 nova_compute[230010]: 2025-11-24 09:58:32.374 230014 DEBUG nova.network.neutron [None req-340367ba-9f04-4816-8f4a-00e1efdea268 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 62465e3c-a372-4121-8a2e-5e10d1c3faf6] Successfully created port: 2ad41fbf-b749-4394-9d14-483c127ff44c _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 24 09:58:32 compute-1 ceph-mon[80009]: pgmap v951: 353 pgs: 353 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Nov 24 09:58:32 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:58:32 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:58:32 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:58:32.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:58:33 compute-1 nova_compute[230010]: 2025-11-24 09:58:33.568 230014 DEBUG nova.network.neutron [None req-340367ba-9f04-4816-8f4a-00e1efdea268 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 62465e3c-a372-4121-8a2e-5e10d1c3faf6] Successfully updated port: 2ad41fbf-b749-4394-9d14-483c127ff44c _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 24 09:58:33 compute-1 nova_compute[230010]: 2025-11-24 09:58:33.583 230014 DEBUG oslo_concurrency.lockutils [None req-340367ba-9f04-4816-8f4a-00e1efdea268 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Acquiring lock "refresh_cache-62465e3c-a372-4121-8a2e-5e10d1c3faf6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 24 09:58:33 compute-1 nova_compute[230010]: 2025-11-24 09:58:33.584 230014 DEBUG oslo_concurrency.lockutils [None req-340367ba-9f04-4816-8f4a-00e1efdea268 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Acquired lock "refresh_cache-62465e3c-a372-4121-8a2e-5e10d1c3faf6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 24 09:58:33 compute-1 nova_compute[230010]: 2025-11-24 09:58:33.584 230014 DEBUG nova.network.neutron [None req-340367ba-9f04-4816-8f4a-00e1efdea268 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 62465e3c-a372-4121-8a2e-5e10d1c3faf6] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 24 09:58:33 compute-1 nova_compute[230010]: 2025-11-24 09:58:33.599 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:58:33 compute-1 nova_compute[230010]: 2025-11-24 09:58:33.686 230014 DEBUG nova.compute.manager [req-a6a3c25e-5102-4c48-9698-a411f3473fbf req-9c51d3c0-2c92-4321-b9c5-04de45187497 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 62465e3c-a372-4121-8a2e-5e10d1c3faf6] Received event network-changed-2ad41fbf-b749-4394-9d14-483c127ff44c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 24 09:58:33 compute-1 nova_compute[230010]: 2025-11-24 09:58:33.687 230014 DEBUG nova.compute.manager [req-a6a3c25e-5102-4c48-9698-a411f3473fbf req-9c51d3c0-2c92-4321-b9c5-04de45187497 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 62465e3c-a372-4121-8a2e-5e10d1c3faf6] Refreshing instance network info cache due to event network-changed-2ad41fbf-b749-4394-9d14-483c127ff44c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 24 09:58:33 compute-1 nova_compute[230010]: 2025-11-24 09:58:33.687 230014 DEBUG oslo_concurrency.lockutils [req-a6a3c25e-5102-4c48-9698-a411f3473fbf req-9c51d3c0-2c92-4321-b9c5-04de45187497 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Acquiring lock "refresh_cache-62465e3c-a372-4121-8a2e-5e10d1c3faf6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 24 09:58:34 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:58:34 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 09:58:34 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:58:34.307 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 09:58:34 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:58:34 compute-1 ceph-mon[80009]: pgmap v952: 353 pgs: 353 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Nov 24 09:58:34 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:58:34 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:58:34 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:58:34.881 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:58:35 compute-1 podman[239894]: 2025-11-24 09:58:35.338863909 +0000 UTC m=+0.080771765 container health_status 16798e42088c21440960ac0ba4f86339f9edab5c078277a1dcf4fb01c220329c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=multipathd, org.label-schema.build-date=20251118)
Nov 24 09:58:35 compute-1 nova_compute[230010]: 2025-11-24 09:58:35.559 230014 DEBUG nova.network.neutron [None req-340367ba-9f04-4816-8f4a-00e1efdea268 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 62465e3c-a372-4121-8a2e-5e10d1c3faf6] Updating instance_info_cache with network_info: [{"id": "bf41c673-482b-42e3-ac98-475b716fa0e9", "address": "fa:16:3e:99:a7:ce", "network": {"id": "81f18750-9169-4587-b6ca-88a2bbc58afc", "bridge": "br-int", "label": "tempest-network-smoke--1543163911", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.202", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94d069fc040647d5a6e54894eec915fe", "mtu": null, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf41c673-48", "ovs_interfaceid": "bf41c673-482b-42e3-ac98-475b716fa0e9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "2ad41fbf-b749-4394-9d14-483c127ff44c", "address": "fa:16:3e:df:72:0f", "network": {"id": "cbb18554-4df6-4004-8b94-6d2a9b50722d", "bridge": "br-int", "label": "tempest-network-smoke--1864982359", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.24", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94d069fc040647d5a6e54894eec915fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2ad41fbf-b7", "ovs_interfaceid": "2ad41fbf-b749-4394-9d14-483c127ff44c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 24 09:58:35 compute-1 nova_compute[230010]: 2025-11-24 09:58:35.575 230014 DEBUG oslo_concurrency.lockutils [None req-340367ba-9f04-4816-8f4a-00e1efdea268 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Releasing lock "refresh_cache-62465e3c-a372-4121-8a2e-5e10d1c3faf6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 24 09:58:35 compute-1 nova_compute[230010]: 2025-11-24 09:58:35.576 230014 DEBUG oslo_concurrency.lockutils [req-a6a3c25e-5102-4c48-9698-a411f3473fbf req-9c51d3c0-2c92-4321-b9c5-04de45187497 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Acquired lock "refresh_cache-62465e3c-a372-4121-8a2e-5e10d1c3faf6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 24 09:58:35 compute-1 nova_compute[230010]: 2025-11-24 09:58:35.576 230014 DEBUG nova.network.neutron [req-a6a3c25e-5102-4c48-9698-a411f3473fbf req-9c51d3c0-2c92-4321-b9c5-04de45187497 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 62465e3c-a372-4121-8a2e-5e10d1c3faf6] Refreshing network info cache for port 2ad41fbf-b749-4394-9d14-483c127ff44c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 24 09:58:35 compute-1 nova_compute[230010]: 2025-11-24 09:58:35.579 230014 DEBUG nova.virt.libvirt.vif [None req-340367ba-9f04-4816-8f4a-00e1efdea268 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-24T09:57:56Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1468987490',display_name='tempest-TestNetworkBasicOps-server-1468987490',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1468987490',id=6,image_ref='6ef14bdf-4f04-4400-8040-4409d9d5271e',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJeLeNtgMDECCA396nl5/z6TsnAPH3kX9ECWzaWuLvptXvMaJaj/WlHKUFyFRR30PurvGrDvNN2g1Ij1pTu0Su2H0Am0Z6Y5TdOjAAQXOQr2HISwvDDFzD9t0aaelZEbhw==',key_name='tempest-TestNetworkBasicOps-1307688110',keypairs=<?>,launch_index=0,launched_at=2025-11-24T09:58:05Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='94d069fc040647d5a6e54894eec915fe',ramdisk_id='',reservation_id='r-sy1yuug7',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6ef14bdf-4f04-4400-8040-4409d9d5271e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1844071378',owner_user_name='tempest-TestNetworkBasicOps-1844071378-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-24T09:58:05Z,user_data=None,user_id='43f79ff3105e4372a3c095e8057d4f1f',uuid=62465e3c-a372-4121-8a2e-5e10d1c3faf6,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "2ad41fbf-b749-4394-9d14-483c127ff44c", "address": "fa:16:3e:df:72:0f", "network": {"id": "cbb18554-4df6-4004-8b94-6d2a9b50722d", "bridge": "br-int", "label": "tempest-network-smoke--1864982359", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.24", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94d069fc040647d5a6e54894eec915fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2ad41fbf-b7", "ovs_interfaceid": "2ad41fbf-b749-4394-9d14-483c127ff44c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 24 09:58:35 compute-1 nova_compute[230010]: 2025-11-24 09:58:35.579 230014 DEBUG nova.network.os_vif_util [None req-340367ba-9f04-4816-8f4a-00e1efdea268 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Converting VIF {"id": "2ad41fbf-b749-4394-9d14-483c127ff44c", "address": "fa:16:3e:df:72:0f", "network": {"id": "cbb18554-4df6-4004-8b94-6d2a9b50722d", "bridge": "br-int", "label": "tempest-network-smoke--1864982359", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.24", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94d069fc040647d5a6e54894eec915fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2ad41fbf-b7", "ovs_interfaceid": "2ad41fbf-b749-4394-9d14-483c127ff44c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 24 09:58:35 compute-1 nova_compute[230010]: 2025-11-24 09:58:35.580 230014 DEBUG nova.network.os_vif_util [None req-340367ba-9f04-4816-8f4a-00e1efdea268 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:df:72:0f,bridge_name='br-int',has_traffic_filtering=True,id=2ad41fbf-b749-4394-9d14-483c127ff44c,network=Network(cbb18554-4df6-4004-8b94-6d2a9b50722d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2ad41fbf-b7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 24 09:58:35 compute-1 nova_compute[230010]: 2025-11-24 09:58:35.580 230014 DEBUG os_vif [None req-340367ba-9f04-4816-8f4a-00e1efdea268 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:df:72:0f,bridge_name='br-int',has_traffic_filtering=True,id=2ad41fbf-b749-4394-9d14-483c127ff44c,network=Network(cbb18554-4df6-4004-8b94-6d2a9b50722d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2ad41fbf-b7') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 24 09:58:35 compute-1 nova_compute[230010]: 2025-11-24 09:58:35.581 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:58:35 compute-1 nova_compute[230010]: 2025-11-24 09:58:35.581 230014 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 24 09:58:35 compute-1 nova_compute[230010]: 2025-11-24 09:58:35.581 230014 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 24 09:58:35 compute-1 nova_compute[230010]: 2025-11-24 09:58:35.584 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:58:35 compute-1 nova_compute[230010]: 2025-11-24 09:58:35.584 230014 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2ad41fbf-b7, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 24 09:58:35 compute-1 nova_compute[230010]: 2025-11-24 09:58:35.585 230014 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap2ad41fbf-b7, col_values=(('external_ids', {'iface-id': '2ad41fbf-b749-4394-9d14-483c127ff44c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:df:72:0f', 'vm-uuid': '62465e3c-a372-4121-8a2e-5e10d1c3faf6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 24 09:58:35 compute-1 nova_compute[230010]: 2025-11-24 09:58:35.586 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:58:35 compute-1 NetworkManager[48870]: <info>  [1763978315.5875] manager: (tap2ad41fbf-b7): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/53)
Nov 24 09:58:35 compute-1 nova_compute[230010]: 2025-11-24 09:58:35.588 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 24 09:58:35 compute-1 nova_compute[230010]: 2025-11-24 09:58:35.594 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:58:35 compute-1 nova_compute[230010]: 2025-11-24 09:58:35.595 230014 INFO os_vif [None req-340367ba-9f04-4816-8f4a-00e1efdea268 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:df:72:0f,bridge_name='br-int',has_traffic_filtering=True,id=2ad41fbf-b749-4394-9d14-483c127ff44c,network=Network(cbb18554-4df6-4004-8b94-6d2a9b50722d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2ad41fbf-b7')
Nov 24 09:58:35 compute-1 nova_compute[230010]: 2025-11-24 09:58:35.596 230014 DEBUG nova.virt.libvirt.vif [None req-340367ba-9f04-4816-8f4a-00e1efdea268 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-24T09:57:56Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1468987490',display_name='tempest-TestNetworkBasicOps-server-1468987490',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1468987490',id=6,image_ref='6ef14bdf-4f04-4400-8040-4409d9d5271e',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJeLeNtgMDECCA396nl5/z6TsnAPH3kX9ECWzaWuLvptXvMaJaj/WlHKUFyFRR30PurvGrDvNN2g1Ij1pTu0Su2H0Am0Z6Y5TdOjAAQXOQr2HISwvDDFzD9t0aaelZEbhw==',key_name='tempest-TestNetworkBasicOps-1307688110',keypairs=<?>,launch_index=0,launched_at=2025-11-24T09:58:05Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='94d069fc040647d5a6e54894eec915fe',ramdisk_id='',reservation_id='r-sy1yuug7',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6ef14bdf-4f04-4400-8040-4409d9d5271e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1844071378',owner_user_name='tempest-TestNetworkBasicOps-1844071378-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-24T09:58:05Z,user_data=None,user_id='43f79ff3105e4372a3c095e8057d4f1f',uuid=62465e3c-a372-4121-8a2e-5e10d1c3faf6,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "2ad41fbf-b749-4394-9d14-483c127ff44c", "address": "fa:16:3e:df:72:0f", "network": {"id": "cbb18554-4df6-4004-8b94-6d2a9b50722d", "bridge": "br-int", "label": "tempest-network-smoke--1864982359", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.24", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94d069fc040647d5a6e54894eec915fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2ad41fbf-b7", "ovs_interfaceid": "2ad41fbf-b749-4394-9d14-483c127ff44c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 24 09:58:35 compute-1 nova_compute[230010]: 2025-11-24 09:58:35.596 230014 DEBUG nova.network.os_vif_util [None req-340367ba-9f04-4816-8f4a-00e1efdea268 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Converting VIF {"id": "2ad41fbf-b749-4394-9d14-483c127ff44c", "address": "fa:16:3e:df:72:0f", "network": {"id": "cbb18554-4df6-4004-8b94-6d2a9b50722d", "bridge": "br-int", "label": "tempest-network-smoke--1864982359", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.24", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94d069fc040647d5a6e54894eec915fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2ad41fbf-b7", "ovs_interfaceid": "2ad41fbf-b749-4394-9d14-483c127ff44c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 24 09:58:35 compute-1 nova_compute[230010]: 2025-11-24 09:58:35.597 230014 DEBUG nova.network.os_vif_util [None req-340367ba-9f04-4816-8f4a-00e1efdea268 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:df:72:0f,bridge_name='br-int',has_traffic_filtering=True,id=2ad41fbf-b749-4394-9d14-483c127ff44c,network=Network(cbb18554-4df6-4004-8b94-6d2a9b50722d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2ad41fbf-b7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 24 09:58:35 compute-1 nova_compute[230010]: 2025-11-24 09:58:35.599 230014 DEBUG nova.virt.libvirt.guest [None req-340367ba-9f04-4816-8f4a-00e1efdea268 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] attach device xml: <interface type="ethernet">
Nov 24 09:58:35 compute-1 nova_compute[230010]:   <mac address="fa:16:3e:df:72:0f"/>
Nov 24 09:58:35 compute-1 nova_compute[230010]:   <model type="virtio"/>
Nov 24 09:58:35 compute-1 nova_compute[230010]:   <driver name="vhost" rx_queue_size="512"/>
Nov 24 09:58:35 compute-1 nova_compute[230010]:   <mtu size="1442"/>
Nov 24 09:58:35 compute-1 nova_compute[230010]:   <target dev="tap2ad41fbf-b7"/>
Nov 24 09:58:35 compute-1 nova_compute[230010]: </interface>
Nov 24 09:58:35 compute-1 nova_compute[230010]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Nov 24 09:58:35 compute-1 NetworkManager[48870]: <info>  [1763978315.6087] manager: (tap2ad41fbf-b7): new Tun device (/org/freedesktop/NetworkManager/Devices/54)
Nov 24 09:58:35 compute-1 kernel: tap2ad41fbf-b7: entered promiscuous mode
Nov 24 09:58:35 compute-1 ovn_controller[132966]: 2025-11-24T09:58:35Z|00073|binding|INFO|Claiming lport 2ad41fbf-b749-4394-9d14-483c127ff44c for this chassis.
Nov 24 09:58:35 compute-1 ovn_controller[132966]: 2025-11-24T09:58:35Z|00074|binding|INFO|2ad41fbf-b749-4394-9d14-483c127ff44c: Claiming fa:16:3e:df:72:0f 10.100.0.24
Nov 24 09:58:35 compute-1 nova_compute[230010]: 2025-11-24 09:58:35.611 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:58:35 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:58:35.622 142336 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:df:72:0f 10.100.0.24'], port_security=['fa:16:3e:df:72:0f 10.100.0.24'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.24/28', 'neutron:device_id': '62465e3c-a372-4121-8a2e-5e10d1c3faf6', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-cbb18554-4df6-4004-8b94-6d2a9b50722d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '94d069fc040647d5a6e54894eec915fe', 'neutron:revision_number': '2', 'neutron:security_group_ids': '33c3a403-57a0-4b88-8817-f12f4bfc92ae', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=58766ea9-d6bf-4e11-9e8a-1652f6f7c4d5, chassis=[<ovs.db.idl.Row object at 0x7f5c78678ac0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5c78678ac0>], logical_port=2ad41fbf-b749-4394-9d14-483c127ff44c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 24 09:58:35 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:58:35.624 142336 INFO neutron.agent.ovn.metadata.agent [-] Port 2ad41fbf-b749-4394-9d14-483c127ff44c in datapath cbb18554-4df6-4004-8b94-6d2a9b50722d bound to our chassis
Nov 24 09:58:35 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:58:35.625 142336 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network cbb18554-4df6-4004-8b94-6d2a9b50722d
Nov 24 09:58:35 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:58:35.636 234803 DEBUG oslo.privsep.daemon [-] privsep: reply[44eea528-b777-4d6f-af78-f4f089df7926]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 09:58:35 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:58:35.636 142336 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapcbb18554-41 in ovnmeta-cbb18554-4df6-4004-8b94-6d2a9b50722d namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 24 09:58:35 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:58:35.638 234803 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapcbb18554-40 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 24 09:58:35 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:58:35.638 234803 DEBUG oslo.privsep.daemon [-] privsep: reply[a938a700-ce98-4f86-b8ab-3b53d40549b2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 09:58:35 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:58:35.639 234803 DEBUG oslo.privsep.daemon [-] privsep: reply[454744c3-9dee-4e17-bd96-afb59c96b927]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 09:58:35 compute-1 systemd-udevd[239921]: Network interface NamePolicy= disabled on kernel command line.
Nov 24 09:58:35 compute-1 NetworkManager[48870]: <info>  [1763978315.6519] device (tap2ad41fbf-b7): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 24 09:58:35 compute-1 NetworkManager[48870]: <info>  [1763978315.6531] device (tap2ad41fbf-b7): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 24 09:58:35 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:58:35.654 142476 DEBUG oslo.privsep.daemon [-] privsep: reply[a590b0c5-c724-462e-b73f-5ac3f8d2914d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 09:58:35 compute-1 nova_compute[230010]: 2025-11-24 09:58:35.656 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:58:35 compute-1 ovn_controller[132966]: 2025-11-24T09:58:35Z|00075|binding|INFO|Setting lport 2ad41fbf-b749-4394-9d14-483c127ff44c ovn-installed in OVS
Nov 24 09:58:35 compute-1 ovn_controller[132966]: 2025-11-24T09:58:35Z|00076|binding|INFO|Setting lport 2ad41fbf-b749-4394-9d14-483c127ff44c up in Southbound
Nov 24 09:58:35 compute-1 nova_compute[230010]: 2025-11-24 09:58:35.659 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:58:35 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:58:35.677 234803 DEBUG oslo.privsep.daemon [-] privsep: reply[ad55ad39-9b9d-4e46-9d7f-5c2fce5777cf]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 09:58:35 compute-1 nova_compute[230010]: 2025-11-24 09:58:35.699 230014 DEBUG nova.virt.libvirt.driver [None req-340367ba-9f04-4816-8f4a-00e1efdea268 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 24 09:58:35 compute-1 nova_compute[230010]: 2025-11-24 09:58:35.700 230014 DEBUG nova.virt.libvirt.driver [None req-340367ba-9f04-4816-8f4a-00e1efdea268 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 24 09:58:35 compute-1 nova_compute[230010]: 2025-11-24 09:58:35.700 230014 DEBUG nova.virt.libvirt.driver [None req-340367ba-9f04-4816-8f4a-00e1efdea268 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] No VIF found with MAC fa:16:3e:99:a7:ce, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 24 09:58:35 compute-1 nova_compute[230010]: 2025-11-24 09:58:35.700 230014 DEBUG nova.virt.libvirt.driver [None req-340367ba-9f04-4816-8f4a-00e1efdea268 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] No VIF found with MAC fa:16:3e:df:72:0f, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 24 09:58:35 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:58:35.701 234819 DEBUG oslo.privsep.daemon [-] privsep: reply[ac1e685d-0e5c-4656-a172-d1ea46740a4e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 09:58:35 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:58:35.706 234803 DEBUG oslo.privsep.daemon [-] privsep: reply[d3782be3-91e5-4c54-b019-bbc4dab256ed]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 09:58:35 compute-1 NetworkManager[48870]: <info>  [1763978315.7080] manager: (tapcbb18554-40): new Veth device (/org/freedesktop/NetworkManager/Devices/55)
Nov 24 09:58:35 compute-1 nova_compute[230010]: 2025-11-24 09:58:35.722 230014 DEBUG nova.virt.libvirt.guest [None req-340367ba-9f04-4816-8f4a-00e1efdea268 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 24 09:58:35 compute-1 nova_compute[230010]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 24 09:58:35 compute-1 nova_compute[230010]:   <nova:name>tempest-TestNetworkBasicOps-server-1468987490</nova:name>
Nov 24 09:58:35 compute-1 nova_compute[230010]:   <nova:creationTime>2025-11-24 09:58:35</nova:creationTime>
Nov 24 09:58:35 compute-1 nova_compute[230010]:   <nova:flavor name="m1.nano">
Nov 24 09:58:35 compute-1 nova_compute[230010]:     <nova:memory>128</nova:memory>
Nov 24 09:58:35 compute-1 nova_compute[230010]:     <nova:disk>1</nova:disk>
Nov 24 09:58:35 compute-1 nova_compute[230010]:     <nova:swap>0</nova:swap>
Nov 24 09:58:35 compute-1 nova_compute[230010]:     <nova:ephemeral>0</nova:ephemeral>
Nov 24 09:58:35 compute-1 nova_compute[230010]:     <nova:vcpus>1</nova:vcpus>
Nov 24 09:58:35 compute-1 nova_compute[230010]:   </nova:flavor>
Nov 24 09:58:35 compute-1 nova_compute[230010]:   <nova:owner>
Nov 24 09:58:35 compute-1 nova_compute[230010]:     <nova:user uuid="43f79ff3105e4372a3c095e8057d4f1f">tempest-TestNetworkBasicOps-1844071378-project-member</nova:user>
Nov 24 09:58:35 compute-1 nova_compute[230010]:     <nova:project uuid="94d069fc040647d5a6e54894eec915fe">tempest-TestNetworkBasicOps-1844071378</nova:project>
Nov 24 09:58:35 compute-1 nova_compute[230010]:   </nova:owner>
Nov 24 09:58:35 compute-1 nova_compute[230010]:   <nova:root type="image" uuid="6ef14bdf-4f04-4400-8040-4409d9d5271e"/>
Nov 24 09:58:35 compute-1 nova_compute[230010]:   <nova:ports>
Nov 24 09:58:35 compute-1 nova_compute[230010]:     <nova:port uuid="bf41c673-482b-42e3-ac98-475b716fa0e9">
Nov 24 09:58:35 compute-1 nova_compute[230010]:       <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Nov 24 09:58:35 compute-1 nova_compute[230010]:     </nova:port>
Nov 24 09:58:35 compute-1 nova_compute[230010]:     <nova:port uuid="2ad41fbf-b749-4394-9d14-483c127ff44c">
Nov 24 09:58:35 compute-1 nova_compute[230010]:       <nova:ip type="fixed" address="10.100.0.24" ipVersion="4"/>
Nov 24 09:58:35 compute-1 nova_compute[230010]:     </nova:port>
Nov 24 09:58:35 compute-1 nova_compute[230010]:   </nova:ports>
Nov 24 09:58:35 compute-1 nova_compute[230010]: </nova:instance>
Nov 24 09:58:35 compute-1 nova_compute[230010]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
Nov 24 09:58:35 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:58:35.733 234819 DEBUG oslo.privsep.daemon [-] privsep: reply[b4b8d3a6-a6fe-4fd6-83ef-23a5ca0c20b9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 09:58:35 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:58:35.736 234819 DEBUG oslo.privsep.daemon [-] privsep: reply[0f5c1966-beab-4400-a7b4-e1ccb63221ef]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 09:58:35 compute-1 nova_compute[230010]: 2025-11-24 09:58:35.744 230014 DEBUG oslo_concurrency.lockutils [None req-340367ba-9f04-4816-8f4a-00e1efdea268 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lock "interface-62465e3c-a372-4121-8a2e-5e10d1c3faf6-None" "released" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: held 4.437s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 09:58:35 compute-1 NetworkManager[48870]: <info>  [1763978315.7545] device (tapcbb18554-40): carrier: link connected
Nov 24 09:58:35 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:58:35.759 234819 DEBUG oslo.privsep.daemon [-] privsep: reply[33e8423c-d1c9-47bc-9f13-034411185da4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 09:58:35 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:58:35.781 234803 DEBUG oslo.privsep.daemon [-] privsep: reply[090d75c6-2dd5-491c-8368-a7b0a78ce59c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapcbb18554-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:03:d4:82'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 26], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 425930, 'reachable_time': 25772, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 239947, 'error': None, 'target': 'ovnmeta-cbb18554-4df6-4004-8b94-6d2a9b50722d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 09:58:35 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:58:35.795 234803 DEBUG oslo.privsep.daemon [-] privsep: reply[f29711b4-d554-43f3-87c8-2f5c63b9fa02]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe03:d482'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 425930, 'tstamp': 425930}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 239948, 'error': None, 'target': 'ovnmeta-cbb18554-4df6-4004-8b94-6d2a9b50722d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 09:58:35 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:58:35.810 234803 DEBUG oslo.privsep.daemon [-] privsep: reply[d0347af3-acc5-4183-b49e-738e8df6a4dc]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapcbb18554-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:03:d4:82'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 26], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 425930, 'reachable_time': 25772, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 239949, 'error': None, 'target': 'ovnmeta-cbb18554-4df6-4004-8b94-6d2a9b50722d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 09:58:35 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:58:35.838 234803 DEBUG oslo.privsep.daemon [-] privsep: reply[9e6089f1-3bb7-4dfd-a081-4af247d0ae34]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 09:58:35 compute-1 nova_compute[230010]: 2025-11-24 09:58:35.867 230014 DEBUG nova.compute.manager [req-ffca368c-851f-4104-8641-98b4647f35a2 req-0400abf1-0913-4831-8c8e-a51887ef1ba0 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 62465e3c-a372-4121-8a2e-5e10d1c3faf6] Received event network-vif-plugged-2ad41fbf-b749-4394-9d14-483c127ff44c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 24 09:58:35 compute-1 nova_compute[230010]: 2025-11-24 09:58:35.867 230014 DEBUG oslo_concurrency.lockutils [req-ffca368c-851f-4104-8641-98b4647f35a2 req-0400abf1-0913-4831-8c8e-a51887ef1ba0 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Acquiring lock "62465e3c-a372-4121-8a2e-5e10d1c3faf6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 09:58:35 compute-1 nova_compute[230010]: 2025-11-24 09:58:35.867 230014 DEBUG oslo_concurrency.lockutils [req-ffca368c-851f-4104-8641-98b4647f35a2 req-0400abf1-0913-4831-8c8e-a51887ef1ba0 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Lock "62465e3c-a372-4121-8a2e-5e10d1c3faf6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 09:58:35 compute-1 nova_compute[230010]: 2025-11-24 09:58:35.868 230014 DEBUG oslo_concurrency.lockutils [req-ffca368c-851f-4104-8641-98b4647f35a2 req-0400abf1-0913-4831-8c8e-a51887ef1ba0 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Lock "62465e3c-a372-4121-8a2e-5e10d1c3faf6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 09:58:35 compute-1 nova_compute[230010]: 2025-11-24 09:58:35.868 230014 DEBUG nova.compute.manager [req-ffca368c-851f-4104-8641-98b4647f35a2 req-0400abf1-0913-4831-8c8e-a51887ef1ba0 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 62465e3c-a372-4121-8a2e-5e10d1c3faf6] No waiting events found dispatching network-vif-plugged-2ad41fbf-b749-4394-9d14-483c127ff44c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 24 09:58:35 compute-1 nova_compute[230010]: 2025-11-24 09:58:35.868 230014 WARNING nova.compute.manager [req-ffca368c-851f-4104-8641-98b4647f35a2 req-0400abf1-0913-4831-8c8e-a51887ef1ba0 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 62465e3c-a372-4121-8a2e-5e10d1c3faf6] Received unexpected event network-vif-plugged-2ad41fbf-b749-4394-9d14-483c127ff44c for instance with vm_state active and task_state None.
Nov 24 09:58:35 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:58:35.906 234803 DEBUG oslo.privsep.daemon [-] privsep: reply[646460e5-41bd-4a98-8f25-9615db463c8f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 09:58:35 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:58:35.908 142336 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapcbb18554-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 24 09:58:35 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:58:35.908 142336 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 24 09:58:35 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:58:35.908 142336 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapcbb18554-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 24 09:58:35 compute-1 kernel: tapcbb18554-40: entered promiscuous mode
Nov 24 09:58:35 compute-1 NetworkManager[48870]: <info>  [1763978315.9114] manager: (tapcbb18554-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/56)
Nov 24 09:58:35 compute-1 nova_compute[230010]: 2025-11-24 09:58:35.911 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:58:35 compute-1 nova_compute[230010]: 2025-11-24 09:58:35.913 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:58:35 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:58:35.914 142336 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapcbb18554-40, col_values=(('external_ids', {'iface-id': '7477e0b1-7d3c-42ae-9333-aaa2b41f75a9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 24 09:58:35 compute-1 nova_compute[230010]: 2025-11-24 09:58:35.915 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:58:35 compute-1 ovn_controller[132966]: 2025-11-24T09:58:35Z|00077|binding|INFO|Releasing lport 7477e0b1-7d3c-42ae-9333-aaa2b41f75a9 from this chassis (sb_readonly=0)
Nov 24 09:58:35 compute-1 nova_compute[230010]: 2025-11-24 09:58:35.930 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:58:35 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:58:35.931 142336 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/cbb18554-4df6-4004-8b94-6d2a9b50722d.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/cbb18554-4df6-4004-8b94-6d2a9b50722d.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 24 09:58:35 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:58:35.932 234803 DEBUG oslo.privsep.daemon [-] privsep: reply[211e0df8-ddd9-4c9b-bda8-907289c1b345]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 09:58:35 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:58:35.933 142336 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 24 09:58:35 compute-1 ovn_metadata_agent[142331]: global
Nov 24 09:58:35 compute-1 ovn_metadata_agent[142331]:     log         /dev/log local0 debug
Nov 24 09:58:35 compute-1 ovn_metadata_agent[142331]:     log-tag     haproxy-metadata-proxy-cbb18554-4df6-4004-8b94-6d2a9b50722d
Nov 24 09:58:35 compute-1 ovn_metadata_agent[142331]:     user        root
Nov 24 09:58:35 compute-1 ovn_metadata_agent[142331]:     group       root
Nov 24 09:58:35 compute-1 ovn_metadata_agent[142331]:     maxconn     1024
Nov 24 09:58:35 compute-1 ovn_metadata_agent[142331]:     pidfile     /var/lib/neutron/external/pids/cbb18554-4df6-4004-8b94-6d2a9b50722d.pid.haproxy
Nov 24 09:58:35 compute-1 ovn_metadata_agent[142331]:     daemon
Nov 24 09:58:35 compute-1 ovn_metadata_agent[142331]: 
Nov 24 09:58:35 compute-1 ovn_metadata_agent[142331]: defaults
Nov 24 09:58:35 compute-1 ovn_metadata_agent[142331]:     log global
Nov 24 09:58:35 compute-1 ovn_metadata_agent[142331]:     mode http
Nov 24 09:58:35 compute-1 ovn_metadata_agent[142331]:     option httplog
Nov 24 09:58:35 compute-1 ovn_metadata_agent[142331]:     option dontlognull
Nov 24 09:58:35 compute-1 ovn_metadata_agent[142331]:     option http-server-close
Nov 24 09:58:35 compute-1 ovn_metadata_agent[142331]:     option forwardfor
Nov 24 09:58:35 compute-1 ovn_metadata_agent[142331]:     retries                 3
Nov 24 09:58:35 compute-1 ovn_metadata_agent[142331]:     timeout http-request    30s
Nov 24 09:58:35 compute-1 ovn_metadata_agent[142331]:     timeout connect         30s
Nov 24 09:58:35 compute-1 ovn_metadata_agent[142331]:     timeout client          32s
Nov 24 09:58:35 compute-1 ovn_metadata_agent[142331]:     timeout server          32s
Nov 24 09:58:35 compute-1 ovn_metadata_agent[142331]:     timeout http-keep-alive 30s
Nov 24 09:58:35 compute-1 ovn_metadata_agent[142331]: 
Nov 24 09:58:35 compute-1 ovn_metadata_agent[142331]: 
Nov 24 09:58:35 compute-1 ovn_metadata_agent[142331]: listen listener
Nov 24 09:58:35 compute-1 ovn_metadata_agent[142331]:     bind 169.254.169.254:80
Nov 24 09:58:35 compute-1 ovn_metadata_agent[142331]:     server metadata /var/lib/neutron/metadata_proxy
Nov 24 09:58:35 compute-1 ovn_metadata_agent[142331]:     http-request add-header X-OVN-Network-ID cbb18554-4df6-4004-8b94-6d2a9b50722d
Nov 24 09:58:35 compute-1 ovn_metadata_agent[142331]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 24 09:58:35 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:58:35.933 142336 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-cbb18554-4df6-4004-8b94-6d2a9b50722d', 'env', 'PROCESS_TAG=haproxy-cbb18554-4df6-4004-8b94-6d2a9b50722d', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/cbb18554-4df6-4004-8b94-6d2a9b50722d.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 24 09:58:35 compute-1 nova_compute[230010]: 2025-11-24 09:58:35.995 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:58:36 compute-1 nova_compute[230010]: 2025-11-24 09:58:36.050 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:58:36 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:58:36.053 142336 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=8, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '86:13:51', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '4e:f0:a8:6f:5e:1b'}, ipsec=False) old=SB_Global(nb_cfg=7) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 24 09:58:36 compute-1 podman[239982]: 2025-11-24 09:58:36.271554912 +0000 UTC m=+0.046453318 container create bafa73c024b9dc95537a27e37980736f3d07e3968334d315c04d285dcb5be79b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-cbb18554-4df6-4004-8b94-6d2a9b50722d, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 24 09:58:36 compute-1 systemd[1]: Started libpod-conmon-bafa73c024b9dc95537a27e37980736f3d07e3968334d315c04d285dcb5be79b.scope.
Nov 24 09:58:36 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:58:36 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:58:36 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:58:36.310 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:58:36 compute-1 systemd[1]: Started libcrun container.
Nov 24 09:58:36 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/447107754eda77794034edf91920a06d35d4d1b91593ad5057e2c61b459718a4/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 24 09:58:36 compute-1 podman[239982]: 2025-11-24 09:58:36.24774428 +0000 UTC m=+0.022642706 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 24 09:58:36 compute-1 podman[239982]: 2025-11-24 09:58:36.343702815 +0000 UTC m=+0.118601241 container init bafa73c024b9dc95537a27e37980736f3d07e3968334d315c04d285dcb5be79b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-cbb18554-4df6-4004-8b94-6d2a9b50722d, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 24 09:58:36 compute-1 podman[239982]: 2025-11-24 09:58:36.348914383 +0000 UTC m=+0.123812789 container start bafa73c024b9dc95537a27e37980736f3d07e3968334d315c04d285dcb5be79b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-cbb18554-4df6-4004-8b94-6d2a9b50722d, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 09:58:36 compute-1 neutron-haproxy-ovnmeta-cbb18554-4df6-4004-8b94-6d2a9b50722d[239998]: [NOTICE]   (240002) : New worker (240004) forked
Nov 24 09:58:36 compute-1 neutron-haproxy-ovnmeta-cbb18554-4df6-4004-8b94-6d2a9b50722d[239998]: [NOTICE]   (240002) : Loading success.
Nov 24 09:58:36 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:58:36.414 142336 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 24 09:58:36 compute-1 ceph-mon[80009]: pgmap v953: 353 pgs: 353 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 16 KiB/s wr, 1 op/s
Nov 24 09:58:36 compute-1 nova_compute[230010]: 2025-11-24 09:58:36.662 230014 DEBUG nova.network.neutron [req-a6a3c25e-5102-4c48-9698-a411f3473fbf req-9c51d3c0-2c92-4321-b9c5-04de45187497 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 62465e3c-a372-4121-8a2e-5e10d1c3faf6] Updated VIF entry in instance network info cache for port 2ad41fbf-b749-4394-9d14-483c127ff44c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 24 09:58:36 compute-1 nova_compute[230010]: 2025-11-24 09:58:36.663 230014 DEBUG nova.network.neutron [req-a6a3c25e-5102-4c48-9698-a411f3473fbf req-9c51d3c0-2c92-4321-b9c5-04de45187497 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 62465e3c-a372-4121-8a2e-5e10d1c3faf6] Updating instance_info_cache with network_info: [{"id": "bf41c673-482b-42e3-ac98-475b716fa0e9", "address": "fa:16:3e:99:a7:ce", "network": {"id": "81f18750-9169-4587-b6ca-88a2bbc58afc", "bridge": "br-int", "label": "tempest-network-smoke--1543163911", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.202", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94d069fc040647d5a6e54894eec915fe", "mtu": null, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf41c673-48", "ovs_interfaceid": "bf41c673-482b-42e3-ac98-475b716fa0e9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "2ad41fbf-b749-4394-9d14-483c127ff44c", "address": "fa:16:3e:df:72:0f", "network": {"id": "cbb18554-4df6-4004-8b94-6d2a9b50722d", "bridge": "br-int", "label": "tempest-network-smoke--1864982359", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.24", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94d069fc040647d5a6e54894eec915fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2ad41fbf-b7", "ovs_interfaceid": "2ad41fbf-b749-4394-9d14-483c127ff44c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 24 09:58:36 compute-1 nova_compute[230010]: 2025-11-24 09:58:36.676 230014 DEBUG oslo_concurrency.lockutils [req-a6a3c25e-5102-4c48-9698-a411f3473fbf req-9c51d3c0-2c92-4321-b9c5-04de45187497 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Releasing lock "refresh_cache-62465e3c-a372-4121-8a2e-5e10d1c3faf6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 24 09:58:36 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:58:36 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:58:36 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:58:36.883 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:58:37 compute-1 ovn_controller[132966]: 2025-11-24T09:58:37Z|00012|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:df:72:0f 10.100.0.24
Nov 24 09:58:37 compute-1 ovn_controller[132966]: 2025-11-24T09:58:37Z|00013|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:df:72:0f 10.100.0.24
Nov 24 09:58:37 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:58:37.416 142336 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=803b139a-7fca-4549-8597-645cf677225d, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '8'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 24 09:58:37 compute-1 ceph-mon[80009]: pgmap v954: 353 pgs: 353 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 2.8 KiB/s rd, 17 KiB/s wr, 2 op/s
Nov 24 09:58:37 compute-1 nova_compute[230010]: 2025-11-24 09:58:37.962 230014 DEBUG nova.compute.manager [req-96e7c529-9878-4501-9a97-c4dbe0066426 req-58153fdf-b208-4923-ba6d-c5121c6d1d04 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 62465e3c-a372-4121-8a2e-5e10d1c3faf6] Received event network-vif-plugged-2ad41fbf-b749-4394-9d14-483c127ff44c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 24 09:58:37 compute-1 nova_compute[230010]: 2025-11-24 09:58:37.963 230014 DEBUG oslo_concurrency.lockutils [req-96e7c529-9878-4501-9a97-c4dbe0066426 req-58153fdf-b208-4923-ba6d-c5121c6d1d04 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Acquiring lock "62465e3c-a372-4121-8a2e-5e10d1c3faf6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 09:58:37 compute-1 nova_compute[230010]: 2025-11-24 09:58:37.963 230014 DEBUG oslo_concurrency.lockutils [req-96e7c529-9878-4501-9a97-c4dbe0066426 req-58153fdf-b208-4923-ba6d-c5121c6d1d04 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Lock "62465e3c-a372-4121-8a2e-5e10d1c3faf6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 09:58:37 compute-1 nova_compute[230010]: 2025-11-24 09:58:37.964 230014 DEBUG oslo_concurrency.lockutils [req-96e7c529-9878-4501-9a97-c4dbe0066426 req-58153fdf-b208-4923-ba6d-c5121c6d1d04 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Lock "62465e3c-a372-4121-8a2e-5e10d1c3faf6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 09:58:37 compute-1 nova_compute[230010]: 2025-11-24 09:58:37.964 230014 DEBUG nova.compute.manager [req-96e7c529-9878-4501-9a97-c4dbe0066426 req-58153fdf-b208-4923-ba6d-c5121c6d1d04 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 62465e3c-a372-4121-8a2e-5e10d1c3faf6] No waiting events found dispatching network-vif-plugged-2ad41fbf-b749-4394-9d14-483c127ff44c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 24 09:58:37 compute-1 nova_compute[230010]: 2025-11-24 09:58:37.965 230014 WARNING nova.compute.manager [req-96e7c529-9878-4501-9a97-c4dbe0066426 req-58153fdf-b208-4923-ba6d-c5121c6d1d04 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 62465e3c-a372-4121-8a2e-5e10d1c3faf6] Received unexpected event network-vif-plugged-2ad41fbf-b749-4394-9d14-483c127ff44c for instance with vm_state active and task_state None.
Nov 24 09:58:38 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:58:38 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:58:38 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:58:38.312 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:58:38 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:58:38 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:58:38 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:58:38.886 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:58:39 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:58:40 compute-1 ceph-mon[80009]: pgmap v955: 353 pgs: 353 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 5.7 KiB/s wr, 1 op/s
Nov 24 09:58:40 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:58:40 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:58:40 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:58:40.314 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:58:40 compute-1 podman[240015]: 2025-11-24 09:58:40.367187759 +0000 UTC m=+0.098321224 container health_status c4a7fece2a8abbd773f184380ebf0443df5298e1c3e7a580495db5c46749acd2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible)
Nov 24 09:58:40 compute-1 nova_compute[230010]: 2025-11-24 09:58:40.586 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:58:40 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:58:40 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:58:40 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:58:40.889 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:58:40 compute-1 nova_compute[230010]: 2025-11-24 09:58:40.996 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:58:42 compute-1 ceph-mon[80009]: pgmap v956: 353 pgs: 353 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 5.7 KiB/s wr, 1 op/s
Nov 24 09:58:42 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:58:42 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:58:42 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:58:42.316 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:58:42 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:58:42 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:58:42 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:58:42.892 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:58:43 compute-1 ceph-mon[80009]: from='client.? 192.168.122.100:0/654886703' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 09:58:44 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:58:44 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:58:44 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:58:44.318 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:58:44 compute-1 ceph-mon[80009]: pgmap v957: 353 pgs: 353 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 3.8 KiB/s rd, 5.7 KiB/s wr, 2 op/s
Nov 24 09:58:44 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:58:44 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:58:44 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:58:44 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:58:44.895 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:58:45 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 09:58:45 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:58:45 compute-1 nova_compute[230010]: 2025-11-24 09:58:45.590 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:58:45 compute-1 nova_compute[230010]: 2025-11-24 09:58:45.998 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:58:46 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:58:46 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:58:46 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:58:46.320 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:58:46 compute-1 ceph-mon[80009]: pgmap v958: 353 pgs: 353 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 3.3 KiB/s rd, 1023 B/s wr, 1 op/s
Nov 24 09:58:46 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:58:46 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:58:46 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 09:58:46 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:58:46.898 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 09:58:47 compute-1 sudo[240045]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 09:58:47 compute-1 sudo[240045]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:58:47 compute-1 sudo[240045]: pam_unix(sudo:session): session closed for user root
Nov 24 09:58:48 compute-1 podman[240071]: 2025-11-24 09:58:48.320232562 +0000 UTC m=+0.059760613 container health_status 6794d9d2f16f865643977633ea2bfec0506af6c4f15262c278b71a4c0754ea0b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118)
Nov 24 09:58:48 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:58:48 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:58:48 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:58:48.323 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:58:48 compute-1 ceph-mon[80009]: pgmap v959: 353 pgs: 353 active+clean; 167 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 1.8 MiB/s wr, 29 op/s
Nov 24 09:58:48 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:58:48 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:58:48 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:58:48.901 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:58:49 compute-1 ceph-mon[80009]: from='client.? 192.168.122.100:0/1615056899' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 24 09:58:49 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:58:50 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:58:50 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:58:50 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:58:50.326 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:58:50 compute-1 ceph-mon[80009]: pgmap v960: 353 pgs: 353 active+clean; 167 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Nov 24 09:58:50 compute-1 ceph-mon[80009]: from='client.? 192.168.122.100:0/4212559010' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 24 09:58:50 compute-1 nova_compute[230010]: 2025-11-24 09:58:50.642 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:58:50 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:58:50 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:58:50 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:58:50.904 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:58:51 compute-1 nova_compute[230010]: 2025-11-24 09:58:51.000 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:58:52 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:58:52 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:58:52 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:58:52.329 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:58:52 compute-1 ceph-mon[80009]: pgmap v961: 353 pgs: 353 active+clean; 167 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Nov 24 09:58:52 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:58:52 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:58:52 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:58:52.909 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:58:54 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:58:54 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:58:54 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:58:54.332 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:58:54 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:58:54 compute-1 ceph-mon[80009]: pgmap v962: 353 pgs: 353 active+clean; 167 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 1.8 MiB/s wr, 39 op/s
Nov 24 09:58:54 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:58:54 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:58:54 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:58:54.911 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:58:55 compute-1 nova_compute[230010]: 2025-11-24 09:58:55.646 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:58:56 compute-1 nova_compute[230010]: 2025-11-24 09:58:56.003 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:58:56 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:58:56 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.003000071s ======
Nov 24 09:58:56 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:58:56.336 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.003000071s
Nov 24 09:58:56 compute-1 ceph-mon[80009]: pgmap v963: 353 pgs: 353 active+clean; 167 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 1.8 MiB/s wr, 38 op/s
Nov 24 09:58:56 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:58:56 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:58:56 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:58:56.914 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:58:57 compute-1 ceph-mon[80009]: pgmap v964: 353 pgs: 353 active+clean; 167 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 103 op/s
Nov 24 09:58:58 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:58:58 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:58:58 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:58:58.342 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:58:58 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:58:58 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:58:58 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:58:58.917 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:58:59 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:59:00 compute-1 ceph-mon[80009]: pgmap v965: 353 pgs: 353 active+clean; 167 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 16 KiB/s wr, 75 op/s
Nov 24 09:59:00 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:59:00 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:59:00 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:59:00.344 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:59:00 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 09:59:00 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:59:00 compute-1 nova_compute[230010]: 2025-11-24 09:59:00.649 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:59:00 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:59:00 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:59:00 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:59:00.921 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:59:01 compute-1 nova_compute[230010]: 2025-11-24 09:59:01.005 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:59:01 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:59:02 compute-1 ceph-mon[80009]: pgmap v966: 353 pgs: 353 active+clean; 167 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 16 KiB/s wr, 75 op/s
Nov 24 09:59:02 compute-1 ceph-mon[80009]: from='client.? 192.168.122.10:0/4135505958' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 24 09:59:02 compute-1 ceph-mon[80009]: from='client.? 192.168.122.10:0/4135505958' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 24 09:59:02 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:59:02 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:59:02 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:59:02.346 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:59:02 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:59:02 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:59:02 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:59:02.925 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:59:04 compute-1 ceph-mon[80009]: pgmap v967: 353 pgs: 353 active+clean; 167 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 19 KiB/s wr, 76 op/s
Nov 24 09:59:04 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:59:04 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:59:04 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:59:04.348 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:59:04 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:59:04 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:59:04 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:59:04 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:59:04.927 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:59:05 compute-1 nova_compute[230010]: 2025-11-24 09:59:05.689 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:59:06 compute-1 nova_compute[230010]: 2025-11-24 09:59:06.008 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:59:06 compute-1 ceph-mon[80009]: pgmap v968: 353 pgs: 353 active+clean; 167 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 3.4 KiB/s wr, 65 op/s
Nov 24 09:59:06 compute-1 podman[240099]: 2025-11-24 09:59:06.334876973 +0000 UTC m=+0.070706699 container health_status 16798e42088c21440960ac0ba4f86339f9edab5c078277a1dcf4fb01c220329c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 24 09:59:06 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:59:06 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:59:06 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:59:06.352 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:59:06 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:59:06 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:59:06 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:59:06.931 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:59:07 compute-1 sudo[240119]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 09:59:07 compute-1 sudo[240119]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:59:07 compute-1 sudo[240119]: pam_unix(sudo:session): session closed for user root
Nov 24 09:59:08 compute-1 ceph-mon[80009]: pgmap v969: 353 pgs: 353 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 129 op/s
Nov 24 09:59:08 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:59:08 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:59:08 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:59:08.355 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:59:08 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:59:08 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:59:08 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:59:08.933 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:59:09 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:59:10 compute-1 ceph-mon[80009]: pgmap v970: 353 pgs: 353 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 350 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Nov 24 09:59:10 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:59:10 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:59:10 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:59:10.357 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:59:10 compute-1 nova_compute[230010]: 2025-11-24 09:59:10.693 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:59:10 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:59:10 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:59:10 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:59:10.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:59:11 compute-1 nova_compute[230010]: 2025-11-24 09:59:11.011 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:59:11 compute-1 podman[240146]: 2025-11-24 09:59:11.327418179 +0000 UTC m=+0.071297575 container health_status c4a7fece2a8abbd773f184380ebf0443df5298e1c3e7a580495db5c46749acd2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible)
Nov 24 09:59:12 compute-1 ceph-mon[80009]: pgmap v971: 353 pgs: 353 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 350 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Nov 24 09:59:12 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:59:12 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:59:12 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:59:12.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:59:12 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:59:12 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:59:12 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:59:12.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:59:13 compute-1 ceph-mon[80009]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #55. Immutable memtables: 0.
Nov 24 09:59:13 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:59:13.670445) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 24 09:59:13 compute-1 ceph-mon[80009]: rocksdb: [db/flush_job.cc:856] [default] [JOB 31] Flushing memtable with next log file: 55
Nov 24 09:59:13 compute-1 ceph-mon[80009]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763978353670480, "job": 31, "event": "flush_started", "num_memtables": 1, "num_entries": 1711, "num_deletes": 255, "total_data_size": 4502186, "memory_usage": 4569200, "flush_reason": "Manual Compaction"}
Nov 24 09:59:13 compute-1 ceph-mon[80009]: rocksdb: [db/flush_job.cc:885] [default] [JOB 31] Level-0 flush table #56: started
Nov 24 09:59:13 compute-1 ceph-mon[80009]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763978353684976, "cf_name": "default", "job": 31, "event": "table_file_creation", "file_number": 56, "file_size": 2881479, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 28427, "largest_seqno": 30133, "table_properties": {"data_size": 2874413, "index_size": 4073, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1925, "raw_key_size": 14851, "raw_average_key_size": 19, "raw_value_size": 2860098, "raw_average_value_size": 3768, "num_data_blocks": 179, "num_entries": 759, "num_filter_entries": 759, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763978217, "oldest_key_time": 1763978217, "file_creation_time": 1763978353, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "299c38d0-06ca-4074-b462-97cee3c14bc3", "db_session_id": "IKBI0BILOO7CZC90TSBP", "orig_file_number": 56, "seqno_to_time_mapping": "N/A"}}
Nov 24 09:59:13 compute-1 ceph-mon[80009]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 31] Flush lasted 14591 microseconds, and 7200 cpu microseconds.
Nov 24 09:59:13 compute-1 ceph-mon[80009]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 24 09:59:13 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:59:13.685030) [db/flush_job.cc:967] [default] [JOB 31] Level-0 flush table #56: 2881479 bytes OK
Nov 24 09:59:13 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:59:13.685059) [db/memtable_list.cc:519] [default] Level-0 commit table #56 started
Nov 24 09:59:13 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:59:13.686645) [db/memtable_list.cc:722] [default] Level-0 commit table #56: memtable #1 done
Nov 24 09:59:13 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:59:13.686664) EVENT_LOG_v1 {"time_micros": 1763978353686657, "job": 31, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 24 09:59:13 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:59:13.686686) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 24 09:59:13 compute-1 ceph-mon[80009]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 31] Try to delete WAL files size 4494274, prev total WAL file size 4494274, number of live WAL files 2.
Nov 24 09:59:13 compute-1 ceph-mon[80009]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000052.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 09:59:13 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:59:13.688234) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D00353031' seq:72057594037927935, type:22 .. '6C6F676D00373532' seq:0, type:0; will stop at (end)
Nov 24 09:59:13 compute-1 ceph-mon[80009]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 32] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 24 09:59:13 compute-1 ceph-mon[80009]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 31 Base level 0, inputs: [56(2813KB)], [54(13MB)]
Nov 24 09:59:13 compute-1 ceph-mon[80009]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763978353688272, "job": 32, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [56], "files_L6": [54], "score": -1, "input_data_size": 17267122, "oldest_snapshot_seqno": -1}
Nov 24 09:59:13 compute-1 ceph-mon[80009]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 32] Generated table #57: 6082 keys, 17121643 bytes, temperature: kUnknown
Nov 24 09:59:13 compute-1 ceph-mon[80009]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763978353777908, "cf_name": "default", "job": 32, "event": "table_file_creation", "file_number": 57, "file_size": 17121643, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 17077642, "index_size": 27699, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 15237, "raw_key_size": 154781, "raw_average_key_size": 25, "raw_value_size": 16964653, "raw_average_value_size": 2789, "num_data_blocks": 1134, "num_entries": 6082, "num_filter_entries": 6082, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763976422, "oldest_key_time": 0, "file_creation_time": 1763978353, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "299c38d0-06ca-4074-b462-97cee3c14bc3", "db_session_id": "IKBI0BILOO7CZC90TSBP", "orig_file_number": 57, "seqno_to_time_mapping": "N/A"}}
Nov 24 09:59:13 compute-1 ceph-mon[80009]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 24 09:59:13 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:59:13.778159) [db/compaction/compaction_job.cc:1663] [default] [JOB 32] Compacted 1@0 + 1@6 files to L6 => 17121643 bytes
Nov 24 09:59:13 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:59:13.779287) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 192.5 rd, 190.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.7, 13.7 +0.0 blob) out(16.3 +0.0 blob), read-write-amplify(11.9) write-amplify(5.9) OK, records in: 6610, records dropped: 528 output_compression: NoCompression
Nov 24 09:59:13 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:59:13.779305) EVENT_LOG_v1 {"time_micros": 1763978353779297, "job": 32, "event": "compaction_finished", "compaction_time_micros": 89702, "compaction_time_cpu_micros": 30929, "output_level": 6, "num_output_files": 1, "total_output_size": 17121643, "num_input_records": 6610, "num_output_records": 6082, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 24 09:59:13 compute-1 ceph-mon[80009]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000056.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 09:59:13 compute-1 ceph-mon[80009]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763978353779974, "job": 32, "event": "table_file_deletion", "file_number": 56}
Nov 24 09:59:13 compute-1 ceph-mon[80009]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000054.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 09:59:13 compute-1 ceph-mon[80009]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763978353782599, "job": 32, "event": "table_file_deletion", "file_number": 54}
Nov 24 09:59:13 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:59:13.688191) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 09:59:13 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:59:13.782705) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 09:59:13 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:59:13.782711) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 09:59:13 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:59:13.782713) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 09:59:13 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:59:13.782714) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 09:59:13 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:59:13.782716) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 09:59:14 compute-1 ovn_controller[132966]: 2025-11-24T09:59:14Z|00078|memory_trim|INFO|Detected inactivity (last active 30003 ms ago): trimming memory
Nov 24 09:59:14 compute-1 ceph-mon[80009]: pgmap v972: 353 pgs: 353 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 350 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Nov 24 09:59:14 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:59:14 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:59:14 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:59:14.361 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:59:14 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:59:14 compute-1 nova_compute[230010]: 2025-11-24 09:59:14.764 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 09:59:14 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:59:14 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:59:14 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:59:14.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:59:15 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 09:59:15 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:59:15 compute-1 nova_compute[230010]: 2025-11-24 09:59:15.696 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:59:15 compute-1 nova_compute[230010]: 2025-11-24 09:59:15.765 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 09:59:16 compute-1 nova_compute[230010]: 2025-11-24 09:59:16.015 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:59:16 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:59:16 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:59:16 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:59:16.364 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:59:16 compute-1 ceph-mon[80009]: pgmap v973: 353 pgs: 353 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 350 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Nov 24 09:59:16 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:59:16 compute-1 nova_compute[230010]: 2025-11-24 09:59:16.765 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 09:59:16 compute-1 nova_compute[230010]: 2025-11-24 09:59:16.765 230014 DEBUG nova.compute.manager [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 24 09:59:16 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:59:16 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:59:16 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:59:16.942 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:59:17 compute-1 nova_compute[230010]: 2025-11-24 09:59:17.496 230014 DEBUG nova.compute.manager [req-132dc78c-9235-4eab-bb3c-e515d222fdd7 req-afe5557f-32a0-441f-8ee2-7e9f91c6530e 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 62465e3c-a372-4121-8a2e-5e10d1c3faf6] Received event network-changed-2ad41fbf-b749-4394-9d14-483c127ff44c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 24 09:59:17 compute-1 nova_compute[230010]: 2025-11-24 09:59:17.497 230014 DEBUG nova.compute.manager [req-132dc78c-9235-4eab-bb3c-e515d222fdd7 req-afe5557f-32a0-441f-8ee2-7e9f91c6530e 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 62465e3c-a372-4121-8a2e-5e10d1c3faf6] Refreshing instance network info cache due to event network-changed-2ad41fbf-b749-4394-9d14-483c127ff44c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 24 09:59:17 compute-1 nova_compute[230010]: 2025-11-24 09:59:17.497 230014 DEBUG oslo_concurrency.lockutils [req-132dc78c-9235-4eab-bb3c-e515d222fdd7 req-afe5557f-32a0-441f-8ee2-7e9f91c6530e 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Acquiring lock "refresh_cache-62465e3c-a372-4121-8a2e-5e10d1c3faf6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 24 09:59:17 compute-1 nova_compute[230010]: 2025-11-24 09:59:17.497 230014 DEBUG oslo_concurrency.lockutils [req-132dc78c-9235-4eab-bb3c-e515d222fdd7 req-afe5557f-32a0-441f-8ee2-7e9f91c6530e 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Acquired lock "refresh_cache-62465e3c-a372-4121-8a2e-5e10d1c3faf6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 24 09:59:17 compute-1 nova_compute[230010]: 2025-11-24 09:59:17.498 230014 DEBUG nova.network.neutron [req-132dc78c-9235-4eab-bb3c-e515d222fdd7 req-afe5557f-32a0-441f-8ee2-7e9f91c6530e 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 62465e3c-a372-4121-8a2e-5e10d1c3faf6] Refreshing network info cache for port 2ad41fbf-b749-4394-9d14-483c127ff44c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 24 09:59:18 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:59:18 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:59:18 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:59:18.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:59:18 compute-1 ceph-mon[80009]: pgmap v974: 353 pgs: 353 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 350 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Nov 24 09:59:18 compute-1 nova_compute[230010]: 2025-11-24 09:59:18.766 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 09:59:18 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:59:18 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:59:18 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:59:18.945 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:59:19 compute-1 podman[240177]: 2025-11-24 09:59:19.303341801 +0000 UTC m=+0.047236866 container health_status 6794d9d2f16f865643977633ea2bfec0506af6c4f15262c278b71a4c0754ea0b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Nov 24 09:59:19 compute-1 ceph-mon[80009]: from='client.? 192.168.122.102:0/4007411981' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 09:59:19 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:59:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:59:20.061 142336 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 09:59:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:59:20.062 142336 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 09:59:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:59:20.062 142336 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 09:59:20 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:59:20 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:59:20 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:59:20.369 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:59:20 compute-1 nova_compute[230010]: 2025-11-24 09:59:20.375 230014 DEBUG nova.network.neutron [req-132dc78c-9235-4eab-bb3c-e515d222fdd7 req-afe5557f-32a0-441f-8ee2-7e9f91c6530e 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 62465e3c-a372-4121-8a2e-5e10d1c3faf6] Updated VIF entry in instance network info cache for port 2ad41fbf-b749-4394-9d14-483c127ff44c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 24 09:59:20 compute-1 nova_compute[230010]: 2025-11-24 09:59:20.375 230014 DEBUG nova.network.neutron [req-132dc78c-9235-4eab-bb3c-e515d222fdd7 req-afe5557f-32a0-441f-8ee2-7e9f91c6530e 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 62465e3c-a372-4121-8a2e-5e10d1c3faf6] Updating instance_info_cache with network_info: [{"id": "bf41c673-482b-42e3-ac98-475b716fa0e9", "address": "fa:16:3e:99:a7:ce", "network": {"id": "81f18750-9169-4587-b6ca-88a2bbc58afc", "bridge": "br-int", "label": "tempest-network-smoke--1543163911", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.202", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94d069fc040647d5a6e54894eec915fe", "mtu": null, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf41c673-48", "ovs_interfaceid": "bf41c673-482b-42e3-ac98-475b716fa0e9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "2ad41fbf-b749-4394-9d14-483c127ff44c", "address": "fa:16:3e:df:72:0f", "network": {"id": "cbb18554-4df6-4004-8b94-6d2a9b50722d", "bridge": "br-int", "label": "tempest-network-smoke--1864982359", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.24", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94d069fc040647d5a6e54894eec915fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2ad41fbf-b7", "ovs_interfaceid": "2ad41fbf-b749-4394-9d14-483c127ff44c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 24 09:59:20 compute-1 nova_compute[230010]: 2025-11-24 09:59:20.400 230014 DEBUG oslo_concurrency.lockutils [req-132dc78c-9235-4eab-bb3c-e515d222fdd7 req-afe5557f-32a0-441f-8ee2-7e9f91c6530e 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Releasing lock "refresh_cache-62465e3c-a372-4121-8a2e-5e10d1c3faf6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 24 09:59:20 compute-1 ceph-mon[80009]: pgmap v975: 353 pgs: 353 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 18 KiB/s wr, 2 op/s
Nov 24 09:59:20 compute-1 ceph-mon[80009]: from='client.? 192.168.122.102:0/3390432427' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 09:59:20 compute-1 nova_compute[230010]: 2025-11-24 09:59:20.698 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:59:20 compute-1 nova_compute[230010]: 2025-11-24 09:59:20.764 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 09:59:20 compute-1 nova_compute[230010]: 2025-11-24 09:59:20.765 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 09:59:20 compute-1 nova_compute[230010]: 2025-11-24 09:59:20.782 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 09:59:20 compute-1 nova_compute[230010]: 2025-11-24 09:59:20.782 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 09:59:20 compute-1 nova_compute[230010]: 2025-11-24 09:59:20.782 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 09:59:20 compute-1 nova_compute[230010]: 2025-11-24 09:59:20.783 230014 DEBUG nova.compute.resource_tracker [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 24 09:59:20 compute-1 nova_compute[230010]: 2025-11-24 09:59:20.783 230014 DEBUG oslo_concurrency.processutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 09:59:20 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:59:20 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:59:20 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:59:20.948 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:59:21 compute-1 nova_compute[230010]: 2025-11-24 09:59:21.017 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:59:21 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 24 09:59:21 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3070747134' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 09:59:21 compute-1 nova_compute[230010]: 2025-11-24 09:59:21.243 230014 DEBUG oslo_concurrency.processutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.460s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 09:59:21 compute-1 nova_compute[230010]: 2025-11-24 09:59:21.302 230014 DEBUG nova.virt.libvirt.driver [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] skipping disk for instance-00000006 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 24 09:59:21 compute-1 nova_compute[230010]: 2025-11-24 09:59:21.302 230014 DEBUG nova.virt.libvirt.driver [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] skipping disk for instance-00000006 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 24 09:59:21 compute-1 ceph-mon[80009]: from='client.? 192.168.122.101:0/3070747134' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 09:59:21 compute-1 nova_compute[230010]: 2025-11-24 09:59:21.514 230014 WARNING nova.virt.libvirt.driver [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 24 09:59:21 compute-1 nova_compute[230010]: 2025-11-24 09:59:21.515 230014 DEBUG nova.compute.resource_tracker [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4759MB free_disk=59.89700698852539GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 24 09:59:21 compute-1 nova_compute[230010]: 2025-11-24 09:59:21.516 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 09:59:21 compute-1 nova_compute[230010]: 2025-11-24 09:59:21.516 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 09:59:21 compute-1 nova_compute[230010]: 2025-11-24 09:59:21.575 230014 DEBUG nova.compute.resource_tracker [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Instance 62465e3c-a372-4121-8a2e-5e10d1c3faf6 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 24 09:59:21 compute-1 nova_compute[230010]: 2025-11-24 09:59:21.576 230014 DEBUG nova.compute.resource_tracker [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 24 09:59:21 compute-1 nova_compute[230010]: 2025-11-24 09:59:21.576 230014 DEBUG nova.compute.resource_tracker [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 24 09:59:21 compute-1 nova_compute[230010]: 2025-11-24 09:59:21.593 230014 DEBUG nova.scheduler.client.report [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Refreshing inventories for resource provider 1b7b0f22-dba8-42a8-9de3-763c9152946e _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Nov 24 09:59:21 compute-1 nova_compute[230010]: 2025-11-24 09:59:21.625 230014 DEBUG nova.scheduler.client.report [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Updating ProviderTree inventory for provider 1b7b0f22-dba8-42a8-9de3-763c9152946e from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Nov 24 09:59:21 compute-1 nova_compute[230010]: 2025-11-24 09:59:21.626 230014 DEBUG nova.compute.provider_tree [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Updating inventory in ProviderTree for provider 1b7b0f22-dba8-42a8-9de3-763c9152946e with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Nov 24 09:59:21 compute-1 nova_compute[230010]: 2025-11-24 09:59:21.641 230014 DEBUG nova.scheduler.client.report [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Refreshing aggregate associations for resource provider 1b7b0f22-dba8-42a8-9de3-763c9152946e, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Nov 24 09:59:21 compute-1 nova_compute[230010]: 2025-11-24 09:59:21.661 230014 DEBUG nova.scheduler.client.report [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Refreshing trait associations for resource provider 1b7b0f22-dba8-42a8-9de3-763c9152946e, traits: COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_F16C,HW_CPU_X86_AVX,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_SHA,COMPUTE_STORAGE_BUS_SATA,COMPUTE_RESCUE_BFV,HW_CPU_X86_ABM,HW_CPU_X86_BMI,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_AVX2,COMPUTE_STORAGE_BUS_USB,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_SSE41,COMPUTE_NODE,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_SSE4A,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_VOLUME_EXTEND,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_MMX,HW_CPU_X86_SSE,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_CLMUL,HW_CPU_X86_SSE2,HW_CPU_X86_AMD_SVM,HW_CPU_X86_SSE42,COMPUTE_ACCELERATORS,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_STORAGE_BUS_IDE,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_SVM,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_FMA3,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_BMI2,HW_CPU_X86_AESNI,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_TRUSTED_CERTS,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_STORAGE_BUS_FDC _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Nov 24 09:59:21 compute-1 nova_compute[230010]: 2025-11-24 09:59:21.694 230014 DEBUG oslo_concurrency.processutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 09:59:22 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 24 09:59:22 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2311081532' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 09:59:22 compute-1 nova_compute[230010]: 2025-11-24 09:59:22.141 230014 DEBUG oslo_concurrency.processutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 09:59:22 compute-1 nova_compute[230010]: 2025-11-24 09:59:22.149 230014 DEBUG nova.compute.provider_tree [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Inventory has not changed in ProviderTree for provider: 1b7b0f22-dba8-42a8-9de3-763c9152946e update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 24 09:59:22 compute-1 nova_compute[230010]: 2025-11-24 09:59:22.167 230014 DEBUG nova.scheduler.client.report [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Inventory has not changed for provider 1b7b0f22-dba8-42a8-9de3-763c9152946e based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 24 09:59:22 compute-1 nova_compute[230010]: 2025-11-24 09:59:22.169 230014 DEBUG nova.compute.resource_tracker [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 24 09:59:22 compute-1 nova_compute[230010]: 2025-11-24 09:59:22.169 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.653s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 09:59:22 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:59:22 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:59:22 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:59:22.373 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:59:22 compute-1 ceph-mon[80009]: pgmap v976: 353 pgs: 353 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 18 KiB/s wr, 2 op/s
Nov 24 09:59:22 compute-1 ceph-mon[80009]: from='client.? 192.168.122.100:0/707340900' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 09:59:22 compute-1 ceph-mon[80009]: from='client.? 192.168.122.101:0/2311081532' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 09:59:22 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:59:22 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:59:22 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:59:22.950 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:59:23 compute-1 nova_compute[230010]: 2025-11-24 09:59:23.170 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 09:59:23 compute-1 nova_compute[230010]: 2025-11-24 09:59:23.171 230014 DEBUG nova.compute.manager [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 24 09:59:23 compute-1 nova_compute[230010]: 2025-11-24 09:59:23.171 230014 DEBUG nova.compute.manager [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 24 09:59:23 compute-1 nova_compute[230010]: 2025-11-24 09:59:23.334 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Acquiring lock "refresh_cache-62465e3c-a372-4121-8a2e-5e10d1c3faf6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 24 09:59:23 compute-1 nova_compute[230010]: 2025-11-24 09:59:23.335 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Acquired lock "refresh_cache-62465e3c-a372-4121-8a2e-5e10d1c3faf6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 24 09:59:23 compute-1 nova_compute[230010]: 2025-11-24 09:59:23.335 230014 DEBUG nova.network.neutron [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] [instance: 62465e3c-a372-4121-8a2e-5e10d1c3faf6] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 24 09:59:23 compute-1 nova_compute[230010]: 2025-11-24 09:59:23.335 230014 DEBUG nova.objects.instance [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 62465e3c-a372-4121-8a2e-5e10d1c3faf6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 24 09:59:23 compute-1 ceph-mon[80009]: from='client.? 192.168.122.100:0/1334978600' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 09:59:23 compute-1 sudo[240244]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 09:59:23 compute-1 sudo[240244]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:59:23 compute-1 sudo[240244]: pam_unix(sudo:session): session closed for user root
Nov 24 09:59:23 compute-1 sudo[240269]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Nov 24 09:59:23 compute-1 sudo[240269]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:59:24 compute-1 sudo[240269]: pam_unix(sudo:session): session closed for user root
Nov 24 09:59:24 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:59:24 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:59:24 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:59:24.376 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:59:24 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 24 09:59:24 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 09:59:24 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Nov 24 09:59:24 compute-1 ceph-mon[80009]: log_channel(audit) log [INF] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 09:59:24 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Nov 24 09:59:24 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Nov 24 09:59:24 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:59:24 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Nov 24 09:59:24 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 09:59:24 compute-1 ceph-mon[80009]: pgmap v977: 353 pgs: 353 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 27 KiB/s wr, 3 op/s
Nov 24 09:59:24 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 09:59:24 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 09:59:24 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:59:24 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Nov 24 09:59:24 compute-1 ceph-mon[80009]: log_channel(audit) log [INF] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 09:59:24 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 24 09:59:24 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 09:59:24 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:59:24 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:59:24 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:59:24.953 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:59:25 compute-1 ceph-mon[80009]: pgmap v978: 353 pgs: 353 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 15 KiB/s wr, 2 op/s
Nov 24 09:59:25 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:59:25 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 09:59:25 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 09:59:25 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 09:59:25 compute-1 nova_compute[230010]: 2025-11-24 09:59:25.701 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:59:26 compute-1 nova_compute[230010]: 2025-11-24 09:59:26.020 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:59:26 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:59:26 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:59:26 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:59:26.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:59:26 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:59:26 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:59:26 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:59:26.956 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:59:27 compute-1 sudo[240326]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 09:59:27 compute-1 sudo[240326]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:59:27 compute-1 sudo[240326]: pam_unix(sudo:session): session closed for user root
Nov 24 09:59:27 compute-1 ceph-mon[80009]: pgmap v979: 353 pgs: 353 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 15 KiB/s wr, 2 op/s
Nov 24 09:59:28 compute-1 nova_compute[230010]: 2025-11-24 09:59:28.362 230014 DEBUG nova.network.neutron [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] [instance: 62465e3c-a372-4121-8a2e-5e10d1c3faf6] Updating instance_info_cache with network_info: [{"id": "bf41c673-482b-42e3-ac98-475b716fa0e9", "address": "fa:16:3e:99:a7:ce", "network": {"id": "81f18750-9169-4587-b6ca-88a2bbc58afc", "bridge": "br-int", "label": "tempest-network-smoke--1543163911", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.202", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94d069fc040647d5a6e54894eec915fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf41c673-48", "ovs_interfaceid": "bf41c673-482b-42e3-ac98-475b716fa0e9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "2ad41fbf-b749-4394-9d14-483c127ff44c", "address": "fa:16:3e:df:72:0f", "network": {"id": "cbb18554-4df6-4004-8b94-6d2a9b50722d", "bridge": "br-int", "label": "tempest-network-smoke--1864982359", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.24", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94d069fc040647d5a6e54894eec915fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2ad41fbf-b7", "ovs_interfaceid": "2ad41fbf-b749-4394-9d14-483c127ff44c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 24 09:59:28 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:59:28 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.003000071s ======
Nov 24 09:59:28 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:59:28.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.003000071s
Nov 24 09:59:28 compute-1 nova_compute[230010]: 2025-11-24 09:59:28.385 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Releasing lock "refresh_cache-62465e3c-a372-4121-8a2e-5e10d1c3faf6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 24 09:59:28 compute-1 nova_compute[230010]: 2025-11-24 09:59:28.386 230014 DEBUG nova.compute.manager [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] [instance: 62465e3c-a372-4121-8a2e-5e10d1c3faf6] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 24 09:59:28 compute-1 nova_compute[230010]: 2025-11-24 09:59:28.386 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 09:59:28 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:59:28 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:59:28 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:59:28.959 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:59:28 compute-1 nova_compute[230010]: 2025-11-24 09:59:28.975 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 09:59:28 compute-1 nova_compute[230010]: 2025-11-24 09:59:28.975 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 09:59:29 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 24 09:59:29 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 24 09:59:29 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:59:29 compute-1 sudo[240352]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 24 09:59:29 compute-1 sudo[240352]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:59:29 compute-1 sudo[240352]: pam_unix(sudo:session): session closed for user root
Nov 24 09:59:29 compute-1 ceph-mon[80009]: pgmap v980: 353 pgs: 353 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 13 KiB/s wr, 2 op/s
Nov 24 09:59:29 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:59:29 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:59:30 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:59:30 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:59:30 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:59:30.386 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:59:30 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 09:59:30 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:59:30 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:59:30 compute-1 nova_compute[230010]: 2025-11-24 09:59:30.707 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:59:30 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:59:30 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:59:30 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:59:30.962 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:59:31 compute-1 nova_compute[230010]: 2025-11-24 09:59:31.023 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:59:31 compute-1 ceph-mon[80009]: pgmap v981: 353 pgs: 353 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 13 KiB/s wr, 2 op/s
Nov 24 09:59:32 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:59:32 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:59:32 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:59:32.390 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:59:32 compute-1 ceph-mon[80009]: pgmap v982: 353 pgs: 353 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 1.6 KiB/s rd, 15 KiB/s wr, 3 op/s
Nov 24 09:59:32 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:59:32 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:59:32 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:59:32.965 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:59:34 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:59:34 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:59:34 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:59:34.392 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:59:34 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:59:34 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:59:34 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:59:34 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:59:34.969 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:59:35 compute-1 ceph-mon[80009]: pgmap v983: 353 pgs: 353 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 5.7 KiB/s wr, 1 op/s
Nov 24 09:59:35 compute-1 nova_compute[230010]: 2025-11-24 09:59:35.710 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:59:36 compute-1 nova_compute[230010]: 2025-11-24 09:59:36.026 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:59:36 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:59:36 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:59:36 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:59:36.397 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:59:36 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:59:36 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 09:59:36 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:59:36.972 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 09:59:37 compute-1 podman[240382]: 2025-11-24 09:59:37.337869651 +0000 UTC m=+0.073577000 container health_status 16798e42088c21440960ac0ba4f86339f9edab5c078277a1dcf4fb01c220329c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=multipathd, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 24 09:59:37 compute-1 ceph-mon[80009]: pgmap v984: 353 pgs: 353 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 5.3 KiB/s wr, 1 op/s
Nov 24 09:59:38 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:59:38 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:59:38 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:59:38.400 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:59:38 compute-1 ceph-mon[80009]: pgmap v985: 353 pgs: 353 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 9.7 KiB/s wr, 3 op/s
Nov 24 09:59:38 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:59:38 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:59:38 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:59:38.975 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:59:39 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:59:40 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:59:40 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 09:59:40 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:59:40.402 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 09:59:40 compute-1 nova_compute[230010]: 2025-11-24 09:59:40.715 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:59:40 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:59:40 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 09:59:40 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:59:40.978 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 09:59:41 compute-1 nova_compute[230010]: 2025-11-24 09:59:41.028 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:59:41 compute-1 ceph-mon[80009]: pgmap v986: 353 pgs: 353 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 6.3 KiB/s wr, 2 op/s
Nov 24 09:59:42 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:59:42 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:59:42 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:59:42.407 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:59:42 compute-1 podman[240405]: 2025-11-24 09:59:42.409026059 +0000 UTC m=+0.140635230 container health_status c4a7fece2a8abbd773f184380ebf0443df5298e1c3e7a580495db5c46749acd2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, io.buildah.version=1.41.3, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 24 09:59:42 compute-1 ceph-mon[80009]: pgmap v987: 353 pgs: 353 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 9.7 KiB/s wr, 3 op/s
Nov 24 09:59:42 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:59:42 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:59:42 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:59:42.982 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:59:44 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:59:44 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:59:44 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:59:44.410 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:59:44 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:59:44 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:59:44 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:59:44 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:59:44.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:59:45 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 09:59:45 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:59:45 compute-1 ceph-mon[80009]: pgmap v988: 353 pgs: 353 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 7.7 KiB/s wr, 2 op/s
Nov 24 09:59:45 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:59:45 compute-1 nova_compute[230010]: 2025-11-24 09:59:45.757 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:59:46 compute-1 nova_compute[230010]: 2025-11-24 09:59:46.030 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:59:46 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:59:46 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:59:46 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:59:46.413 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:59:46 compute-1 ovn_controller[132966]: 2025-11-24T09:59:46Z|00079|memory_trim|INFO|Detected inactivity (last active 30006 ms ago): trimming memory
Nov 24 09:59:46 compute-1 ceph-mon[80009]: pgmap v989: 353 pgs: 353 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 9.0 KiB/s wr, 3 op/s
Nov 24 09:59:46 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:59:46 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:59:46 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:59:46.988 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:59:47 compute-1 sudo[240434]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 09:59:47 compute-1 sudo[240434]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:59:47 compute-1 sudo[240434]: pam_unix(sudo:session): session closed for user root
Nov 24 09:59:48 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:59:48 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:59:48 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:59:48.417 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:59:48 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:59:48 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:59:48 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:59:48.991 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:59:49 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:59:49 compute-1 ceph-mon[80009]: pgmap v990: 353 pgs: 353 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 9.0 KiB/s wr, 3 op/s
Nov 24 09:59:50 compute-1 podman[240461]: 2025-11-24 09:59:50.317522132 +0000 UTC m=+0.057137028 container health_status 6794d9d2f16f865643977633ea2bfec0506af6c4f15262c278b71a4c0754ea0b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Nov 24 09:59:50 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:59:50 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:59:50 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:59:50.421 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:59:50 compute-1 ceph-mon[80009]: pgmap v991: 353 pgs: 353 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 4.7 KiB/s wr, 1 op/s
Nov 24 09:59:50 compute-1 nova_compute[230010]: 2025-11-24 09:59:50.799 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:59:50 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:59:50 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:59:50 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:59:50.994 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:59:51 compute-1 nova_compute[230010]: 2025-11-24 09:59:51.033 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:59:52 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:59:52 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 09:59:52 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:59:52.423 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 09:59:52 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:59:52 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:59:52 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:59:52.997 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:59:53 compute-1 ceph-mon[80009]: pgmap v992: 353 pgs: 353 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 6.0 KiB/s wr, 2 op/s
Nov 24 09:59:54 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:59:54 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:59:54 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:59:54.427 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:59:54 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:59:54 compute-1 ceph-mon[80009]: pgmap v993: 353 pgs: 353 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 2.7 KiB/s wr, 1 op/s
Nov 24 09:59:55 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:59:55 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:59:55 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:59:54.999 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:59:55 compute-1 nova_compute[230010]: 2025-11-24 09:59:55.813 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:59:56 compute-1 nova_compute[230010]: 2025-11-24 09:59:56.037 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:59:56 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:59:56 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:59:56 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:59:56.429 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:59:56 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:59:56.737 142336 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=9, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '86:13:51', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '4e:f0:a8:6f:5e:1b'}, ipsec=False) old=SB_Global(nb_cfg=8) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 24 09:59:56 compute-1 nova_compute[230010]: 2025-11-24 09:59:56.738 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:59:56 compute-1 ovn_metadata_agent[142331]: 2025-11-24 09:59:56.739 142336 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 24 09:59:57 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:59:57 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:59:57 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:59:57.002 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:59:57 compute-1 ceph-mon[80009]: pgmap v994: 353 pgs: 353 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 7.2 KiB/s rd, 3.4 KiB/s wr, 9 op/s
Nov 24 09:59:58 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:59:58 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:59:58 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:59:58.432 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:59:58 compute-1 ceph-mon[80009]: from='client.? 192.168.122.100:0/1125735957' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 09:59:58 compute-1 ceph-mon[80009]: pgmap v995: 353 pgs: 353 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 6.7 KiB/s rd, 2.1 KiB/s wr, 8 op/s
Nov 24 09:59:59 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 09:59:59 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:59:59 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:59:59.006 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:59:59 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:59:59 compute-1 nova_compute[230010]: 2025-11-24 09:59:59.678 230014 DEBUG oslo_concurrency.lockutils [None req-f3ac20a1-7545-43d3-b997-341d1779d8bb 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Acquiring lock "interface-62465e3c-a372-4121-8a2e-5e10d1c3faf6-2ad41fbf-b749-4394-9d14-483c127ff44c" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 09:59:59 compute-1 nova_compute[230010]: 2025-11-24 09:59:59.679 230014 DEBUG oslo_concurrency.lockutils [None req-f3ac20a1-7545-43d3-b997-341d1779d8bb 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lock "interface-62465e3c-a372-4121-8a2e-5e10d1c3faf6-2ad41fbf-b749-4394-9d14-483c127ff44c" acquired by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 09:59:59 compute-1 nova_compute[230010]: 2025-11-24 09:59:59.690 230014 DEBUG nova.objects.instance [None req-f3ac20a1-7545-43d3-b997-341d1779d8bb 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lazy-loading 'flavor' on Instance uuid 62465e3c-a372-4121-8a2e-5e10d1c3faf6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 24 09:59:59 compute-1 nova_compute[230010]: 2025-11-24 09:59:59.711 230014 DEBUG nova.virt.libvirt.vif [None req-f3ac20a1-7545-43d3-b997-341d1779d8bb 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-24T09:57:56Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1468987490',display_name='tempest-TestNetworkBasicOps-server-1468987490',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1468987490',id=6,image_ref='6ef14bdf-4f04-4400-8040-4409d9d5271e',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJeLeNtgMDECCA396nl5/z6TsnAPH3kX9ECWzaWuLvptXvMaJaj/WlHKUFyFRR30PurvGrDvNN2g1Ij1pTu0Su2H0Am0Z6Y5TdOjAAQXOQr2HISwvDDFzD9t0aaelZEbhw==',key_name='tempest-TestNetworkBasicOps-1307688110',keypairs=<?>,launch_index=0,launched_at=2025-11-24T09:58:05Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='94d069fc040647d5a6e54894eec915fe',ramdisk_id='',reservation_id='r-sy1yuug7',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6ef14bdf-4f04-4400-8040-4409d9d5271e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1844071378',owner_user_name='tempest-TestNetworkBasicOps-1844071378-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-24T09:58:05Z,user_data=None,user_id='43f79ff3105e4372a3c095e8057d4f1f',uuid=62465e3c-a372-4121-8a2e-5e10d1c3faf6,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "2ad41fbf-b749-4394-9d14-483c127ff44c", "address": "fa:16:3e:df:72:0f", "network": {"id": "cbb18554-4df6-4004-8b94-6d2a9b50722d", "bridge": "br-int", "label": "tempest-network-smoke--1864982359", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.24", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94d069fc040647d5a6e54894eec915fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2ad41fbf-b7", "ovs_interfaceid": "2ad41fbf-b749-4394-9d14-483c127ff44c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 24 09:59:59 compute-1 nova_compute[230010]: 2025-11-24 09:59:59.712 230014 DEBUG nova.network.os_vif_util [None req-f3ac20a1-7545-43d3-b997-341d1779d8bb 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Converting VIF {"id": "2ad41fbf-b749-4394-9d14-483c127ff44c", "address": "fa:16:3e:df:72:0f", "network": {"id": "cbb18554-4df6-4004-8b94-6d2a9b50722d", "bridge": "br-int", "label": "tempest-network-smoke--1864982359", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.24", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94d069fc040647d5a6e54894eec915fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2ad41fbf-b7", "ovs_interfaceid": "2ad41fbf-b749-4394-9d14-483c127ff44c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 24 09:59:59 compute-1 nova_compute[230010]: 2025-11-24 09:59:59.713 230014 DEBUG nova.network.os_vif_util [None req-f3ac20a1-7545-43d3-b997-341d1779d8bb 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:df:72:0f,bridge_name='br-int',has_traffic_filtering=True,id=2ad41fbf-b749-4394-9d14-483c127ff44c,network=Network(cbb18554-4df6-4004-8b94-6d2a9b50722d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2ad41fbf-b7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 24 09:59:59 compute-1 nova_compute[230010]: 2025-11-24 09:59:59.718 230014 DEBUG nova.virt.libvirt.guest [None req-f3ac20a1-7545-43d3-b997-341d1779d8bb 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:df:72:0f"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap2ad41fbf-b7"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Nov 24 09:59:59 compute-1 nova_compute[230010]: 2025-11-24 09:59:59.721 230014 DEBUG nova.virt.libvirt.guest [None req-f3ac20a1-7545-43d3-b997-341d1779d8bb 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:df:72:0f"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap2ad41fbf-b7"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Nov 24 09:59:59 compute-1 nova_compute[230010]: 2025-11-24 09:59:59.724 230014 DEBUG nova.virt.libvirt.driver [None req-f3ac20a1-7545-43d3-b997-341d1779d8bb 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Attempting to detach device tap2ad41fbf-b7 from instance 62465e3c-a372-4121-8a2e-5e10d1c3faf6 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Nov 24 09:59:59 compute-1 nova_compute[230010]: 2025-11-24 09:59:59.724 230014 DEBUG nova.virt.libvirt.guest [None req-f3ac20a1-7545-43d3-b997-341d1779d8bb 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] detach device xml: <interface type="ethernet">
Nov 24 09:59:59 compute-1 nova_compute[230010]:   <mac address="fa:16:3e:df:72:0f"/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:   <model type="virtio"/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:   <driver name="vhost" rx_queue_size="512"/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:   <mtu size="1442"/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:   <target dev="tap2ad41fbf-b7"/>
Nov 24 09:59:59 compute-1 nova_compute[230010]: </interface>
Nov 24 09:59:59 compute-1 nova_compute[230010]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Nov 24 09:59:59 compute-1 nova_compute[230010]: 2025-11-24 09:59:59.730 230014 DEBUG nova.virt.libvirt.guest [None req-f3ac20a1-7545-43d3-b997-341d1779d8bb 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:df:72:0f"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap2ad41fbf-b7"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Nov 24 09:59:59 compute-1 nova_compute[230010]: 2025-11-24 09:59:59.733 230014 DEBUG nova.virt.libvirt.guest [None req-f3ac20a1-7545-43d3-b997-341d1779d8bb 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:df:72:0f"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap2ad41fbf-b7"/></interface> not found in domain: <domain type='kvm' id='4'>
Nov 24 09:59:59 compute-1 nova_compute[230010]:   <name>instance-00000006</name>
Nov 24 09:59:59 compute-1 nova_compute[230010]:   <uuid>62465e3c-a372-4121-8a2e-5e10d1c3faf6</uuid>
Nov 24 09:59:59 compute-1 nova_compute[230010]:   <metadata>
Nov 24 09:59:59 compute-1 nova_compute[230010]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 24 09:59:59 compute-1 nova_compute[230010]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:   <nova:name>tempest-TestNetworkBasicOps-server-1468987490</nova:name>
Nov 24 09:59:59 compute-1 nova_compute[230010]:   <nova:creationTime>2025-11-24 09:58:35</nova:creationTime>
Nov 24 09:59:59 compute-1 nova_compute[230010]:   <nova:flavor name="m1.nano">
Nov 24 09:59:59 compute-1 nova_compute[230010]:     <nova:memory>128</nova:memory>
Nov 24 09:59:59 compute-1 nova_compute[230010]:     <nova:disk>1</nova:disk>
Nov 24 09:59:59 compute-1 nova_compute[230010]:     <nova:swap>0</nova:swap>
Nov 24 09:59:59 compute-1 nova_compute[230010]:     <nova:ephemeral>0</nova:ephemeral>
Nov 24 09:59:59 compute-1 nova_compute[230010]:     <nova:vcpus>1</nova:vcpus>
Nov 24 09:59:59 compute-1 nova_compute[230010]:   </nova:flavor>
Nov 24 09:59:59 compute-1 nova_compute[230010]:   <nova:owner>
Nov 24 09:59:59 compute-1 nova_compute[230010]:     <nova:user uuid="43f79ff3105e4372a3c095e8057d4f1f">tempest-TestNetworkBasicOps-1844071378-project-member</nova:user>
Nov 24 09:59:59 compute-1 nova_compute[230010]:     <nova:project uuid="94d069fc040647d5a6e54894eec915fe">tempest-TestNetworkBasicOps-1844071378</nova:project>
Nov 24 09:59:59 compute-1 nova_compute[230010]:   </nova:owner>
Nov 24 09:59:59 compute-1 nova_compute[230010]:   <nova:root type="image" uuid="6ef14bdf-4f04-4400-8040-4409d9d5271e"/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:   <nova:ports>
Nov 24 09:59:59 compute-1 nova_compute[230010]:     <nova:port uuid="bf41c673-482b-42e3-ac98-475b716fa0e9">
Nov 24 09:59:59 compute-1 nova_compute[230010]:       <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:     </nova:port>
Nov 24 09:59:59 compute-1 nova_compute[230010]:     <nova:port uuid="2ad41fbf-b749-4394-9d14-483c127ff44c">
Nov 24 09:59:59 compute-1 nova_compute[230010]:       <nova:ip type="fixed" address="10.100.0.24" ipVersion="4"/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:     </nova:port>
Nov 24 09:59:59 compute-1 nova_compute[230010]:   </nova:ports>
Nov 24 09:59:59 compute-1 nova_compute[230010]: </nova:instance>
Nov 24 09:59:59 compute-1 nova_compute[230010]:   </metadata>
Nov 24 09:59:59 compute-1 nova_compute[230010]:   <memory unit='KiB'>131072</memory>
Nov 24 09:59:59 compute-1 nova_compute[230010]:   <currentMemory unit='KiB'>131072</currentMemory>
Nov 24 09:59:59 compute-1 nova_compute[230010]:   <vcpu placement='static'>1</vcpu>
Nov 24 09:59:59 compute-1 nova_compute[230010]:   <resource>
Nov 24 09:59:59 compute-1 nova_compute[230010]:     <partition>/machine</partition>
Nov 24 09:59:59 compute-1 nova_compute[230010]:   </resource>
Nov 24 09:59:59 compute-1 nova_compute[230010]:   <sysinfo type='smbios'>
Nov 24 09:59:59 compute-1 nova_compute[230010]:     <system>
Nov 24 09:59:59 compute-1 nova_compute[230010]:       <entry name='manufacturer'>RDO</entry>
Nov 24 09:59:59 compute-1 nova_compute[230010]:       <entry name='product'>OpenStack Compute</entry>
Nov 24 09:59:59 compute-1 nova_compute[230010]:       <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 24 09:59:59 compute-1 nova_compute[230010]:       <entry name='serial'>62465e3c-a372-4121-8a2e-5e10d1c3faf6</entry>
Nov 24 09:59:59 compute-1 nova_compute[230010]:       <entry name='uuid'>62465e3c-a372-4121-8a2e-5e10d1c3faf6</entry>
Nov 24 09:59:59 compute-1 nova_compute[230010]:       <entry name='family'>Virtual Machine</entry>
Nov 24 09:59:59 compute-1 nova_compute[230010]:     </system>
Nov 24 09:59:59 compute-1 nova_compute[230010]:   </sysinfo>
Nov 24 09:59:59 compute-1 nova_compute[230010]:   <os>
Nov 24 09:59:59 compute-1 nova_compute[230010]:     <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Nov 24 09:59:59 compute-1 nova_compute[230010]:     <boot dev='hd'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:     <smbios mode='sysinfo'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:   </os>
Nov 24 09:59:59 compute-1 nova_compute[230010]:   <features>
Nov 24 09:59:59 compute-1 nova_compute[230010]:     <acpi/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:     <apic/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:     <vmcoreinfo state='on'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:   </features>
Nov 24 09:59:59 compute-1 nova_compute[230010]:   <cpu mode='custom' match='exact' check='full'>
Nov 24 09:59:59 compute-1 nova_compute[230010]:     <model fallback='forbid'>EPYC-Rome</model>
Nov 24 09:59:59 compute-1 nova_compute[230010]:     <vendor>AMD</vendor>
Nov 24 09:59:59 compute-1 nova_compute[230010]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:     <feature policy='require' name='x2apic'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:     <feature policy='require' name='tsc-deadline'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:     <feature policy='require' name='hypervisor'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:     <feature policy='require' name='tsc_adjust'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:     <feature policy='require' name='spec-ctrl'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:     <feature policy='require' name='stibp'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:     <feature policy='require' name='ssbd'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:     <feature policy='require' name='cmp_legacy'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:     <feature policy='require' name='overflow-recov'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:     <feature policy='require' name='succor'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:     <feature policy='require' name='ibrs'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:     <feature policy='require' name='amd-ssbd'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:     <feature policy='require' name='virt-ssbd'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:     <feature policy='disable' name='lbrv'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:     <feature policy='disable' name='tsc-scale'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:     <feature policy='disable' name='vmcb-clean'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:     <feature policy='disable' name='flushbyasid'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:     <feature policy='disable' name='pause-filter'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:     <feature policy='disable' name='pfthreshold'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:     <feature policy='disable' name='svme-addr-chk'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:     <feature policy='require' name='lfence-always-serializing'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:     <feature policy='disable' name='xsaves'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:     <feature policy='disable' name='svm'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:     <feature policy='require' name='topoext'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:     <feature policy='disable' name='npt'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:     <feature policy='disable' name='nrip-save'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:   </cpu>
Nov 24 09:59:59 compute-1 nova_compute[230010]:   <clock offset='utc'>
Nov 24 09:59:59 compute-1 nova_compute[230010]:     <timer name='pit' tickpolicy='delay'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:     <timer name='rtc' tickpolicy='catchup'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:     <timer name='hpet' present='no'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:   </clock>
Nov 24 09:59:59 compute-1 nova_compute[230010]:   <on_poweroff>destroy</on_poweroff>
Nov 24 09:59:59 compute-1 nova_compute[230010]:   <on_reboot>restart</on_reboot>
Nov 24 09:59:59 compute-1 nova_compute[230010]:   <on_crash>destroy</on_crash>
Nov 24 09:59:59 compute-1 nova_compute[230010]:   <devices>
Nov 24 09:59:59 compute-1 nova_compute[230010]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 24 09:59:59 compute-1 nova_compute[230010]:     <disk type='network' device='disk'>
Nov 24 09:59:59 compute-1 nova_compute[230010]:       <driver name='qemu' type='raw' cache='none'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:       <auth username='openstack'>
Nov 24 09:59:59 compute-1 nova_compute[230010]:         <secret type='ceph' uuid='84a084c3-61a7-5de7-8207-1f88efa59a64'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:       </auth>
Nov 24 09:59:59 compute-1 nova_compute[230010]:       <source protocol='rbd' name='vms/62465e3c-a372-4121-8a2e-5e10d1c3faf6_disk' index='2'>
Nov 24 09:59:59 compute-1 nova_compute[230010]:         <host name='192.168.122.100' port='6789'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:         <host name='192.168.122.102' port='6789'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:         <host name='192.168.122.101' port='6789'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:       </source>
Nov 24 09:59:59 compute-1 nova_compute[230010]:       <target dev='vda' bus='virtio'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:       <alias name='virtio-disk0'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:     </disk>
Nov 24 09:59:59 compute-1 nova_compute[230010]:     <disk type='network' device='cdrom'>
Nov 24 09:59:59 compute-1 nova_compute[230010]:       <driver name='qemu' type='raw' cache='none'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:       <auth username='openstack'>
Nov 24 09:59:59 compute-1 nova_compute[230010]:         <secret type='ceph' uuid='84a084c3-61a7-5de7-8207-1f88efa59a64'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:       </auth>
Nov 24 09:59:59 compute-1 nova_compute[230010]:       <source protocol='rbd' name='vms/62465e3c-a372-4121-8a2e-5e10d1c3faf6_disk.config' index='1'>
Nov 24 09:59:59 compute-1 nova_compute[230010]:         <host name='192.168.122.100' port='6789'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:         <host name='192.168.122.102' port='6789'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:         <host name='192.168.122.101' port='6789'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:       </source>
Nov 24 09:59:59 compute-1 nova_compute[230010]:       <target dev='sda' bus='sata'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:       <readonly/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:       <alias name='sata0-0-0'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:     </disk>
Nov 24 09:59:59 compute-1 nova_compute[230010]:     <controller type='pci' index='0' model='pcie-root'>
Nov 24 09:59:59 compute-1 nova_compute[230010]:       <alias name='pcie.0'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:     </controller>
Nov 24 09:59:59 compute-1 nova_compute[230010]:     <controller type='pci' index='1' model='pcie-root-port'>
Nov 24 09:59:59 compute-1 nova_compute[230010]:       <model name='pcie-root-port'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:       <target chassis='1' port='0x10'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:       <alias name='pci.1'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:     </controller>
Nov 24 09:59:59 compute-1 nova_compute[230010]:     <controller type='pci' index='2' model='pcie-root-port'>
Nov 24 09:59:59 compute-1 nova_compute[230010]:       <model name='pcie-root-port'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:       <target chassis='2' port='0x11'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:       <alias name='pci.2'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:     </controller>
Nov 24 09:59:59 compute-1 nova_compute[230010]:     <controller type='pci' index='3' model='pcie-root-port'>
Nov 24 09:59:59 compute-1 nova_compute[230010]:       <model name='pcie-root-port'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:       <target chassis='3' port='0x12'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:       <alias name='pci.3'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:     </controller>
Nov 24 09:59:59 compute-1 nova_compute[230010]:     <controller type='pci' index='4' model='pcie-root-port'>
Nov 24 09:59:59 compute-1 nova_compute[230010]:       <model name='pcie-root-port'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:       <target chassis='4' port='0x13'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:       <alias name='pci.4'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:     </controller>
Nov 24 09:59:59 compute-1 nova_compute[230010]:     <controller type='pci' index='5' model='pcie-root-port'>
Nov 24 09:59:59 compute-1 nova_compute[230010]:       <model name='pcie-root-port'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:       <target chassis='5' port='0x14'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:       <alias name='pci.5'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:     </controller>
Nov 24 09:59:59 compute-1 nova_compute[230010]:     <controller type='pci' index='6' model='pcie-root-port'>
Nov 24 09:59:59 compute-1 nova_compute[230010]:       <model name='pcie-root-port'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:       <target chassis='6' port='0x15'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:       <alias name='pci.6'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:     </controller>
Nov 24 09:59:59 compute-1 nova_compute[230010]:     <controller type='pci' index='7' model='pcie-root-port'>
Nov 24 09:59:59 compute-1 nova_compute[230010]:       <model name='pcie-root-port'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:       <target chassis='7' port='0x16'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:       <alias name='pci.7'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:     </controller>
Nov 24 09:59:59 compute-1 nova_compute[230010]:     <controller type='pci' index='8' model='pcie-root-port'>
Nov 24 09:59:59 compute-1 nova_compute[230010]:       <model name='pcie-root-port'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:       <target chassis='8' port='0x17'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:       <alias name='pci.8'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:     </controller>
Nov 24 09:59:59 compute-1 nova_compute[230010]:     <controller type='pci' index='9' model='pcie-root-port'>
Nov 24 09:59:59 compute-1 nova_compute[230010]:       <model name='pcie-root-port'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:       <target chassis='9' port='0x18'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:       <alias name='pci.9'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:     </controller>
Nov 24 09:59:59 compute-1 nova_compute[230010]:     <controller type='pci' index='10' model='pcie-root-port'>
Nov 24 09:59:59 compute-1 nova_compute[230010]:       <model name='pcie-root-port'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:       <target chassis='10' port='0x19'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:       <alias name='pci.10'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:     </controller>
Nov 24 09:59:59 compute-1 nova_compute[230010]:     <controller type='pci' index='11' model='pcie-root-port'>
Nov 24 09:59:59 compute-1 nova_compute[230010]:       <model name='pcie-root-port'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:       <target chassis='11' port='0x1a'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:       <alias name='pci.11'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:     </controller>
Nov 24 09:59:59 compute-1 nova_compute[230010]:     <controller type='pci' index='12' model='pcie-root-port'>
Nov 24 09:59:59 compute-1 nova_compute[230010]:       <model name='pcie-root-port'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:       <target chassis='12' port='0x1b'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:       <alias name='pci.12'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:     </controller>
Nov 24 09:59:59 compute-1 nova_compute[230010]:     <controller type='pci' index='13' model='pcie-root-port'>
Nov 24 09:59:59 compute-1 nova_compute[230010]:       <model name='pcie-root-port'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:       <target chassis='13' port='0x1c'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:       <alias name='pci.13'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:     </controller>
Nov 24 09:59:59 compute-1 nova_compute[230010]:     <controller type='pci' index='14' model='pcie-root-port'>
Nov 24 09:59:59 compute-1 nova_compute[230010]:       <model name='pcie-root-port'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:       <target chassis='14' port='0x1d'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:       <alias name='pci.14'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:     </controller>
Nov 24 09:59:59 compute-1 nova_compute[230010]:     <controller type='pci' index='15' model='pcie-root-port'>
Nov 24 09:59:59 compute-1 nova_compute[230010]:       <model name='pcie-root-port'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:       <target chassis='15' port='0x1e'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:       <alias name='pci.15'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:     </controller>
Nov 24 09:59:59 compute-1 nova_compute[230010]:     <controller type='pci' index='16' model='pcie-root-port'>
Nov 24 09:59:59 compute-1 nova_compute[230010]:       <model name='pcie-root-port'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:       <target chassis='16' port='0x1f'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:       <alias name='pci.16'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:     </controller>
Nov 24 09:59:59 compute-1 nova_compute[230010]:     <controller type='pci' index='17' model='pcie-root-port'>
Nov 24 09:59:59 compute-1 nova_compute[230010]:       <model name='pcie-root-port'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:       <target chassis='17' port='0x20'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:       <alias name='pci.17'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:     </controller>
Nov 24 09:59:59 compute-1 nova_compute[230010]:     <controller type='pci' index='18' model='pcie-root-port'>
Nov 24 09:59:59 compute-1 nova_compute[230010]:       <model name='pcie-root-port'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:       <target chassis='18' port='0x21'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:       <alias name='pci.18'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:     </controller>
Nov 24 09:59:59 compute-1 nova_compute[230010]:     <controller type='pci' index='19' model='pcie-root-port'>
Nov 24 09:59:59 compute-1 nova_compute[230010]:       <model name='pcie-root-port'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:       <target chassis='19' port='0x22'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:       <alias name='pci.19'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:     </controller>
Nov 24 09:59:59 compute-1 nova_compute[230010]:     <controller type='pci' index='20' model='pcie-root-port'>
Nov 24 09:59:59 compute-1 nova_compute[230010]:       <model name='pcie-root-port'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:       <target chassis='20' port='0x23'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:       <alias name='pci.20'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:     </controller>
Nov 24 09:59:59 compute-1 nova_compute[230010]:     <controller type='pci' index='21' model='pcie-root-port'>
Nov 24 09:59:59 compute-1 nova_compute[230010]:       <model name='pcie-root-port'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:       <target chassis='21' port='0x24'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:       <alias name='pci.21'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:     </controller>
Nov 24 09:59:59 compute-1 nova_compute[230010]:     <controller type='pci' index='22' model='pcie-root-port'>
Nov 24 09:59:59 compute-1 nova_compute[230010]:       <model name='pcie-root-port'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:       <target chassis='22' port='0x25'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:       <alias name='pci.22'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:     </controller>
Nov 24 09:59:59 compute-1 nova_compute[230010]:     <controller type='pci' index='23' model='pcie-root-port'>
Nov 24 09:59:59 compute-1 nova_compute[230010]:       <model name='pcie-root-port'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:       <target chassis='23' port='0x26'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:       <alias name='pci.23'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:     </controller>
Nov 24 09:59:59 compute-1 nova_compute[230010]:     <controller type='pci' index='24' model='pcie-root-port'>
Nov 24 09:59:59 compute-1 nova_compute[230010]:       <model name='pcie-root-port'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:       <target chassis='24' port='0x27'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:       <alias name='pci.24'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:     </controller>
Nov 24 09:59:59 compute-1 nova_compute[230010]:     <controller type='pci' index='25' model='pcie-root-port'>
Nov 24 09:59:59 compute-1 nova_compute[230010]:       <model name='pcie-root-port'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:       <target chassis='25' port='0x28'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:       <alias name='pci.25'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:     </controller>
Nov 24 09:59:59 compute-1 nova_compute[230010]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Nov 24 09:59:59 compute-1 nova_compute[230010]:       <model name='pcie-pci-bridge'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:       <alias name='pci.26'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:     </controller>
Nov 24 09:59:59 compute-1 nova_compute[230010]:     <controller type='usb' index='0' model='piix3-uhci'>
Nov 24 09:59:59 compute-1 nova_compute[230010]:       <alias name='usb'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:     </controller>
Nov 24 09:59:59 compute-1 nova_compute[230010]:     <controller type='sata' index='0'>
Nov 24 09:59:59 compute-1 nova_compute[230010]:       <alias name='ide'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:     </controller>
Nov 24 09:59:59 compute-1 nova_compute[230010]:     <interface type='ethernet'>
Nov 24 09:59:59 compute-1 nova_compute[230010]:       <mac address='fa:16:3e:99:a7:ce'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:       <target dev='tapbf41c673-48'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:       <model type='virtio'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:       <driver name='vhost' rx_queue_size='512'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:       <mtu size='1442'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:       <alias name='net0'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:     </interface>
Nov 24 09:59:59 compute-1 nova_compute[230010]:     <interface type='ethernet'>
Nov 24 09:59:59 compute-1 nova_compute[230010]:       <mac address='fa:16:3e:df:72:0f'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:       <target dev='tap2ad41fbf-b7'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:       <model type='virtio'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:       <driver name='vhost' rx_queue_size='512'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:       <mtu size='1442'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:       <alias name='net1'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:       <address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:     </interface>
Nov 24 09:59:59 compute-1 nova_compute[230010]:     <serial type='pty'>
Nov 24 09:59:59 compute-1 nova_compute[230010]:       <source path='/dev/pts/0'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:       <log file='/var/lib/nova/instances/62465e3c-a372-4121-8a2e-5e10d1c3faf6/console.log' append='off'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:       <target type='isa-serial' port='0'>
Nov 24 09:59:59 compute-1 nova_compute[230010]:         <model name='isa-serial'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:       </target>
Nov 24 09:59:59 compute-1 nova_compute[230010]:       <alias name='serial0'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:     </serial>
Nov 24 09:59:59 compute-1 nova_compute[230010]:     <console type='pty' tty='/dev/pts/0'>
Nov 24 09:59:59 compute-1 nova_compute[230010]:       <source path='/dev/pts/0'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:       <log file='/var/lib/nova/instances/62465e3c-a372-4121-8a2e-5e10d1c3faf6/console.log' append='off'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:       <target type='serial' port='0'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:       <alias name='serial0'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:     </console>
Nov 24 09:59:59 compute-1 nova_compute[230010]:     <input type='tablet' bus='usb'>
Nov 24 09:59:59 compute-1 nova_compute[230010]:       <alias name='input0'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:       <address type='usb' bus='0' port='1'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:     </input>
Nov 24 09:59:59 compute-1 nova_compute[230010]:     <input type='mouse' bus='ps2'>
Nov 24 09:59:59 compute-1 nova_compute[230010]:       <alias name='input1'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:     </input>
Nov 24 09:59:59 compute-1 nova_compute[230010]:     <input type='keyboard' bus='ps2'>
Nov 24 09:59:59 compute-1 nova_compute[230010]:       <alias name='input2'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:     </input>
Nov 24 09:59:59 compute-1 nova_compute[230010]:     <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Nov 24 09:59:59 compute-1 nova_compute[230010]:       <listen type='address' address='::0'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:     </graphics>
Nov 24 09:59:59 compute-1 nova_compute[230010]:     <audio id='1' type='none'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:     <video>
Nov 24 09:59:59 compute-1 nova_compute[230010]:       <model type='virtio' heads='1' primary='yes'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:       <alias name='video0'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:     </video>
Nov 24 09:59:59 compute-1 nova_compute[230010]:     <watchdog model='itco' action='reset'>
Nov 24 09:59:59 compute-1 nova_compute[230010]:       <alias name='watchdog0'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:     </watchdog>
Nov 24 09:59:59 compute-1 nova_compute[230010]:     <memballoon model='virtio'>
Nov 24 09:59:59 compute-1 nova_compute[230010]:       <stats period='10'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:       <alias name='balloon0'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:     </memballoon>
Nov 24 09:59:59 compute-1 nova_compute[230010]:     <rng model='virtio'>
Nov 24 09:59:59 compute-1 nova_compute[230010]:       <backend model='random'>/dev/urandom</backend>
Nov 24 09:59:59 compute-1 nova_compute[230010]:       <alias name='rng0'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:     </rng>
Nov 24 09:59:59 compute-1 nova_compute[230010]:   </devices>
Nov 24 09:59:59 compute-1 nova_compute[230010]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Nov 24 09:59:59 compute-1 nova_compute[230010]:     <label>system_u:system_r:svirt_t:s0:c516,c926</label>
Nov 24 09:59:59 compute-1 nova_compute[230010]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c516,c926</imagelabel>
Nov 24 09:59:59 compute-1 nova_compute[230010]:   </seclabel>
Nov 24 09:59:59 compute-1 nova_compute[230010]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Nov 24 09:59:59 compute-1 nova_compute[230010]:     <label>+107:+107</label>
Nov 24 09:59:59 compute-1 nova_compute[230010]:     <imagelabel>+107:+107</imagelabel>
Nov 24 09:59:59 compute-1 nova_compute[230010]:   </seclabel>
Nov 24 09:59:59 compute-1 nova_compute[230010]: </domain>
Nov 24 09:59:59 compute-1 nova_compute[230010]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Nov 24 09:59:59 compute-1 nova_compute[230010]: 2025-11-24 09:59:59.735 230014 INFO nova.virt.libvirt.driver [None req-f3ac20a1-7545-43d3-b997-341d1779d8bb 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Successfully detached device tap2ad41fbf-b7 from instance 62465e3c-a372-4121-8a2e-5e10d1c3faf6 from the persistent domain config.
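The INFO line above and the DEBUG line below show Nova's two-phase interface detach: the device is first removed from the persistent domain definition, and only then from the live domain, with up to eight retries (the "(1/8)" counter that follows). A minimal sketch of the underlying libvirt calls, assuming a local qemu:///system connection and the domain/interface names taken from this log; Nova itself goes through its own Guest/Host wrappers rather than calling libvirt directly:

    import libvirt

    conn = libvirt.open('qemu:///system')          # assumed local hypervisor
    dom = conn.lookupByName('instance-00000006')   # domain name from this log

    IFACE_XML = """
    <interface type="ethernet">
      <mac address="fa:16:3e:df:72:0f"/>
      <model type="virtio"/>
      <driver name="vhost" rx_queue_size="512"/>
      <mtu size="1442"/>
      <target dev="tap2ad41fbf-b7"/>
    </interface>
    """

    # Phase 1: drop the interface from the persistent (inactive) definition.
    dom.detachDeviceFlags(IFACE_XML, libvirt.VIR_DOMAIN_AFFECT_CONFIG)

    # Phase 2: ask the running guest to release it. This call is asynchronous;
    # completion is signalled later by a DEVICE_REMOVED event (see below).
    dom.detachDeviceFlags(IFACE_XML, libvirt.VIR_DOMAIN_AFFECT_LIVE)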
Nov 24 09:59:59 compute-1 nova_compute[230010]: 2025-11-24 09:59:59.736 230014 DEBUG nova.virt.libvirt.driver [None req-f3ac20a1-7545-43d3-b997-341d1779d8bb 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] (1/8): Attempting to detach device tap2ad41fbf-b7 with device alias net1 from instance 62465e3c-a372-4121-8a2e-5e10d1c3faf6 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Nov 24 09:59:59 compute-1 nova_compute[230010]: 2025-11-24 09:59:59.737 230014 DEBUG nova.virt.libvirt.guest [None req-f3ac20a1-7545-43d3-b997-341d1779d8bb 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] detach device xml: <interface type="ethernet">
Nov 24 09:59:59 compute-1 nova_compute[230010]:   <mac address="fa:16:3e:df:72:0f"/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:   <model type="virtio"/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:   <driver name="vhost" rx_queue_size="512"/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:   <mtu size="1442"/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:   <target dev="tap2ad41fbf-b7"/>
Nov 24 09:59:59 compute-1 nova_compute[230010]: </interface>
Nov 24 09:59:59 compute-1 nova_compute[230010]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Nov 24 09:59:59 compute-1 kernel: tap2ad41fbf-b7 (unregistering): left promiscuous mode
Nov 24 09:59:59 compute-1 NetworkManager[48870]: <info>  [1763978399.7959] device (tap2ad41fbf-b7): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 24 09:59:59 compute-1 ovn_controller[132966]: 2025-11-24T09:59:59Z|00080|binding|INFO|Releasing lport 2ad41fbf-b749-4394-9d14-483c127ff44c from this chassis (sb_readonly=0)
Nov 24 09:59:59 compute-1 nova_compute[230010]: 2025-11-24 09:59:59.803 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:59:59 compute-1 ovn_controller[132966]: 2025-11-24T09:59:59Z|00081|binding|INFO|Setting lport 2ad41fbf-b749-4394-9d14-483c127ff44c down in Southbound
Nov 24 09:59:59 compute-1 ovn_controller[132966]: 2025-11-24T09:59:59Z|00082|binding|INFO|Removing iface tap2ad41fbf-b7 ovn-installed in OVS
Nov 24 09:59:59 compute-1 nova_compute[230010]: 2025-11-24 09:59:59.805 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:59:59 compute-1 nova_compute[230010]: 2025-11-24 09:59:59.814 230014 DEBUG nova.virt.libvirt.driver [None req-7781f489-90a3-45f2-98f2-bf7ccb5c1239 - - - - - -] Received event <DeviceRemovedEvent: 1763978399.813985, 62465e3c-a372-4121-8a2e-5e10d1c3faf6 => net1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Nov 24 09:59:59 compute-1 nova_compute[230010]: 2025-11-24 09:59:59.817 230014 DEBUG nova.virt.libvirt.driver [None req-f3ac20a1-7545-43d3-b997-341d1779d8bb 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Start waiting for the detach event from libvirt for device tap2ad41fbf-b7 with device alias net1 for instance 62465e3c-a372-4121-8a2e-5e10d1c3faf6 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
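The exchange above is the async-detach handshake: the live detach is issued, then the driver blocks until libvirt raises a DEVICE_REMOVED event for alias net1 (here the event actually arrived while the waiter was still being set up, so it was dispatched immediately). A rough sketch of that wait pattern using libvirt-python's default event loop and a threading.Event; the structure is illustrative, not Nova's actual _detach_from_live_and_wait_for_event:

    import threading
    import libvirt

    libvirt.virEventRegisterDefaultImpl()

    def _pump_events():
        # The default libvirt event loop must be driven from its own thread.
        while True:
            libvirt.virEventRunDefaultImpl()

    threading.Thread(target=_pump_events, daemon=True).start()

    conn = libvirt.open('qemu:///system')
    dom = conn.lookupByName('instance-00000006')

    removed = threading.Event()

    def on_device_removed(conn, dom, dev, opaque):
        if dev == 'net1':        # 'dev' is the device alias
            removed.set()

    conn.domainEventRegisterAny(
        dom, libvirt.VIR_DOMAIN_EVENT_ID_DEVICE_REMOVED,
        on_device_removed, None)

    IFACE_XML = ('<interface type="ethernet">'
                 '<mac address="fa:16:3e:df:72:0f"/>'
                 '<target dev="tap2ad41fbf-b7"/></interface>')
    dom.detachDeviceFlags(IFACE_XML, libvirt.VIR_DOMAIN_AFFECT_LIVE)

    if not removed.wait(timeout=20):
        # Nova retries the live detach (up to 8 attempts) before failing.
        raise TimeoutError('no DEVICE_REMOVED event for net1')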
Nov 24 09:59:59 compute-1 nova_compute[230010]: 2025-11-24 09:59:59.817 230014 DEBUG nova.virt.libvirt.guest [None req-f3ac20a1-7545-43d3-b997-341d1779d8bb 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:df:72:0f"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap2ad41fbf-b7"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Nov 24 09:59:59 compute-1 nova_compute[230010]: 2025-11-24 09:59:59.821 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:59:59 compute-1 nova_compute[230010]: 2025-11-24 09:59:59.822 230014 DEBUG nova.virt.libvirt.guest [None req-f3ac20a1-7545-43d3-b997-341d1779d8bb 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:df:72:0f"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap2ad41fbf-b7"/></interface>not found in domain: <domain type='kvm' id='4'>
Nov 24 09:59:59 compute-1 nova_compute[230010]:   <name>instance-00000006</name>
Nov 24 09:59:59 compute-1 nova_compute[230010]:   <uuid>62465e3c-a372-4121-8a2e-5e10d1c3faf6</uuid>
Nov 24 09:59:59 compute-1 nova_compute[230010]:   <metadata>
Nov 24 09:59:59 compute-1 nova_compute[230010]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 24 09:59:59 compute-1 nova_compute[230010]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:   <nova:name>tempest-TestNetworkBasicOps-server-1468987490</nova:name>
Nov 24 09:59:59 compute-1 nova_compute[230010]:   <nova:creationTime>2025-11-24 09:58:35</nova:creationTime>
Nov 24 09:59:59 compute-1 nova_compute[230010]:   <nova:flavor name="m1.nano">
Nov 24 09:59:59 compute-1 nova_compute[230010]:     <nova:memory>128</nova:memory>
Nov 24 09:59:59 compute-1 nova_compute[230010]:     <nova:disk>1</nova:disk>
Nov 24 09:59:59 compute-1 nova_compute[230010]:     <nova:swap>0</nova:swap>
Nov 24 09:59:59 compute-1 nova_compute[230010]:     <nova:ephemeral>0</nova:ephemeral>
Nov 24 09:59:59 compute-1 nova_compute[230010]:     <nova:vcpus>1</nova:vcpus>
Nov 24 09:59:59 compute-1 nova_compute[230010]:   </nova:flavor>
Nov 24 09:59:59 compute-1 nova_compute[230010]:   <nova:owner>
Nov 24 09:59:59 compute-1 nova_compute[230010]:     <nova:user uuid="43f79ff3105e4372a3c095e8057d4f1f">tempest-TestNetworkBasicOps-1844071378-project-member</nova:user>
Nov 24 09:59:59 compute-1 nova_compute[230010]:     <nova:project uuid="94d069fc040647d5a6e54894eec915fe">tempest-TestNetworkBasicOps-1844071378</nova:project>
Nov 24 09:59:59 compute-1 nova_compute[230010]:   </nova:owner>
Nov 24 09:59:59 compute-1 nova_compute[230010]:   <nova:root type="image" uuid="6ef14bdf-4f04-4400-8040-4409d9d5271e"/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:   <nova:ports>
Nov 24 09:59:59 compute-1 nova_compute[230010]:     <nova:port uuid="bf41c673-482b-42e3-ac98-475b716fa0e9">
Nov 24 09:59:59 compute-1 nova_compute[230010]:       <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:     </nova:port>
Nov 24 09:59:59 compute-1 nova_compute[230010]:     <nova:port uuid="2ad41fbf-b749-4394-9d14-483c127ff44c">
Nov 24 09:59:59 compute-1 nova_compute[230010]:       <nova:ip type="fixed" address="10.100.0.24" ipVersion="4"/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:     </nova:port>
Nov 24 09:59:59 compute-1 nova_compute[230010]:   </nova:ports>
Nov 24 09:59:59 compute-1 nova_compute[230010]: </nova:instance>
Nov 24 09:59:59 compute-1 nova_compute[230010]:   </metadata>
Nov 24 09:59:59 compute-1 nova_compute[230010]:   <memory unit='KiB'>131072</memory>
Nov 24 09:59:59 compute-1 nova_compute[230010]:   <currentMemory unit='KiB'>131072</currentMemory>
Nov 24 09:59:59 compute-1 nova_compute[230010]:   <vcpu placement='static'>1</vcpu>
Nov 24 09:59:59 compute-1 nova_compute[230010]:   <resource>
Nov 24 09:59:59 compute-1 nova_compute[230010]:     <partition>/machine</partition>
Nov 24 09:59:59 compute-1 nova_compute[230010]:   </resource>
Nov 24 09:59:59 compute-1 nova_compute[230010]:   <sysinfo type='smbios'>
Nov 24 09:59:59 compute-1 nova_compute[230010]:     <system>
Nov 24 09:59:59 compute-1 nova_compute[230010]:       <entry name='manufacturer'>RDO</entry>
Nov 24 09:59:59 compute-1 nova_compute[230010]:       <entry name='product'>OpenStack Compute</entry>
Nov 24 09:59:59 compute-1 nova_compute[230010]:       <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 24 09:59:59 compute-1 nova_compute[230010]:       <entry name='serial'>62465e3c-a372-4121-8a2e-5e10d1c3faf6</entry>
Nov 24 09:59:59 compute-1 nova_compute[230010]:       <entry name='uuid'>62465e3c-a372-4121-8a2e-5e10d1c3faf6</entry>
Nov 24 09:59:59 compute-1 nova_compute[230010]:       <entry name='family'>Virtual Machine</entry>
Nov 24 09:59:59 compute-1 nova_compute[230010]:     </system>
Nov 24 09:59:59 compute-1 nova_compute[230010]:   </sysinfo>
Nov 24 09:59:59 compute-1 nova_compute[230010]:   <os>
Nov 24 09:59:59 compute-1 nova_compute[230010]:     <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Nov 24 09:59:59 compute-1 nova_compute[230010]:     <boot dev='hd'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:     <smbios mode='sysinfo'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:   </os>
Nov 24 09:59:59 compute-1 nova_compute[230010]:   <features>
Nov 24 09:59:59 compute-1 nova_compute[230010]:     <acpi/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:     <apic/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:     <vmcoreinfo state='on'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:   </features>
Nov 24 09:59:59 compute-1 nova_compute[230010]:   <cpu mode='custom' match='exact' check='full'>
Nov 24 09:59:59 compute-1 nova_compute[230010]:     <model fallback='forbid'>EPYC-Rome</model>
Nov 24 09:59:59 compute-1 nova_compute[230010]:     <vendor>AMD</vendor>
Nov 24 09:59:59 compute-1 nova_compute[230010]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:     <feature policy='require' name='x2apic'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:     <feature policy='require' name='tsc-deadline'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:     <feature policy='require' name='hypervisor'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:     <feature policy='require' name='tsc_adjust'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:     <feature policy='require' name='spec-ctrl'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:     <feature policy='require' name='stibp'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:     <feature policy='require' name='ssbd'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:     <feature policy='require' name='cmp_legacy'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:     <feature policy='require' name='overflow-recov'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:     <feature policy='require' name='succor'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:     <feature policy='require' name='ibrs'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:     <feature policy='require' name='amd-ssbd'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:     <feature policy='require' name='virt-ssbd'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:     <feature policy='disable' name='lbrv'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:     <feature policy='disable' name='tsc-scale'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:     <feature policy='disable' name='vmcb-clean'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:     <feature policy='disable' name='flushbyasid'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:     <feature policy='disable' name='pause-filter'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:     <feature policy='disable' name='pfthreshold'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:     <feature policy='disable' name='svme-addr-chk'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:     <feature policy='require' name='lfence-always-serializing'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:     <feature policy='disable' name='xsaves'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:     <feature policy='disable' name='svm'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:     <feature policy='require' name='topoext'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:     <feature policy='disable' name='npt'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:     <feature policy='disable' name='nrip-save'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:   </cpu>
Nov 24 09:59:59 compute-1 nova_compute[230010]:   <clock offset='utc'>
Nov 24 09:59:59 compute-1 nova_compute[230010]:     <timer name='pit' tickpolicy='delay'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:     <timer name='rtc' tickpolicy='catchup'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:     <timer name='hpet' present='no'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:   </clock>
Nov 24 09:59:59 compute-1 nova_compute[230010]:   <on_poweroff>destroy</on_poweroff>
Nov 24 09:59:59 compute-1 nova_compute[230010]:   <on_reboot>restart</on_reboot>
Nov 24 09:59:59 compute-1 nova_compute[230010]:   <on_crash>destroy</on_crash>
Nov 24 09:59:59 compute-1 nova_compute[230010]:   <devices>
Nov 24 09:59:59 compute-1 nova_compute[230010]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 24 09:59:59 compute-1 nova_compute[230010]:     <disk type='network' device='disk'>
Nov 24 09:59:59 compute-1 nova_compute[230010]:       <driver name='qemu' type='raw' cache='none'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:       <auth username='openstack'>
Nov 24 09:59:59 compute-1 nova_compute[230010]:         <secret type='ceph' uuid='84a084c3-61a7-5de7-8207-1f88efa59a64'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:       </auth>
Nov 24 09:59:59 compute-1 nova_compute[230010]:       <source protocol='rbd' name='vms/62465e3c-a372-4121-8a2e-5e10d1c3faf6_disk' index='2'>
Nov 24 09:59:59 compute-1 nova_compute[230010]:         <host name='192.168.122.100' port='6789'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:         <host name='192.168.122.102' port='6789'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:         <host name='192.168.122.101' port='6789'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:       </source>
Nov 24 09:59:59 compute-1 nova_compute[230010]:       <target dev='vda' bus='virtio'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:       <alias name='virtio-disk0'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:     </disk>
Nov 24 09:59:59 compute-1 nova_compute[230010]:     <disk type='network' device='cdrom'>
Nov 24 09:59:59 compute-1 nova_compute[230010]:       <driver name='qemu' type='raw' cache='none'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:       <auth username='openstack'>
Nov 24 09:59:59 compute-1 nova_compute[230010]:         <secret type='ceph' uuid='84a084c3-61a7-5de7-8207-1f88efa59a64'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:       </auth>
Nov 24 09:59:59 compute-1 nova_compute[230010]:       <source protocol='rbd' name='vms/62465e3c-a372-4121-8a2e-5e10d1c3faf6_disk.config' index='1'>
Nov 24 09:59:59 compute-1 nova_compute[230010]:         <host name='192.168.122.100' port='6789'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:         <host name='192.168.122.102' port='6789'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:         <host name='192.168.122.101' port='6789'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:       </source>
Nov 24 09:59:59 compute-1 nova_compute[230010]:       <target dev='sda' bus='sata'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:       <readonly/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:       <alias name='sata0-0-0'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:     </disk>
Nov 24 09:59:59 compute-1 nova_compute[230010]:     <controller type='pci' index='0' model='pcie-root'>
Nov 24 09:59:59 compute-1 nova_compute[230010]:       <alias name='pcie.0'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:     </controller>
Nov 24 09:59:59 compute-1 nova_compute[230010]:     <controller type='pci' index='1' model='pcie-root-port'>
Nov 24 09:59:59 compute-1 nova_compute[230010]:       <model name='pcie-root-port'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:       <target chassis='1' port='0x10'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:       <alias name='pci.1'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:     </controller>
Nov 24 09:59:59 compute-1 nova_compute[230010]:     <controller type='pci' index='2' model='pcie-root-port'>
Nov 24 09:59:59 compute-1 nova_compute[230010]:       <model name='pcie-root-port'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:       <target chassis='2' port='0x11'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:       <alias name='pci.2'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:     </controller>
Nov 24 09:59:59 compute-1 nova_compute[230010]:     <controller type='pci' index='3' model='pcie-root-port'>
Nov 24 09:59:59 compute-1 nova_compute[230010]:       <model name='pcie-root-port'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:       <target chassis='3' port='0x12'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:       <alias name='pci.3'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:     </controller>
Nov 24 09:59:59 compute-1 nova_compute[230010]:     <controller type='pci' index='4' model='pcie-root-port'>
Nov 24 09:59:59 compute-1 nova_compute[230010]:       <model name='pcie-root-port'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:       <target chassis='4' port='0x13'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:       <alias name='pci.4'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:     </controller>
Nov 24 09:59:59 compute-1 nova_compute[230010]:     <controller type='pci' index='5' model='pcie-root-port'>
Nov 24 09:59:59 compute-1 nova_compute[230010]:       <model name='pcie-root-port'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:       <target chassis='5' port='0x14'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:       <alias name='pci.5'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:     </controller>
Nov 24 09:59:59 compute-1 nova_compute[230010]:     <controller type='pci' index='6' model='pcie-root-port'>
Nov 24 09:59:59 compute-1 nova_compute[230010]:       <model name='pcie-root-port'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:       <target chassis='6' port='0x15'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:       <alias name='pci.6'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:     </controller>
Nov 24 09:59:59 compute-1 nova_compute[230010]:     <controller type='pci' index='7' model='pcie-root-port'>
Nov 24 09:59:59 compute-1 nova_compute[230010]:       <model name='pcie-root-port'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:       <target chassis='7' port='0x16'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:       <alias name='pci.7'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:     </controller>
Nov 24 09:59:59 compute-1 nova_compute[230010]:     <controller type='pci' index='8' model='pcie-root-port'>
Nov 24 09:59:59 compute-1 nova_compute[230010]:       <model name='pcie-root-port'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:       <target chassis='8' port='0x17'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:       <alias name='pci.8'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:     </controller>
Nov 24 09:59:59 compute-1 nova_compute[230010]:     <controller type='pci' index='9' model='pcie-root-port'>
Nov 24 09:59:59 compute-1 nova_compute[230010]:       <model name='pcie-root-port'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:       <target chassis='9' port='0x18'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:       <alias name='pci.9'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:     </controller>
Nov 24 09:59:59 compute-1 nova_compute[230010]:     <controller type='pci' index='10' model='pcie-root-port'>
Nov 24 09:59:59 compute-1 nova_compute[230010]:       <model name='pcie-root-port'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:       <target chassis='10' port='0x19'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:       <alias name='pci.10'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:     </controller>
Nov 24 09:59:59 compute-1 nova_compute[230010]:     <controller type='pci' index='11' model='pcie-root-port'>
Nov 24 09:59:59 compute-1 nova_compute[230010]:       <model name='pcie-root-port'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:       <target chassis='11' port='0x1a'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:       <alias name='pci.11'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:     </controller>
Nov 24 09:59:59 compute-1 nova_compute[230010]:     <controller type='pci' index='12' model='pcie-root-port'>
Nov 24 09:59:59 compute-1 nova_compute[230010]:       <model name='pcie-root-port'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:       <target chassis='12' port='0x1b'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:       <alias name='pci.12'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:     </controller>
Nov 24 09:59:59 compute-1 nova_compute[230010]:     <controller type='pci' index='13' model='pcie-root-port'>
Nov 24 09:59:59 compute-1 nova_compute[230010]:       <model name='pcie-root-port'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:       <target chassis='13' port='0x1c'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:       <alias name='pci.13'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:     </controller>
Nov 24 09:59:59 compute-1 nova_compute[230010]:     <controller type='pci' index='14' model='pcie-root-port'>
Nov 24 09:59:59 compute-1 nova_compute[230010]:       <model name='pcie-root-port'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:       <target chassis='14' port='0x1d'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:       <alias name='pci.14'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:     </controller>
Nov 24 09:59:59 compute-1 nova_compute[230010]:     <controller type='pci' index='15' model='pcie-root-port'>
Nov 24 09:59:59 compute-1 nova_compute[230010]:       <model name='pcie-root-port'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:       <target chassis='15' port='0x1e'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:       <alias name='pci.15'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:     </controller>
Nov 24 09:59:59 compute-1 nova_compute[230010]:     <controller type='pci' index='16' model='pcie-root-port'>
Nov 24 09:59:59 compute-1 nova_compute[230010]:       <model name='pcie-root-port'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:       <target chassis='16' port='0x1f'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:       <alias name='pci.16'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:     </controller>
Nov 24 09:59:59 compute-1 nova_compute[230010]:     <controller type='pci' index='17' model='pcie-root-port'>
Nov 24 09:59:59 compute-1 nova_compute[230010]:       <model name='pcie-root-port'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:       <target chassis='17' port='0x20'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:       <alias name='pci.17'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:     </controller>
Nov 24 09:59:59 compute-1 nova_compute[230010]:     <controller type='pci' index='18' model='pcie-root-port'>
Nov 24 09:59:59 compute-1 nova_compute[230010]:       <model name='pcie-root-port'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:       <target chassis='18' port='0x21'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:       <alias name='pci.18'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:     </controller>
Nov 24 09:59:59 compute-1 nova_compute[230010]:     <controller type='pci' index='19' model='pcie-root-port'>
Nov 24 09:59:59 compute-1 nova_compute[230010]:       <model name='pcie-root-port'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:       <target chassis='19' port='0x22'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:       <alias name='pci.19'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:     </controller>
Nov 24 09:59:59 compute-1 nova_compute[230010]:     <controller type='pci' index='20' model='pcie-root-port'>
Nov 24 09:59:59 compute-1 nova_compute[230010]:       <model name='pcie-root-port'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:       <target chassis='20' port='0x23'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:       <alias name='pci.20'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:     </controller>
Nov 24 09:59:59 compute-1 nova_compute[230010]:     <controller type='pci' index='21' model='pcie-root-port'>
Nov 24 09:59:59 compute-1 nova_compute[230010]:       <model name='pcie-root-port'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:       <target chassis='21' port='0x24'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:       <alias name='pci.21'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:     </controller>
Nov 24 09:59:59 compute-1 nova_compute[230010]:     <controller type='pci' index='22' model='pcie-root-port'>
Nov 24 09:59:59 compute-1 nova_compute[230010]:       <model name='pcie-root-port'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:       <target chassis='22' port='0x25'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:       <alias name='pci.22'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:     </controller>
Nov 24 09:59:59 compute-1 nova_compute[230010]:     <controller type='pci' index='23' model='pcie-root-port'>
Nov 24 09:59:59 compute-1 nova_compute[230010]:       <model name='pcie-root-port'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:       <target chassis='23' port='0x26'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:       <alias name='pci.23'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:     </controller>
Nov 24 09:59:59 compute-1 nova_compute[230010]:     <controller type='pci' index='24' model='pcie-root-port'>
Nov 24 09:59:59 compute-1 nova_compute[230010]:       <model name='pcie-root-port'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:       <target chassis='24' port='0x27'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:       <alias name='pci.24'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:     </controller>
Nov 24 09:59:59 compute-1 nova_compute[230010]:     <controller type='pci' index='25' model='pcie-root-port'>
Nov 24 09:59:59 compute-1 nova_compute[230010]:       <model name='pcie-root-port'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:       <target chassis='25' port='0x28'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:       <alias name='pci.25'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:     </controller>
Nov 24 09:59:59 compute-1 nova_compute[230010]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Nov 24 09:59:59 compute-1 nova_compute[230010]:       <model name='pcie-pci-bridge'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:       <alias name='pci.26'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:     </controller>
Nov 24 09:59:59 compute-1 nova_compute[230010]:     <controller type='usb' index='0' model='piix3-uhci'>
Nov 24 09:59:59 compute-1 nova_compute[230010]:       <alias name='usb'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:     </controller>
Nov 24 09:59:59 compute-1 nova_compute[230010]:     <controller type='sata' index='0'>
Nov 24 09:59:59 compute-1 nova_compute[230010]:       <alias name='ide'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:     </controller>
Nov 24 09:59:59 compute-1 nova_compute[230010]:     <interface type='ethernet'>
Nov 24 09:59:59 compute-1 nova_compute[230010]:       <mac address='fa:16:3e:99:a7:ce'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:       <target dev='tapbf41c673-48'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:       <model type='virtio'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:       <driver name='vhost' rx_queue_size='512'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:       <mtu size='1442'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:       <alias name='net0'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:     </interface>
Nov 24 09:59:59 compute-1 nova_compute[230010]:     <serial type='pty'>
Nov 24 09:59:59 compute-1 nova_compute[230010]:       <source path='/dev/pts/0'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:       <log file='/var/lib/nova/instances/62465e3c-a372-4121-8a2e-5e10d1c3faf6/console.log' append='off'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:       <target type='isa-serial' port='0'>
Nov 24 09:59:59 compute-1 nova_compute[230010]:         <model name='isa-serial'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:       </target>
Nov 24 09:59:59 compute-1 nova_compute[230010]:       <alias name='serial0'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:     </serial>
Nov 24 09:59:59 compute-1 nova_compute[230010]:     <console type='pty' tty='/dev/pts/0'>
Nov 24 09:59:59 compute-1 nova_compute[230010]:       <source path='/dev/pts/0'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:       <log file='/var/lib/nova/instances/62465e3c-a372-4121-8a2e-5e10d1c3faf6/console.log' append='off'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:       <target type='serial' port='0'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:       <alias name='serial0'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:     </console>
Nov 24 09:59:59 compute-1 nova_compute[230010]:     <input type='tablet' bus='usb'>
Nov 24 09:59:59 compute-1 nova_compute[230010]:       <alias name='input0'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:       <address type='usb' bus='0' port='1'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:     </input>
Nov 24 09:59:59 compute-1 nova_compute[230010]:     <input type='mouse' bus='ps2'>
Nov 24 09:59:59 compute-1 nova_compute[230010]:       <alias name='input1'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:     </input>
Nov 24 09:59:59 compute-1 nova_compute[230010]:     <input type='keyboard' bus='ps2'>
Nov 24 09:59:59 compute-1 nova_compute[230010]:       <alias name='input2'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:     </input>
Nov 24 09:59:59 compute-1 nova_compute[230010]:     <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Nov 24 09:59:59 compute-1 nova_compute[230010]:       <listen type='address' address='::0'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:     </graphics>
Nov 24 09:59:59 compute-1 nova_compute[230010]:     <audio id='1' type='none'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:     <video>
Nov 24 09:59:59 compute-1 nova_compute[230010]:       <model type='virtio' heads='1' primary='yes'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:       <alias name='video0'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:     </video>
Nov 24 09:59:59 compute-1 nova_compute[230010]:     <watchdog model='itco' action='reset'>
Nov 24 09:59:59 compute-1 nova_compute[230010]:       <alias name='watchdog0'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:     </watchdog>
Nov 24 09:59:59 compute-1 nova_compute[230010]:     <memballoon model='virtio'>
Nov 24 09:59:59 compute-1 nova_compute[230010]:       <stats period='10'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:       <alias name='balloon0'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:     </memballoon>
Nov 24 09:59:59 compute-1 nova_compute[230010]:     <rng model='virtio'>
Nov 24 09:59:59 compute-1 nova_compute[230010]:       <backend model='random'>/dev/urandom</backend>
Nov 24 09:59:59 compute-1 nova_compute[230010]:       <alias name='rng0'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:     </rng>
Nov 24 09:59:59 compute-1 nova_compute[230010]:   </devices>
Nov 24 09:59:59 compute-1 nova_compute[230010]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Nov 24 09:59:59 compute-1 nova_compute[230010]:     <label>system_u:system_r:svirt_t:s0:c516,c926</label>
Nov 24 09:59:59 compute-1 nova_compute[230010]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c516,c926</imagelabel>
Nov 24 09:59:59 compute-1 nova_compute[230010]:   </seclabel>
Nov 24 09:59:59 compute-1 nova_compute[230010]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Nov 24 09:59:59 compute-1 nova_compute[230010]:     <label>+107:+107</label>
Nov 24 09:59:59 compute-1 nova_compute[230010]:     <imagelabel>+107:+107</imagelabel>
Nov 24 09:59:59 compute-1 nova_compute[230010]:   </seclabel>
Nov 24 09:59:59 compute-1 nova_compute[230010]: </domain>
Nov 24 09:59:59 compute-1 nova_compute[230010]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Nov 24 09:59:59 compute-1 nova_compute[230010]: 2025-11-24 09:59:59.822 230014 INFO nova.virt.libvirt.driver [None req-f3ac20a1-7545-43d3-b997-341d1779d8bb 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Successfully detached device tap2ad41fbf-b7 from instance 62465e3c-a372-4121-8a2e-5e10d1c3faf6 from the live domain config.
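The "not found in domain" dump above is the success condition, not an error: after the detach, get_interface_by_cfg re-reads the live XML, fails to match the interface config, and the driver declares the device gone. A rough standalone equivalent of that check, matching on MAC address with the standard-library XML parser; the helper name here is made up for illustration:

    import xml.etree.ElementTree as ET

    def interface_present(domain_xml: str, mac: str) -> bool:
        """Return True if any <interface> in the domain XML carries `mac`."""
        root = ET.fromstring(domain_xml)
        for iface in root.findall('./devices/interface'):
            mac_el = iface.find('mac')
            if mac_el is not None and mac_el.get('address') == mac:
                return True
        return False

    # With a live connection, dom.XMLDesc(0) supplies domain_xml; after the
    # detach above this returns False for 'fa:16:3e:df:72:0f'.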
Nov 24 09:59:59 compute-1 nova_compute[230010]: 2025-11-24 09:59:59.823 230014 DEBUG nova.virt.libvirt.vif [None req-f3ac20a1-7545-43d3-b997-341d1779d8bb 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-24T09:57:56Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1468987490',display_name='tempest-TestNetworkBasicOps-server-1468987490',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1468987490',id=6,image_ref='6ef14bdf-4f04-4400-8040-4409d9d5271e',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJeLeNtgMDECCA396nl5/z6TsnAPH3kX9ECWzaWuLvptXvMaJaj/WlHKUFyFRR30PurvGrDvNN2g1Ij1pTu0Su2H0Am0Z6Y5TdOjAAQXOQr2HISwvDDFzD9t0aaelZEbhw==',key_name='tempest-TestNetworkBasicOps-1307688110',keypairs=<?>,launch_index=0,launched_at=2025-11-24T09:58:05Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='94d069fc040647d5a6e54894eec915fe',ramdisk_id='',reservation_id='r-sy1yuug7',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6ef14bdf-4f04-4400-8040-4409d9d5271e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1844071378',owner_user_name='tempest-TestNetworkBasicOps-1844071378-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-24T09:58:05Z,user_data=None,user_id='43f79ff3105e4372a3c095e8057d4f1f',uuid=62465e3c-a372-4121-8a2e-5e10d1c3faf6,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "2ad41fbf-b749-4394-9d14-483c127ff44c", "address": "fa:16:3e:df:72:0f", "network": {"id": "cbb18554-4df6-4004-8b94-6d2a9b50722d", "bridge": "br-int", "label": "tempest-network-smoke--1864982359", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.24", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94d069fc040647d5a6e54894eec915fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2ad41fbf-b7", "ovs_interfaceid": "2ad41fbf-b749-4394-9d14-483c127ff44c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 24 09:59:59 compute-1 nova_compute[230010]: 2025-11-24 09:59:59.823 230014 DEBUG nova.network.os_vif_util [None req-f3ac20a1-7545-43d3-b997-341d1779d8bb 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Converting VIF {"id": "2ad41fbf-b749-4394-9d14-483c127ff44c", "address": "fa:16:3e:df:72:0f", "network": {"id": "cbb18554-4df6-4004-8b94-6d2a9b50722d", "bridge": "br-int", "label": "tempest-network-smoke--1864982359", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.24", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94d069fc040647d5a6e54894eec915fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2ad41fbf-b7", "ovs_interfaceid": "2ad41fbf-b749-4394-9d14-483c127ff44c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 24 09:59:59 compute-1 nova_compute[230010]: 2025-11-24 09:59:59.824 230014 DEBUG nova.network.os_vif_util [None req-f3ac20a1-7545-43d3-b997-341d1779d8bb 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:df:72:0f,bridge_name='br-int',has_traffic_filtering=True,id=2ad41fbf-b749-4394-9d14-483c127ff44c,network=Network(cbb18554-4df6-4004-8b94-6d2a9b50722d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2ad41fbf-b7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 24 09:59:59 compute-1 nova_compute[230010]: 2025-11-24 09:59:59.824 230014 DEBUG os_vif [None req-f3ac20a1-7545-43d3-b997-341d1779d8bb 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:df:72:0f,bridge_name='br-int',has_traffic_filtering=True,id=2ad41fbf-b749-4394-9d14-483c127ff44c,network=Network(cbb18554-4df6-4004-8b94-6d2a9b50722d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2ad41fbf-b7') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 24 09:59:59 compute-1 nova_compute[230010]: 2025-11-24 09:59:59.827 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:59:59 compute-1 nova_compute[230010]: 2025-11-24 09:59:59.827 230014 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2ad41fbf-b7, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 24 09:59:59 compute-1 nova_compute[230010]: 2025-11-24 09:59:59.828 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:59:59 compute-1 nova_compute[230010]: 2025-11-24 09:59:59.830 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 24 09:59:59 compute-1 nova_compute[230010]: 2025-11-24 09:59:59.831 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:59:59 compute-1 nova_compute[230010]: 2025-11-24 09:59:59.833 230014 INFO os_vif [None req-f3ac20a1-7545-43d3-b997-341d1779d8bb 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:df:72:0f,bridge_name='br-int',has_traffic_filtering=True,id=2ad41fbf-b749-4394-9d14-483c127ff44c,network=Network(cbb18554-4df6-4004-8b94-6d2a9b50722d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2ad41fbf-b7')
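With the guest side finished, os-vif removes the tap from the integration bridge; the DelPortCommand transaction above is ovsdbapp doing exactly that (the CLI equivalent would be `ovs-vsctl --if-exists del-port br-int tap2ad41fbf-b7`). A minimal standalone sketch of the same operation; the OVSDB socket path is an assumption for a typical local deployment:

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server(
        'unix:/run/openvswitch/db.sock', 'Open_vSwitch')  # assumed endpoint
    conn = connection.Connection(idl=idl, timeout=10)
    ovs = impl_idl.OvsdbIdl(conn)

    # Same semantics as the logged DelPortCommand(..., if_exists=True).
    ovs.del_port('tap2ad41fbf-b7', bridge='br-int',
                 if_exists=True).execute(check_error=True)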
Nov 24 09:59:59 compute-1 nova_compute[230010]: 2025-11-24 09:59:59.834 230014 DEBUG nova.virt.libvirt.guest [None req-f3ac20a1-7545-43d3-b997-341d1779d8bb 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 24 09:59:59 compute-1 nova_compute[230010]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:   <nova:name>tempest-TestNetworkBasicOps-server-1468987490</nova:name>
Nov 24 09:59:59 compute-1 nova_compute[230010]:   <nova:creationTime>2025-11-24 09:59:59</nova:creationTime>
Nov 24 09:59:59 compute-1 nova_compute[230010]:   <nova:flavor name="m1.nano">
Nov 24 09:59:59 compute-1 nova_compute[230010]:     <nova:memory>128</nova:memory>
Nov 24 09:59:59 compute-1 nova_compute[230010]:     <nova:disk>1</nova:disk>
Nov 24 09:59:59 compute-1 nova_compute[230010]:     <nova:swap>0</nova:swap>
Nov 24 09:59:59 compute-1 nova_compute[230010]:     <nova:ephemeral>0</nova:ephemeral>
Nov 24 09:59:59 compute-1 nova_compute[230010]:     <nova:vcpus>1</nova:vcpus>
Nov 24 09:59:59 compute-1 nova_compute[230010]:   </nova:flavor>
Nov 24 09:59:59 compute-1 nova_compute[230010]:   <nova:owner>
Nov 24 09:59:59 compute-1 nova_compute[230010]:     <nova:user uuid="43f79ff3105e4372a3c095e8057d4f1f">tempest-TestNetworkBasicOps-1844071378-project-member</nova:user>
Nov 24 09:59:59 compute-1 nova_compute[230010]:     <nova:project uuid="94d069fc040647d5a6e54894eec915fe">tempest-TestNetworkBasicOps-1844071378</nova:project>
Nov 24 09:59:59 compute-1 nova_compute[230010]:   </nova:owner>
Nov 24 09:59:59 compute-1 nova_compute[230010]:   <nova:root type="image" uuid="6ef14bdf-4f04-4400-8040-4409d9d5271e"/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:   <nova:ports>
Nov 24 09:59:59 compute-1 nova_compute[230010]:     <nova:port uuid="bf41c673-482b-42e3-ac98-475b716fa0e9">
Nov 24 09:59:59 compute-1 nova_compute[230010]:       <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Nov 24 09:59:59 compute-1 nova_compute[230010]:     </nova:port>
Nov 24 09:59:59 compute-1 nova_compute[230010]:   </nova:ports>
Nov 24 09:59:59 compute-1 nova_compute[230010]: </nova:instance>
Nov 24 09:59:59 compute-1 nova_compute[230010]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
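After the unplug, Nova rewrites the instance metadata element so that it lists only the remaining port (bf41c673-...); note the <nova:ports> block above now has a single entry. guest.py's set_metadata boils down to one libvirt call; a sketch with a shortened stand-in for the <nova:instance> document just logged:

    import libvirt

    NOVA_NS = 'http://openstack.org/xmlns/libvirt/nova/1.1'

    conn = libvirt.open('qemu:///system')
    dom = conn.lookupByName('instance-00000006')

    metadata_xml = ('<instance xmlns="%s">'
                    '<name>tempest-TestNetworkBasicOps-server-1468987490</name>'
                    '</instance>' % NOVA_NS)

    # Replace the per-namespace metadata element in both the live and the
    # persistent definition.
    dom.setMetadata(libvirt.VIR_DOMAIN_METADATA_ELEMENT, metadata_xml,
                    'instance', NOVA_NS,
                    libvirt.VIR_DOMAIN_AFFECT_CONFIG |
                    libvirt.VIR_DOMAIN_AFFECT_LIVE)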
Nov 24 10:00:00 compute-1 ceph-mon[80009]: Health detail: HEALTH_WARN 1 failed cephadm daemon(s)
Nov 24 10:00:00 compute-1 ceph-mon[80009]: [WRN] CEPHADM_FAILED_DAEMON: 1 failed cephadm daemon(s)
Nov 24 10:00:00 compute-1 ceph-mon[80009]:     daemon nfs.cephfs.0.0.compute-1.vvoanr on compute-1 is in unknown state
Nov 24 10:00:00 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 10:00:00 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 10:00:00 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:00:00 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:00:00 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:00:00.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:00:01 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:00:01 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:00:01 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:00:01.008 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:00:01 compute-1 ovn_metadata_agent[142331]: 2025-11-24 10:00:01.025 142336 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:df:72:0f 10.100.0.24', 'unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.24/28', 'neutron:device_id': '62465e3c-a372-4121-8a2e-5e10d1c3faf6', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-cbb18554-4df6-4004-8b94-6d2a9b50722d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '94d069fc040647d5a6e54894eec915fe', 'neutron:revision_number': '5', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=58766ea9-d6bf-4e11-9e8a-1652f6f7c4d5, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5c78678ac0>], logical_port=2ad41fbf-b749-4394-9d14-483c127ff44c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5c78678ac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
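The "Matched UPDATE: PortBindingUpdatedEvent" line is ovsdbapp's row-event machinery at work: the metadata agent registers an event for ('update',) on the Port_Binding table and inspects the old row (which here carried up=[True] and a chassis) to detect that the binding went down. A skeletal watcher in the same style; the class body is illustrative, and the real handler lives in neutron.agent.ovn.metadata.agent:

    from ovsdbapp.backend.ovs_idl import event as row_event

    class PortBindingUpdatedWatcher(row_event.RowEvent):
        """Fire when a Port_Binding row's 'up' state changes."""

        def __init__(self):
            # events=('update',), table='Port_Binding', no extra conditions
            super().__init__((self.ROW_UPDATE,), 'Port_Binding', None)

        def match_fn(self, event, row, old):
            # 'old' only carries the columns that changed, so seeing 'up'
            # there means the binding toggled between up and down.
            return hasattr(old, 'up')

        def run(self, event, row, old):
            print('binding %s changed, up=%s' % (row.logical_port, row.up))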
Nov 24 10:00:01 compute-1 ovn_metadata_agent[142331]: 2025-11-24 10:00:01.027 142336 INFO neutron.agent.ovn.metadata.agent [-] Port 2ad41fbf-b749-4394-9d14-483c127ff44c in datapath cbb18554-4df6-4004-8b94-6d2a9b50722d unbound from our chassis
Nov 24 10:00:01 compute-1 ovn_metadata_agent[142331]: 2025-11-24 10:00:01.028 142336 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network cbb18554-4df6-4004-8b94-6d2a9b50722d, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 24 10:00:01 compute-1 ovn_metadata_agent[142331]: 2025-11-24 10:00:01.033 234803 DEBUG oslo.privsep.daemon [-] privsep: reply[80370e02-85d1-43b6-aa06-158cf1cc5b54]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 10:00:01 compute-1 ovn_metadata_agent[142331]: 2025-11-24 10:00:01.035 142336 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-cbb18554-4df6-4004-8b94-6d2a9b50722d namespace which is not needed anymore
Nov 24 10:00:01 compute-1 nova_compute[230010]: 2025-11-24 10:00:01.040 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:00:01 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 10:00:01 compute-1 ceph-mon[80009]: pgmap v996: 353 pgs: 353 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 6.7 KiB/s rd, 2.1 KiB/s wr, 8 op/s
Nov 24 10:00:01 compute-1 neutron-haproxy-ovnmeta-cbb18554-4df6-4004-8b94-6d2a9b50722d[239998]: [NOTICE]   (240002) : haproxy version is 2.8.14-c23fe91
Nov 24 10:00:01 compute-1 neutron-haproxy-ovnmeta-cbb18554-4df6-4004-8b94-6d2a9b50722d[239998]: [NOTICE]   (240002) : path to executable is /usr/sbin/haproxy
Nov 24 10:00:01 compute-1 neutron-haproxy-ovnmeta-cbb18554-4df6-4004-8b94-6d2a9b50722d[239998]: [WARNING]  (240002) : Exiting Master process...
Nov 24 10:00:01 compute-1 neutron-haproxy-ovnmeta-cbb18554-4df6-4004-8b94-6d2a9b50722d[239998]: [ALERT]    (240002) : Current worker (240004) exited with code 143 (Terminated)
Nov 24 10:00:01 compute-1 neutron-haproxy-ovnmeta-cbb18554-4df6-4004-8b94-6d2a9b50722d[239998]: [WARNING]  (240002) : All workers exited. Exiting... (0)
Nov 24 10:00:01 compute-1 systemd[1]: libpod-bafa73c024b9dc95537a27e37980736f3d07e3968334d315c04d285dcb5be79b.scope: Deactivated successfully.
Nov 24 10:00:01 compute-1 podman[240508]: 2025-11-24 10:00:01.178676951 +0000 UTC m=+0.049525312 container died bafa73c024b9dc95537a27e37980736f3d07e3968334d315c04d285dcb5be79b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-cbb18554-4df6-4004-8b94-6d2a9b50722d, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118)
Nov 24 10:00:01 compute-1 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-bafa73c024b9dc95537a27e37980736f3d07e3968334d315c04d285dcb5be79b-userdata-shm.mount: Deactivated successfully.
Nov 24 10:00:01 compute-1 systemd[1]: var-lib-containers-storage-overlay-447107754eda77794034edf91920a06d35d4d1b91593ad5057e2c61b459718a4-merged.mount: Deactivated successfully.
Nov 24 10:00:01 compute-1 podman[240508]: 2025-11-24 10:00:01.237505869 +0000 UTC m=+0.108354200 container cleanup bafa73c024b9dc95537a27e37980736f3d07e3968334d315c04d285dcb5be79b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-cbb18554-4df6-4004-8b94-6d2a9b50722d, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 24 10:00:01 compute-1 systemd[1]: libpod-conmon-bafa73c024b9dc95537a27e37980736f3d07e3968334d315c04d285dcb5be79b.scope: Deactivated successfully.
Nov 24 10:00:01 compute-1 podman[240537]: 2025-11-24 10:00:01.313365965 +0000 UTC m=+0.054347590 container remove bafa73c024b9dc95537a27e37980736f3d07e3968334d315c04d285dcb5be79b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-cbb18554-4df6-4004-8b94-6d2a9b50722d, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251118)
Nov 24 10:00:01 compute-1 ovn_metadata_agent[142331]: 2025-11-24 10:00:01.320 234803 DEBUG oslo.privsep.daemon [-] privsep: reply[e6888668-e64d-4177-87b8-b24216276ee2]: (4, ('Mon Nov 24 10:00:01 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-cbb18554-4df6-4004-8b94-6d2a9b50722d (bafa73c024b9dc95537a27e37980736f3d07e3968334d315c04d285dcb5be79b)\nbafa73c024b9dc95537a27e37980736f3d07e3968334d315c04d285dcb5be79b\nMon Nov 24 10:00:01 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-cbb18554-4df6-4004-8b94-6d2a9b50722d (bafa73c024b9dc95537a27e37980736f3d07e3968334d315c04d285dcb5be79b)\nbafa73c024b9dc95537a27e37980736f3d07e3968334d315c04d285dcb5be79b\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 10:00:01 compute-1 ovn_metadata_agent[142331]: 2025-11-24 10:00:01.323 234803 DEBUG oslo.privsep.daemon [-] privsep: reply[407e496f-f11d-4e2c-b60b-a2a2ea194c5e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 10:00:01 compute-1 ovn_metadata_agent[142331]: 2025-11-24 10:00:01.324 142336 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapcbb18554-40, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 24 10:00:01 compute-1 kernel: tapcbb18554-40: left promiscuous mode
Nov 24 10:00:01 compute-1 nova_compute[230010]: 2025-11-24 10:00:01.326 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:00:01 compute-1 nova_compute[230010]: 2025-11-24 10:00:01.337 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:00:01 compute-1 ovn_metadata_agent[142331]: 2025-11-24 10:00:01.340 234803 DEBUG oslo.privsep.daemon [-] privsep: reply[485323da-2229-4641-858a-2f8eb1209990]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 10:00:01 compute-1 ovn_metadata_agent[142331]: 2025-11-24 10:00:01.363 234803 DEBUG oslo.privsep.daemon [-] privsep: reply[97f3a8af-c7c3-4953-950f-b15712d75e3f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 10:00:01 compute-1 ovn_metadata_agent[142331]: 2025-11-24 10:00:01.364 234803 DEBUG oslo.privsep.daemon [-] privsep: reply[dc36bdf8-6fab-4569-be22-709ca5d184e9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 10:00:01 compute-1 nova_compute[230010]: 2025-11-24 10:00:01.368 230014 DEBUG nova.compute.manager [req-c5d26cc7-b49e-42f4-b3cb-db63a71030de req-726ab591-07ad-42fa-ba14-629ee8e68a4a 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 62465e3c-a372-4121-8a2e-5e10d1c3faf6] Received event network-vif-unplugged-2ad41fbf-b749-4394-9d14-483c127ff44c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 24 10:00:01 compute-1 nova_compute[230010]: 2025-11-24 10:00:01.368 230014 DEBUG oslo_concurrency.lockutils [req-c5d26cc7-b49e-42f4-b3cb-db63a71030de req-726ab591-07ad-42fa-ba14-629ee8e68a4a 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Acquiring lock "62465e3c-a372-4121-8a2e-5e10d1c3faf6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 10:00:01 compute-1 nova_compute[230010]: 2025-11-24 10:00:01.369 230014 DEBUG oslo_concurrency.lockutils [req-c5d26cc7-b49e-42f4-b3cb-db63a71030de req-726ab591-07ad-42fa-ba14-629ee8e68a4a 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Lock "62465e3c-a372-4121-8a2e-5e10d1c3faf6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 10:00:01 compute-1 nova_compute[230010]: 2025-11-24 10:00:01.369 230014 DEBUG oslo_concurrency.lockutils [req-c5d26cc7-b49e-42f4-b3cb-db63a71030de req-726ab591-07ad-42fa-ba14-629ee8e68a4a 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Lock "62465e3c-a372-4121-8a2e-5e10d1c3faf6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 10:00:01 compute-1 nova_compute[230010]: 2025-11-24 10:00:01.369 230014 DEBUG nova.compute.manager [req-c5d26cc7-b49e-42f4-b3cb-db63a71030de req-726ab591-07ad-42fa-ba14-629ee8e68a4a 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 62465e3c-a372-4121-8a2e-5e10d1c3faf6] No waiting events found dispatching network-vif-unplugged-2ad41fbf-b749-4394-9d14-483c127ff44c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 24 10:00:01 compute-1 nova_compute[230010]: 2025-11-24 10:00:01.370 230014 WARNING nova.compute.manager [req-c5d26cc7-b49e-42f4-b3cb-db63a71030de req-726ab591-07ad-42fa-ba14-629ee8e68a4a 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 62465e3c-a372-4121-8a2e-5e10d1c3faf6] Received unexpected event network-vif-unplugged-2ad41fbf-b749-4394-9d14-483c127ff44c for instance with vm_state active and task_state None.
Nov 24 10:00:01 compute-1 ovn_metadata_agent[142331]: 2025-11-24 10:00:01.379 234803 DEBUG oslo.privsep.daemon [-] privsep: reply[2f14dd6e-4873-4592-9c7f-219c9ac5df17]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 425925, 'reachable_time': 29991, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 240553, 'error': None, 'target': 'ovnmeta-cbb18554-4df6-4004-8b94-6d2a9b50722d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 10:00:01 compute-1 ovn_metadata_agent[142331]: 2025-11-24 10:00:01.385 142476 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-cbb18554-4df6-4004-8b94-6d2a9b50722d deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 24 10:00:01 compute-1 ovn_metadata_agent[142331]: 2025-11-24 10:00:01.386 142476 DEBUG oslo.privsep.daemon [-] privsep: reply[a6dc2720-0b84-42d5-8f05-99b4b94c8420]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 10:00:01 compute-1 systemd[1]: run-netns-ovnmeta\x2dcbb18554\x2d4df6\x2d4004\x2d8b94\x2d6d2a9b50722d.mount: Deactivated successfully.
Nov 24 10:00:01 compute-1 nova_compute[230010]: 2025-11-24 10:00:01.503 230014 DEBUG oslo_concurrency.lockutils [None req-f3ac20a1-7545-43d3-b997-341d1779d8bb 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Acquiring lock "refresh_cache-62465e3c-a372-4121-8a2e-5e10d1c3faf6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 24 10:00:01 compute-1 nova_compute[230010]: 2025-11-24 10:00:01.503 230014 DEBUG oslo_concurrency.lockutils [None req-f3ac20a1-7545-43d3-b997-341d1779d8bb 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Acquired lock "refresh_cache-62465e3c-a372-4121-8a2e-5e10d1c3faf6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 24 10:00:01 compute-1 nova_compute[230010]: 2025-11-24 10:00:01.503 230014 DEBUG nova.network.neutron [None req-f3ac20a1-7545-43d3-b997-341d1779d8bb 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 62465e3c-a372-4121-8a2e-5e10d1c3faf6] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 24 10:00:01 compute-1 nova_compute[230010]: 2025-11-24 10:00:01.590 230014 DEBUG nova.compute.manager [req-114dc47e-ec53-4792-86d7-911b83a61110 req-9b1cf8bf-e81d-4a36-9b80-4f948437a1df 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 62465e3c-a372-4121-8a2e-5e10d1c3faf6] Received event network-vif-deleted-2ad41fbf-b749-4394-9d14-483c127ff44c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 24 10:00:01 compute-1 nova_compute[230010]: 2025-11-24 10:00:01.590 230014 INFO nova.compute.manager [req-114dc47e-ec53-4792-86d7-911b83a61110 req-9b1cf8bf-e81d-4a36-9b80-4f948437a1df 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 62465e3c-a372-4121-8a2e-5e10d1c3faf6] Neutron deleted interface 2ad41fbf-b749-4394-9d14-483c127ff44c; detaching it from the instance and deleting it from the info cache
Nov 24 10:00:01 compute-1 nova_compute[230010]: 2025-11-24 10:00:01.590 230014 DEBUG nova.network.neutron [req-114dc47e-ec53-4792-86d7-911b83a61110 req-9b1cf8bf-e81d-4a36-9b80-4f948437a1df 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 62465e3c-a372-4121-8a2e-5e10d1c3faf6] Updating instance_info_cache with network_info: [{"id": "bf41c673-482b-42e3-ac98-475b716fa0e9", "address": "fa:16:3e:99:a7:ce", "network": {"id": "81f18750-9169-4587-b6ca-88a2bbc58afc", "bridge": "br-int", "label": "tempest-network-smoke--1543163911", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.202", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94d069fc040647d5a6e54894eec915fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf41c673-48", "ovs_interfaceid": "bf41c673-482b-42e3-ac98-475b716fa0e9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 24 10:00:01 compute-1 nova_compute[230010]: 2025-11-24 10:00:01.612 230014 DEBUG nova.objects.instance [req-114dc47e-ec53-4792-86d7-911b83a61110 req-9b1cf8bf-e81d-4a36-9b80-4f948437a1df 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Lazy-loading 'system_metadata' on Instance uuid 62465e3c-a372-4121-8a2e-5e10d1c3faf6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 24 10:00:01 compute-1 nova_compute[230010]: 2025-11-24 10:00:01.633 230014 DEBUG nova.objects.instance [req-114dc47e-ec53-4792-86d7-911b83a61110 req-9b1cf8bf-e81d-4a36-9b80-4f948437a1df 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Lazy-loading 'flavor' on Instance uuid 62465e3c-a372-4121-8a2e-5e10d1c3faf6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 24 10:00:01 compute-1 nova_compute[230010]: 2025-11-24 10:00:01.653 230014 DEBUG nova.virt.libvirt.vif [req-114dc47e-ec53-4792-86d7-911b83a61110 req-9b1cf8bf-e81d-4a36-9b80-4f948437a1df 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-24T09:57:56Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1468987490',display_name='tempest-TestNetworkBasicOps-server-1468987490',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1468987490',id=6,image_ref='6ef14bdf-4f04-4400-8040-4409d9d5271e',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJeLeNtgMDECCA396nl5/z6TsnAPH3kX9ECWzaWuLvptXvMaJaj/WlHKUFyFRR30PurvGrDvNN2g1Ij1pTu0Su2H0Am0Z6Y5TdOjAAQXOQr2HISwvDDFzD9t0aaelZEbhw==',key_name='tempest-TestNetworkBasicOps-1307688110',keypairs=<?>,launch_index=0,launched_at=2025-11-24T09:58:05Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata=<?>,migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='94d069fc040647d5a6e54894eec915fe',ramdisk_id='',reservation_id='r-sy1yuug7',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6ef14bdf-4f04-4400-8040-4409d9d5271e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1844071378',owner_user_name='tempest-TestNetworkBasicOps-1844071378-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-24T09:58:05Z,user_data=None,user_id='43f79ff3105e4372a3c095e8057d4f1f',uuid=62465e3c-a372-4121-8a2e-5e10d1c3faf6,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "2ad41fbf-b749-4394-9d14-483c127ff44c", "address": "fa:16:3e:df:72:0f", "network": {"id": "cbb18554-4df6-4004-8b94-6d2a9b50722d", "bridge": "br-int", "label": "tempest-network-smoke--1864982359", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.24", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94d069fc040647d5a6e54894eec915fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2ad41fbf-b7", "ovs_interfaceid": "2ad41fbf-b749-4394-9d14-483c127ff44c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 24 10:00:01 compute-1 nova_compute[230010]: 2025-11-24 10:00:01.654 230014 DEBUG nova.network.os_vif_util [req-114dc47e-ec53-4792-86d7-911b83a61110 req-9b1cf8bf-e81d-4a36-9b80-4f948437a1df 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Converting VIF {"id": "2ad41fbf-b749-4394-9d14-483c127ff44c", "address": "fa:16:3e:df:72:0f", "network": {"id": "cbb18554-4df6-4004-8b94-6d2a9b50722d", "bridge": "br-int", "label": "tempest-network-smoke--1864982359", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.24", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94d069fc040647d5a6e54894eec915fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2ad41fbf-b7", "ovs_interfaceid": "2ad41fbf-b749-4394-9d14-483c127ff44c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 24 10:00:01 compute-1 nova_compute[230010]: 2025-11-24 10:00:01.654 230014 DEBUG nova.network.os_vif_util [req-114dc47e-ec53-4792-86d7-911b83a61110 req-9b1cf8bf-e81d-4a36-9b80-4f948437a1df 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:df:72:0f,bridge_name='br-int',has_traffic_filtering=True,id=2ad41fbf-b749-4394-9d14-483c127ff44c,network=Network(cbb18554-4df6-4004-8b94-6d2a9b50722d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2ad41fbf-b7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 24 10:00:01 compute-1 nova_compute[230010]: 2025-11-24 10:00:01.660 230014 DEBUG nova.virt.libvirt.guest [req-114dc47e-ec53-4792-86d7-911b83a61110 req-9b1cf8bf-e81d-4a36-9b80-4f948437a1df 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:df:72:0f"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap2ad41fbf-b7"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Nov 24 10:00:01 compute-1 nova_compute[230010]: 2025-11-24 10:00:01.666 230014 DEBUG nova.virt.libvirt.guest [req-114dc47e-ec53-4792-86d7-911b83a61110 req-9b1cf8bf-e81d-4a36-9b80-4f948437a1df 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:df:72:0f"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap2ad41fbf-b7"/></interface>not found in domain: <domain type='kvm' id='4'>
Nov 24 10:00:01 compute-1 nova_compute[230010]:   <name>instance-00000006</name>
Nov 24 10:00:01 compute-1 nova_compute[230010]:   <uuid>62465e3c-a372-4121-8a2e-5e10d1c3faf6</uuid>
Nov 24 10:00:01 compute-1 nova_compute[230010]:   <metadata>
Nov 24 10:00:01 compute-1 nova_compute[230010]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 24 10:00:01 compute-1 nova_compute[230010]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:   <nova:name>tempest-TestNetworkBasicOps-server-1468987490</nova:name>
Nov 24 10:00:01 compute-1 nova_compute[230010]:   <nova:creationTime>2025-11-24 09:59:59</nova:creationTime>
Nov 24 10:00:01 compute-1 nova_compute[230010]:   <nova:flavor name="m1.nano">
Nov 24 10:00:01 compute-1 nova_compute[230010]:     <nova:memory>128</nova:memory>
Nov 24 10:00:01 compute-1 nova_compute[230010]:     <nova:disk>1</nova:disk>
Nov 24 10:00:01 compute-1 nova_compute[230010]:     <nova:swap>0</nova:swap>
Nov 24 10:00:01 compute-1 nova_compute[230010]:     <nova:ephemeral>0</nova:ephemeral>
Nov 24 10:00:01 compute-1 nova_compute[230010]:     <nova:vcpus>1</nova:vcpus>
Nov 24 10:00:01 compute-1 nova_compute[230010]:   </nova:flavor>
Nov 24 10:00:01 compute-1 nova_compute[230010]:   <nova:owner>
Nov 24 10:00:01 compute-1 nova_compute[230010]:     <nova:user uuid="43f79ff3105e4372a3c095e8057d4f1f">tempest-TestNetworkBasicOps-1844071378-project-member</nova:user>
Nov 24 10:00:01 compute-1 nova_compute[230010]:     <nova:project uuid="94d069fc040647d5a6e54894eec915fe">tempest-TestNetworkBasicOps-1844071378</nova:project>
Nov 24 10:00:01 compute-1 nova_compute[230010]:   </nova:owner>
Nov 24 10:00:01 compute-1 nova_compute[230010]:   <nova:root type="image" uuid="6ef14bdf-4f04-4400-8040-4409d9d5271e"/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:   <nova:ports>
Nov 24 10:00:01 compute-1 nova_compute[230010]:     <nova:port uuid="bf41c673-482b-42e3-ac98-475b716fa0e9">
Nov 24 10:00:01 compute-1 nova_compute[230010]:       <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:     </nova:port>
Nov 24 10:00:01 compute-1 nova_compute[230010]:   </nova:ports>
Nov 24 10:00:01 compute-1 nova_compute[230010]: </nova:instance>
Nov 24 10:00:01 compute-1 nova_compute[230010]:   </metadata>
Nov 24 10:00:01 compute-1 nova_compute[230010]:   <memory unit='KiB'>131072</memory>
Nov 24 10:00:01 compute-1 nova_compute[230010]:   <currentMemory unit='KiB'>131072</currentMemory>
Nov 24 10:00:01 compute-1 nova_compute[230010]:   <vcpu placement='static'>1</vcpu>
Nov 24 10:00:01 compute-1 nova_compute[230010]:   <resource>
Nov 24 10:00:01 compute-1 nova_compute[230010]:     <partition>/machine</partition>
Nov 24 10:00:01 compute-1 nova_compute[230010]:   </resource>
Nov 24 10:00:01 compute-1 nova_compute[230010]:   <sysinfo type='smbios'>
Nov 24 10:00:01 compute-1 nova_compute[230010]:     <system>
Nov 24 10:00:01 compute-1 nova_compute[230010]:       <entry name='manufacturer'>RDO</entry>
Nov 24 10:00:01 compute-1 nova_compute[230010]:       <entry name='product'>OpenStack Compute</entry>
Nov 24 10:00:01 compute-1 nova_compute[230010]:       <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 24 10:00:01 compute-1 nova_compute[230010]:       <entry name='serial'>62465e3c-a372-4121-8a2e-5e10d1c3faf6</entry>
Nov 24 10:00:01 compute-1 nova_compute[230010]:       <entry name='uuid'>62465e3c-a372-4121-8a2e-5e10d1c3faf6</entry>
Nov 24 10:00:01 compute-1 nova_compute[230010]:       <entry name='family'>Virtual Machine</entry>
Nov 24 10:00:01 compute-1 nova_compute[230010]:     </system>
Nov 24 10:00:01 compute-1 nova_compute[230010]:   </sysinfo>
Nov 24 10:00:01 compute-1 nova_compute[230010]:   <os>
Nov 24 10:00:01 compute-1 nova_compute[230010]:     <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Nov 24 10:00:01 compute-1 nova_compute[230010]:     <boot dev='hd'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:     <smbios mode='sysinfo'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:   </os>
Nov 24 10:00:01 compute-1 nova_compute[230010]:   <features>
Nov 24 10:00:01 compute-1 nova_compute[230010]:     <acpi/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:     <apic/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:     <vmcoreinfo state='on'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:   </features>
Nov 24 10:00:01 compute-1 nova_compute[230010]:   <cpu mode='custom' match='exact' check='full'>
Nov 24 10:00:01 compute-1 nova_compute[230010]:     <model fallback='forbid'>EPYC-Rome</model>
Nov 24 10:00:01 compute-1 nova_compute[230010]:     <vendor>AMD</vendor>
Nov 24 10:00:01 compute-1 nova_compute[230010]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:     <feature policy='require' name='x2apic'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:     <feature policy='require' name='tsc-deadline'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:     <feature policy='require' name='hypervisor'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:     <feature policy='require' name='tsc_adjust'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:     <feature policy='require' name='spec-ctrl'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:     <feature policy='require' name='stibp'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:     <feature policy='require' name='ssbd'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:     <feature policy='require' name='cmp_legacy'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:     <feature policy='require' name='overflow-recov'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:     <feature policy='require' name='succor'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:     <feature policy='require' name='ibrs'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:     <feature policy='require' name='amd-ssbd'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:     <feature policy='require' name='virt-ssbd'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:     <feature policy='disable' name='lbrv'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:     <feature policy='disable' name='tsc-scale'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:     <feature policy='disable' name='vmcb-clean'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:     <feature policy='disable' name='flushbyasid'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:     <feature policy='disable' name='pause-filter'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:     <feature policy='disable' name='pfthreshold'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:     <feature policy='disable' name='svme-addr-chk'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:     <feature policy='require' name='lfence-always-serializing'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:     <feature policy='disable' name='xsaves'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:     <feature policy='disable' name='svm'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:     <feature policy='require' name='topoext'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:     <feature policy='disable' name='npt'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:     <feature policy='disable' name='nrip-save'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:   </cpu>
Nov 24 10:00:01 compute-1 nova_compute[230010]:   <clock offset='utc'>
Nov 24 10:00:01 compute-1 nova_compute[230010]:     <timer name='pit' tickpolicy='delay'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:     <timer name='rtc' tickpolicy='catchup'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:     <timer name='hpet' present='no'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:   </clock>
Nov 24 10:00:01 compute-1 nova_compute[230010]:   <on_poweroff>destroy</on_poweroff>
Nov 24 10:00:01 compute-1 nova_compute[230010]:   <on_reboot>restart</on_reboot>
Nov 24 10:00:01 compute-1 nova_compute[230010]:   <on_crash>destroy</on_crash>
Nov 24 10:00:01 compute-1 nova_compute[230010]:   <devices>
Nov 24 10:00:01 compute-1 nova_compute[230010]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 24 10:00:01 compute-1 nova_compute[230010]:     <disk type='network' device='disk'>
Nov 24 10:00:01 compute-1 nova_compute[230010]:       <driver name='qemu' type='raw' cache='none'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:       <auth username='openstack'>
Nov 24 10:00:01 compute-1 nova_compute[230010]:         <secret type='ceph' uuid='84a084c3-61a7-5de7-8207-1f88efa59a64'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:       </auth>
Nov 24 10:00:01 compute-1 nova_compute[230010]:       <source protocol='rbd' name='vms/62465e3c-a372-4121-8a2e-5e10d1c3faf6_disk' index='2'>
Nov 24 10:00:01 compute-1 nova_compute[230010]:         <host name='192.168.122.100' port='6789'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:         <host name='192.168.122.102' port='6789'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:         <host name='192.168.122.101' port='6789'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:       </source>
Nov 24 10:00:01 compute-1 nova_compute[230010]:       <target dev='vda' bus='virtio'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:       <alias name='virtio-disk0'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:     </disk>
Nov 24 10:00:01 compute-1 nova_compute[230010]:     <disk type='network' device='cdrom'>
Nov 24 10:00:01 compute-1 nova_compute[230010]:       <driver name='qemu' type='raw' cache='none'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:       <auth username='openstack'>
Nov 24 10:00:01 compute-1 nova_compute[230010]:         <secret type='ceph' uuid='84a084c3-61a7-5de7-8207-1f88efa59a64'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:       </auth>
Nov 24 10:00:01 compute-1 nova_compute[230010]:       <source protocol='rbd' name='vms/62465e3c-a372-4121-8a2e-5e10d1c3faf6_disk.config' index='1'>
Nov 24 10:00:01 compute-1 nova_compute[230010]:         <host name='192.168.122.100' port='6789'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:         <host name='192.168.122.102' port='6789'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:         <host name='192.168.122.101' port='6789'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:       </source>
Nov 24 10:00:01 compute-1 nova_compute[230010]:       <target dev='sda' bus='sata'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:       <readonly/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:       <alias name='sata0-0-0'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:     </disk>
Nov 24 10:00:01 compute-1 nova_compute[230010]:     <controller type='pci' index='0' model='pcie-root'>
Nov 24 10:00:01 compute-1 nova_compute[230010]:       <alias name='pcie.0'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:     </controller>
Nov 24 10:00:01 compute-1 nova_compute[230010]:     <controller type='pci' index='1' model='pcie-root-port'>
Nov 24 10:00:01 compute-1 nova_compute[230010]:       <model name='pcie-root-port'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:       <target chassis='1' port='0x10'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:       <alias name='pci.1'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:     </controller>
Nov 24 10:00:01 compute-1 nova_compute[230010]:     <controller type='pci' index='2' model='pcie-root-port'>
Nov 24 10:00:01 compute-1 nova_compute[230010]:       <model name='pcie-root-port'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:       <target chassis='2' port='0x11'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:       <alias name='pci.2'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:     </controller>
Nov 24 10:00:01 compute-1 nova_compute[230010]:     <controller type='pci' index='3' model='pcie-root-port'>
Nov 24 10:00:01 compute-1 nova_compute[230010]:       <model name='pcie-root-port'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:       <target chassis='3' port='0x12'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:       <alias name='pci.3'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:     </controller>
Nov 24 10:00:01 compute-1 nova_compute[230010]:     <controller type='pci' index='4' model='pcie-root-port'>
Nov 24 10:00:01 compute-1 nova_compute[230010]:       <model name='pcie-root-port'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:       <target chassis='4' port='0x13'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:       <alias name='pci.4'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:     </controller>
Nov 24 10:00:01 compute-1 nova_compute[230010]:     <controller type='pci' index='5' model='pcie-root-port'>
Nov 24 10:00:01 compute-1 nova_compute[230010]:       <model name='pcie-root-port'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:       <target chassis='5' port='0x14'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:       <alias name='pci.5'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:     </controller>
Nov 24 10:00:01 compute-1 nova_compute[230010]:     <controller type='pci' index='6' model='pcie-root-port'>
Nov 24 10:00:01 compute-1 nova_compute[230010]:       <model name='pcie-root-port'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:       <target chassis='6' port='0x15'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:       <alias name='pci.6'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:     </controller>
Nov 24 10:00:01 compute-1 nova_compute[230010]:     <controller type='pci' index='7' model='pcie-root-port'>
Nov 24 10:00:01 compute-1 nova_compute[230010]:       <model name='pcie-root-port'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:       <target chassis='7' port='0x16'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:       <alias name='pci.7'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:     </controller>
Nov 24 10:00:01 compute-1 nova_compute[230010]:     <controller type='pci' index='8' model='pcie-root-port'>
Nov 24 10:00:01 compute-1 nova_compute[230010]:       <model name='pcie-root-port'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:       <target chassis='8' port='0x17'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:       <alias name='pci.8'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:     </controller>
Nov 24 10:00:01 compute-1 nova_compute[230010]:     <controller type='pci' index='9' model='pcie-root-port'>
Nov 24 10:00:01 compute-1 nova_compute[230010]:       <model name='pcie-root-port'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:       <target chassis='9' port='0x18'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:       <alias name='pci.9'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:     </controller>
Nov 24 10:00:01 compute-1 nova_compute[230010]:     <controller type='pci' index='10' model='pcie-root-port'>
Nov 24 10:00:01 compute-1 nova_compute[230010]:       <model name='pcie-root-port'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:       <target chassis='10' port='0x19'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:       <alias name='pci.10'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:     </controller>
Nov 24 10:00:01 compute-1 nova_compute[230010]:     <controller type='pci' index='11' model='pcie-root-port'>
Nov 24 10:00:01 compute-1 nova_compute[230010]:       <model name='pcie-root-port'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:       <target chassis='11' port='0x1a'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:       <alias name='pci.11'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:     </controller>
Nov 24 10:00:01 compute-1 nova_compute[230010]:     <controller type='pci' index='12' model='pcie-root-port'>
Nov 24 10:00:01 compute-1 nova_compute[230010]:       <model name='pcie-root-port'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:       <target chassis='12' port='0x1b'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:       <alias name='pci.12'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:     </controller>
Nov 24 10:00:01 compute-1 nova_compute[230010]:     <controller type='pci' index='13' model='pcie-root-port'>
Nov 24 10:00:01 compute-1 nova_compute[230010]:       <model name='pcie-root-port'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:       <target chassis='13' port='0x1c'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:       <alias name='pci.13'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:     </controller>
Nov 24 10:00:01 compute-1 nova_compute[230010]:     <controller type='pci' index='14' model='pcie-root-port'>
Nov 24 10:00:01 compute-1 nova_compute[230010]:       <model name='pcie-root-port'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:       <target chassis='14' port='0x1d'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:       <alias name='pci.14'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:     </controller>
Nov 24 10:00:01 compute-1 nova_compute[230010]:     <controller type='pci' index='15' model='pcie-root-port'>
Nov 24 10:00:01 compute-1 nova_compute[230010]:       <model name='pcie-root-port'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:       <target chassis='15' port='0x1e'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:       <alias name='pci.15'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:     </controller>
Nov 24 10:00:01 compute-1 nova_compute[230010]:     <controller type='pci' index='16' model='pcie-root-port'>
Nov 24 10:00:01 compute-1 nova_compute[230010]:       <model name='pcie-root-port'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:       <target chassis='16' port='0x1f'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:       <alias name='pci.16'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:     </controller>
Nov 24 10:00:01 compute-1 nova_compute[230010]:     <controller type='pci' index='17' model='pcie-root-port'>
Nov 24 10:00:01 compute-1 nova_compute[230010]:       <model name='pcie-root-port'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:       <target chassis='17' port='0x20'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:       <alias name='pci.17'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:     </controller>
Nov 24 10:00:01 compute-1 nova_compute[230010]:     <controller type='pci' index='18' model='pcie-root-port'>
Nov 24 10:00:01 compute-1 nova_compute[230010]:       <model name='pcie-root-port'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:       <target chassis='18' port='0x21'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:       <alias name='pci.18'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:     </controller>
Nov 24 10:00:01 compute-1 nova_compute[230010]:     <controller type='pci' index='19' model='pcie-root-port'>
Nov 24 10:00:01 compute-1 nova_compute[230010]:       <model name='pcie-root-port'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:       <target chassis='19' port='0x22'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:       <alias name='pci.19'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:     </controller>
Nov 24 10:00:01 compute-1 nova_compute[230010]:     <controller type='pci' index='20' model='pcie-root-port'>
Nov 24 10:00:01 compute-1 nova_compute[230010]:       <model name='pcie-root-port'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:       <target chassis='20' port='0x23'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:       <alias name='pci.20'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:     </controller>
Nov 24 10:00:01 compute-1 nova_compute[230010]:     <controller type='pci' index='21' model='pcie-root-port'>
Nov 24 10:00:01 compute-1 nova_compute[230010]:       <model name='pcie-root-port'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:       <target chassis='21' port='0x24'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:       <alias name='pci.21'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:     </controller>
Nov 24 10:00:01 compute-1 nova_compute[230010]:     <controller type='pci' index='22' model='pcie-root-port'>
Nov 24 10:00:01 compute-1 nova_compute[230010]:       <model name='pcie-root-port'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:       <target chassis='22' port='0x25'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:       <alias name='pci.22'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:     </controller>
Nov 24 10:00:01 compute-1 nova_compute[230010]:     <controller type='pci' index='23' model='pcie-root-port'>
Nov 24 10:00:01 compute-1 nova_compute[230010]:       <model name='pcie-root-port'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:       <target chassis='23' port='0x26'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:       <alias name='pci.23'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:     </controller>
Nov 24 10:00:01 compute-1 nova_compute[230010]:     <controller type='pci' index='24' model='pcie-root-port'>
Nov 24 10:00:01 compute-1 nova_compute[230010]:       <model name='pcie-root-port'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:       <target chassis='24' port='0x27'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:       <alias name='pci.24'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:     </controller>
Nov 24 10:00:01 compute-1 nova_compute[230010]:     <controller type='pci' index='25' model='pcie-root-port'>
Nov 24 10:00:01 compute-1 nova_compute[230010]:       <model name='pcie-root-port'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:       <target chassis='25' port='0x28'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:       <alias name='pci.25'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:     </controller>
Nov 24 10:00:01 compute-1 nova_compute[230010]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Nov 24 10:00:01 compute-1 nova_compute[230010]:       <model name='pcie-pci-bridge'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:       <alias name='pci.26'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:     </controller>
Nov 24 10:00:01 compute-1 nova_compute[230010]:     <controller type='usb' index='0' model='piix3-uhci'>
Nov 24 10:00:01 compute-1 nova_compute[230010]:       <alias name='usb'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:     </controller>
Nov 24 10:00:01 compute-1 nova_compute[230010]:     <controller type='sata' index='0'>
Nov 24 10:00:01 compute-1 nova_compute[230010]:       <alias name='ide'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:     </controller>
Nov 24 10:00:01 compute-1 nova_compute[230010]:     <interface type='ethernet'>
Nov 24 10:00:01 compute-1 nova_compute[230010]:       <mac address='fa:16:3e:99:a7:ce'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:       <target dev='tapbf41c673-48'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:       <model type='virtio'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:       <driver name='vhost' rx_queue_size='512'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:       <mtu size='1442'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:       <alias name='net0'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:     </interface>
Nov 24 10:00:01 compute-1 nova_compute[230010]:     <serial type='pty'>
Nov 24 10:00:01 compute-1 nova_compute[230010]:       <source path='/dev/pts/0'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:       <log file='/var/lib/nova/instances/62465e3c-a372-4121-8a2e-5e10d1c3faf6/console.log' append='off'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:       <target type='isa-serial' port='0'>
Nov 24 10:00:01 compute-1 nova_compute[230010]:         <model name='isa-serial'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:       </target>
Nov 24 10:00:01 compute-1 nova_compute[230010]:       <alias name='serial0'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:     </serial>
Nov 24 10:00:01 compute-1 nova_compute[230010]:     <console type='pty' tty='/dev/pts/0'>
Nov 24 10:00:01 compute-1 nova_compute[230010]:       <source path='/dev/pts/0'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:       <log file='/var/lib/nova/instances/62465e3c-a372-4121-8a2e-5e10d1c3faf6/console.log' append='off'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:       <target type='serial' port='0'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:       <alias name='serial0'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:     </console>
Nov 24 10:00:01 compute-1 nova_compute[230010]:     <input type='tablet' bus='usb'>
Nov 24 10:00:01 compute-1 nova_compute[230010]:       <alias name='input0'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:       <address type='usb' bus='0' port='1'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:     </input>
Nov 24 10:00:01 compute-1 nova_compute[230010]:     <input type='mouse' bus='ps2'>
Nov 24 10:00:01 compute-1 nova_compute[230010]:       <alias name='input1'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:     </input>
Nov 24 10:00:01 compute-1 nova_compute[230010]:     <input type='keyboard' bus='ps2'>
Nov 24 10:00:01 compute-1 nova_compute[230010]:       <alias name='input2'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:     </input>
Nov 24 10:00:01 compute-1 nova_compute[230010]:     <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Nov 24 10:00:01 compute-1 nova_compute[230010]:       <listen type='address' address='::0'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:     </graphics>
Nov 24 10:00:01 compute-1 nova_compute[230010]:     <audio id='1' type='none'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:     <video>
Nov 24 10:00:01 compute-1 nova_compute[230010]:       <model type='virtio' heads='1' primary='yes'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:       <alias name='video0'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:     </video>
Nov 24 10:00:01 compute-1 nova_compute[230010]:     <watchdog model='itco' action='reset'>
Nov 24 10:00:01 compute-1 nova_compute[230010]:       <alias name='watchdog0'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:     </watchdog>
Nov 24 10:00:01 compute-1 nova_compute[230010]:     <memballoon model='virtio'>
Nov 24 10:00:01 compute-1 nova_compute[230010]:       <stats period='10'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:       <alias name='balloon0'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:     </memballoon>
Nov 24 10:00:01 compute-1 nova_compute[230010]:     <rng model='virtio'>
Nov 24 10:00:01 compute-1 nova_compute[230010]:       <backend model='random'>/dev/urandom</backend>
Nov 24 10:00:01 compute-1 nova_compute[230010]:       <alias name='rng0'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:     </rng>
Nov 24 10:00:01 compute-1 nova_compute[230010]:   </devices>
Nov 24 10:00:01 compute-1 nova_compute[230010]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Nov 24 10:00:01 compute-1 nova_compute[230010]:     <label>system_u:system_r:svirt_t:s0:c516,c926</label>
Nov 24 10:00:01 compute-1 nova_compute[230010]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c516,c926</imagelabel>
Nov 24 10:00:01 compute-1 nova_compute[230010]:   </seclabel>
Nov 24 10:00:01 compute-1 nova_compute[230010]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Nov 24 10:00:01 compute-1 nova_compute[230010]:     <label>+107:+107</label>
Nov 24 10:00:01 compute-1 nova_compute[230010]:     <imagelabel>+107:+107</imagelabel>
Nov 24 10:00:01 compute-1 nova_compute[230010]:   </seclabel>
Nov 24 10:00:01 compute-1 nova_compute[230010]: </domain>
Nov 24 10:00:01 compute-1 nova_compute[230010]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Nov 24 10:00:01 compute-1 nova_compute[230010]: 2025-11-24 10:00:01.667 230014 DEBUG nova.virt.libvirt.guest [req-114dc47e-ec53-4792-86d7-911b83a61110 req-9b1cf8bf-e81d-4a36-9b80-4f948437a1df 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:df:72:0f"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap2ad41fbf-b7"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Nov 24 10:00:01 compute-1 nova_compute[230010]: 2025-11-24 10:00:01.674 230014 DEBUG nova.virt.libvirt.guest [req-114dc47e-ec53-4792-86d7-911b83a61110 req-9b1cf8bf-e81d-4a36-9b80-4f948437a1df 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:df:72:0f"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap2ad41fbf-b7"/></interface>not found in domain: <domain type='kvm' id='4'>
Nov 24 10:00:01 compute-1 nova_compute[230010]:   <name>instance-00000006</name>
Nov 24 10:00:01 compute-1 nova_compute[230010]:   <uuid>62465e3c-a372-4121-8a2e-5e10d1c3faf6</uuid>
Nov 24 10:00:01 compute-1 nova_compute[230010]:   <metadata>
Nov 24 10:00:01 compute-1 nova_compute[230010]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 24 10:00:01 compute-1 nova_compute[230010]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:   <nova:name>tempest-TestNetworkBasicOps-server-1468987490</nova:name>
Nov 24 10:00:01 compute-1 nova_compute[230010]:   <nova:creationTime>2025-11-24 09:59:59</nova:creationTime>
Nov 24 10:00:01 compute-1 nova_compute[230010]:   <nova:flavor name="m1.nano">
Nov 24 10:00:01 compute-1 nova_compute[230010]:     <nova:memory>128</nova:memory>
Nov 24 10:00:01 compute-1 nova_compute[230010]:     <nova:disk>1</nova:disk>
Nov 24 10:00:01 compute-1 nova_compute[230010]:     <nova:swap>0</nova:swap>
Nov 24 10:00:01 compute-1 nova_compute[230010]:     <nova:ephemeral>0</nova:ephemeral>
Nov 24 10:00:01 compute-1 nova_compute[230010]:     <nova:vcpus>1</nova:vcpus>
Nov 24 10:00:01 compute-1 nova_compute[230010]:   </nova:flavor>
Nov 24 10:00:01 compute-1 nova_compute[230010]:   <nova:owner>
Nov 24 10:00:01 compute-1 nova_compute[230010]:     <nova:user uuid="43f79ff3105e4372a3c095e8057d4f1f">tempest-TestNetworkBasicOps-1844071378-project-member</nova:user>
Nov 24 10:00:01 compute-1 nova_compute[230010]:     <nova:project uuid="94d069fc040647d5a6e54894eec915fe">tempest-TestNetworkBasicOps-1844071378</nova:project>
Nov 24 10:00:01 compute-1 nova_compute[230010]:   </nova:owner>
Nov 24 10:00:01 compute-1 nova_compute[230010]:   <nova:root type="image" uuid="6ef14bdf-4f04-4400-8040-4409d9d5271e"/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:   <nova:ports>
Nov 24 10:00:01 compute-1 nova_compute[230010]:     <nova:port uuid="bf41c673-482b-42e3-ac98-475b716fa0e9">
Nov 24 10:00:01 compute-1 nova_compute[230010]:       <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:     </nova:port>
Nov 24 10:00:01 compute-1 nova_compute[230010]:   </nova:ports>
Nov 24 10:00:01 compute-1 nova_compute[230010]: </nova:instance>
Nov 24 10:00:01 compute-1 nova_compute[230010]:   </metadata>
Nov 24 10:00:01 compute-1 nova_compute[230010]:   <memory unit='KiB'>131072</memory>
Nov 24 10:00:01 compute-1 nova_compute[230010]:   <currentMemory unit='KiB'>131072</currentMemory>
Nov 24 10:00:01 compute-1 nova_compute[230010]:   <vcpu placement='static'>1</vcpu>
Nov 24 10:00:01 compute-1 nova_compute[230010]:   <resource>
Nov 24 10:00:01 compute-1 nova_compute[230010]:     <partition>/machine</partition>
Nov 24 10:00:01 compute-1 nova_compute[230010]:   </resource>
Nov 24 10:00:01 compute-1 nova_compute[230010]:   <sysinfo type='smbios'>
Nov 24 10:00:01 compute-1 nova_compute[230010]:     <system>
Nov 24 10:00:01 compute-1 nova_compute[230010]:       <entry name='manufacturer'>RDO</entry>
Nov 24 10:00:01 compute-1 nova_compute[230010]:       <entry name='product'>OpenStack Compute</entry>
Nov 24 10:00:01 compute-1 nova_compute[230010]:       <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 24 10:00:01 compute-1 nova_compute[230010]:       <entry name='serial'>62465e3c-a372-4121-8a2e-5e10d1c3faf6</entry>
Nov 24 10:00:01 compute-1 nova_compute[230010]:       <entry name='uuid'>62465e3c-a372-4121-8a2e-5e10d1c3faf6</entry>
Nov 24 10:00:01 compute-1 nova_compute[230010]:       <entry name='family'>Virtual Machine</entry>
Nov 24 10:00:01 compute-1 nova_compute[230010]:     </system>
Nov 24 10:00:01 compute-1 nova_compute[230010]:   </sysinfo>
Nov 24 10:00:01 compute-1 nova_compute[230010]:   <os>
Nov 24 10:00:01 compute-1 nova_compute[230010]:     <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Nov 24 10:00:01 compute-1 nova_compute[230010]:     <boot dev='hd'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:     <smbios mode='sysinfo'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:   </os>
Nov 24 10:00:01 compute-1 nova_compute[230010]:   <features>
Nov 24 10:00:01 compute-1 nova_compute[230010]:     <acpi/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:     <apic/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:     <vmcoreinfo state='on'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:   </features>
Nov 24 10:00:01 compute-1 nova_compute[230010]:   <cpu mode='custom' match='exact' check='full'>
Nov 24 10:00:01 compute-1 nova_compute[230010]:     <model fallback='forbid'>EPYC-Rome</model>
Nov 24 10:00:01 compute-1 nova_compute[230010]:     <vendor>AMD</vendor>
Nov 24 10:00:01 compute-1 nova_compute[230010]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:     <feature policy='require' name='x2apic'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:     <feature policy='require' name='tsc-deadline'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:     <feature policy='require' name='hypervisor'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:     <feature policy='require' name='tsc_adjust'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:     <feature policy='require' name='spec-ctrl'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:     <feature policy='require' name='stibp'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:     <feature policy='require' name='ssbd'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:     <feature policy='require' name='cmp_legacy'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:     <feature policy='require' name='overflow-recov'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:     <feature policy='require' name='succor'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:     <feature policy='require' name='ibrs'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:     <feature policy='require' name='amd-ssbd'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:     <feature policy='require' name='virt-ssbd'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:     <feature policy='disable' name='lbrv'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:     <feature policy='disable' name='tsc-scale'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:     <feature policy='disable' name='vmcb-clean'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:     <feature policy='disable' name='flushbyasid'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:     <feature policy='disable' name='pause-filter'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:     <feature policy='disable' name='pfthreshold'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:     <feature policy='disable' name='svme-addr-chk'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:     <feature policy='require' name='lfence-always-serializing'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:     <feature policy='disable' name='xsaves'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:     <feature policy='disable' name='svm'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:     <feature policy='require' name='topoext'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:     <feature policy='disable' name='npt'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:     <feature policy='disable' name='nrip-save'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:   </cpu>
Nov 24 10:00:01 compute-1 nova_compute[230010]:   <clock offset='utc'>
Nov 24 10:00:01 compute-1 nova_compute[230010]:     <timer name='pit' tickpolicy='delay'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:     <timer name='rtc' tickpolicy='catchup'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:     <timer name='hpet' present='no'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:   </clock>
Nov 24 10:00:01 compute-1 nova_compute[230010]:   <on_poweroff>destroy</on_poweroff>
Nov 24 10:00:01 compute-1 nova_compute[230010]:   <on_reboot>restart</on_reboot>
Nov 24 10:00:01 compute-1 nova_compute[230010]:   <on_crash>destroy</on_crash>
Nov 24 10:00:01 compute-1 nova_compute[230010]:   <devices>
Nov 24 10:00:01 compute-1 nova_compute[230010]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 24 10:00:01 compute-1 nova_compute[230010]:     <disk type='network' device='disk'>
Nov 24 10:00:01 compute-1 nova_compute[230010]:       <driver name='qemu' type='raw' cache='none'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:       <auth username='openstack'>
Nov 24 10:00:01 compute-1 nova_compute[230010]:         <secret type='ceph' uuid='84a084c3-61a7-5de7-8207-1f88efa59a64'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:       </auth>
Nov 24 10:00:01 compute-1 nova_compute[230010]:       <source protocol='rbd' name='vms/62465e3c-a372-4121-8a2e-5e10d1c3faf6_disk' index='2'>
Nov 24 10:00:01 compute-1 nova_compute[230010]:         <host name='192.168.122.100' port='6789'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:         <host name='192.168.122.102' port='6789'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:         <host name='192.168.122.101' port='6789'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:       </source>
Nov 24 10:00:01 compute-1 nova_compute[230010]:       <target dev='vda' bus='virtio'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:       <alias name='virtio-disk0'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:     </disk>
Nov 24 10:00:01 compute-1 nova_compute[230010]:     <disk type='network' device='cdrom'>
Nov 24 10:00:01 compute-1 nova_compute[230010]:       <driver name='qemu' type='raw' cache='none'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:       <auth username='openstack'>
Nov 24 10:00:01 compute-1 nova_compute[230010]:         <secret type='ceph' uuid='84a084c3-61a7-5de7-8207-1f88efa59a64'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:       </auth>
Nov 24 10:00:01 compute-1 nova_compute[230010]:       <source protocol='rbd' name='vms/62465e3c-a372-4121-8a2e-5e10d1c3faf6_disk.config' index='1'>
Nov 24 10:00:01 compute-1 nova_compute[230010]:         <host name='192.168.122.100' port='6789'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:         <host name='192.168.122.102' port='6789'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:         <host name='192.168.122.101' port='6789'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:       </source>
Nov 24 10:00:01 compute-1 nova_compute[230010]:       <target dev='sda' bus='sata'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:       <readonly/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:       <alias name='sata0-0-0'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:     </disk>
Nov 24 10:00:01 compute-1 nova_compute[230010]:     <controller type='pci' index='0' model='pcie-root'>
Nov 24 10:00:01 compute-1 nova_compute[230010]:       <alias name='pcie.0'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:     </controller>
Nov 24 10:00:01 compute-1 nova_compute[230010]:     <controller type='pci' index='1' model='pcie-root-port'>
Nov 24 10:00:01 compute-1 nova_compute[230010]:       <model name='pcie-root-port'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:       <target chassis='1' port='0x10'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:       <alias name='pci.1'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:     </controller>
Nov 24 10:00:01 compute-1 nova_compute[230010]:     <controller type='pci' index='2' model='pcie-root-port'>
Nov 24 10:00:01 compute-1 nova_compute[230010]:       <model name='pcie-root-port'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:       <target chassis='2' port='0x11'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:       <alias name='pci.2'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:     </controller>
Nov 24 10:00:01 compute-1 nova_compute[230010]:     <controller type='pci' index='3' model='pcie-root-port'>
Nov 24 10:00:01 compute-1 nova_compute[230010]:       <model name='pcie-root-port'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:       <target chassis='3' port='0x12'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:       <alias name='pci.3'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:     </controller>
Nov 24 10:00:01 compute-1 nova_compute[230010]:     <controller type='pci' index='4' model='pcie-root-port'>
Nov 24 10:00:01 compute-1 nova_compute[230010]:       <model name='pcie-root-port'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:       <target chassis='4' port='0x13'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:       <alias name='pci.4'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:     </controller>
Nov 24 10:00:01 compute-1 nova_compute[230010]:     <controller type='pci' index='5' model='pcie-root-port'>
Nov 24 10:00:01 compute-1 nova_compute[230010]:       <model name='pcie-root-port'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:       <target chassis='5' port='0x14'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:       <alias name='pci.5'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:     </controller>
Nov 24 10:00:01 compute-1 nova_compute[230010]:     <controller type='pci' index='6' model='pcie-root-port'>
Nov 24 10:00:01 compute-1 nova_compute[230010]:       <model name='pcie-root-port'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:       <target chassis='6' port='0x15'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:       <alias name='pci.6'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:     </controller>
Nov 24 10:00:01 compute-1 nova_compute[230010]:     <controller type='pci' index='7' model='pcie-root-port'>
Nov 24 10:00:01 compute-1 nova_compute[230010]:       <model name='pcie-root-port'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:       <target chassis='7' port='0x16'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:       <alias name='pci.7'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:     </controller>
Nov 24 10:00:01 compute-1 nova_compute[230010]:     <controller type='pci' index='8' model='pcie-root-port'>
Nov 24 10:00:01 compute-1 nova_compute[230010]:       <model name='pcie-root-port'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:       <target chassis='8' port='0x17'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:       <alias name='pci.8'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:     </controller>
Nov 24 10:00:01 compute-1 nova_compute[230010]:     <controller type='pci' index='9' model='pcie-root-port'>
Nov 24 10:00:01 compute-1 nova_compute[230010]:       <model name='pcie-root-port'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:       <target chassis='9' port='0x18'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:       <alias name='pci.9'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:     </controller>
Nov 24 10:00:01 compute-1 nova_compute[230010]:     <controller type='pci' index='10' model='pcie-root-port'>
Nov 24 10:00:01 compute-1 nova_compute[230010]:       <model name='pcie-root-port'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:       <target chassis='10' port='0x19'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:       <alias name='pci.10'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:     </controller>
Nov 24 10:00:01 compute-1 nova_compute[230010]:     <controller type='pci' index='11' model='pcie-root-port'>
Nov 24 10:00:01 compute-1 nova_compute[230010]:       <model name='pcie-root-port'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:       <target chassis='11' port='0x1a'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:       <alias name='pci.11'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:     </controller>
Nov 24 10:00:01 compute-1 nova_compute[230010]:     <controller type='pci' index='12' model='pcie-root-port'>
Nov 24 10:00:01 compute-1 nova_compute[230010]:       <model name='pcie-root-port'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:       <target chassis='12' port='0x1b'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:       <alias name='pci.12'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:     </controller>
Nov 24 10:00:01 compute-1 nova_compute[230010]:     <controller type='pci' index='13' model='pcie-root-port'>
Nov 24 10:00:01 compute-1 nova_compute[230010]:       <model name='pcie-root-port'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:       <target chassis='13' port='0x1c'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:       <alias name='pci.13'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:     </controller>
Nov 24 10:00:01 compute-1 nova_compute[230010]:     <controller type='pci' index='14' model='pcie-root-port'>
Nov 24 10:00:01 compute-1 nova_compute[230010]:       <model name='pcie-root-port'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:       <target chassis='14' port='0x1d'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:       <alias name='pci.14'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:     </controller>
Nov 24 10:00:01 compute-1 nova_compute[230010]:     <controller type='pci' index='15' model='pcie-root-port'>
Nov 24 10:00:01 compute-1 nova_compute[230010]:       <model name='pcie-root-port'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:       <target chassis='15' port='0x1e'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:       <alias name='pci.15'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:     </controller>
Nov 24 10:00:01 compute-1 nova_compute[230010]:     <controller type='pci' index='16' model='pcie-root-port'>
Nov 24 10:00:01 compute-1 nova_compute[230010]:       <model name='pcie-root-port'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:       <target chassis='16' port='0x1f'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:       <alias name='pci.16'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:     </controller>
Nov 24 10:00:01 compute-1 nova_compute[230010]:     <controller type='pci' index='17' model='pcie-root-port'>
Nov 24 10:00:01 compute-1 nova_compute[230010]:       <model name='pcie-root-port'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:       <target chassis='17' port='0x20'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:       <alias name='pci.17'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:     </controller>
Nov 24 10:00:01 compute-1 nova_compute[230010]:     <controller type='pci' index='18' model='pcie-root-port'>
Nov 24 10:00:01 compute-1 nova_compute[230010]:       <model name='pcie-root-port'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:       <target chassis='18' port='0x21'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:       <alias name='pci.18'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:     </controller>
Nov 24 10:00:01 compute-1 nova_compute[230010]:     <controller type='pci' index='19' model='pcie-root-port'>
Nov 24 10:00:01 compute-1 nova_compute[230010]:       <model name='pcie-root-port'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:       <target chassis='19' port='0x22'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:       <alias name='pci.19'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:     </controller>
Nov 24 10:00:01 compute-1 nova_compute[230010]:     <controller type='pci' index='20' model='pcie-root-port'>
Nov 24 10:00:01 compute-1 nova_compute[230010]:       <model name='pcie-root-port'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:       <target chassis='20' port='0x23'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:       <alias name='pci.20'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:     </controller>
Nov 24 10:00:01 compute-1 nova_compute[230010]:     <controller type='pci' index='21' model='pcie-root-port'>
Nov 24 10:00:01 compute-1 nova_compute[230010]:       <model name='pcie-root-port'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:       <target chassis='21' port='0x24'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:       <alias name='pci.21'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:     </controller>
Nov 24 10:00:01 compute-1 nova_compute[230010]:     <controller type='pci' index='22' model='pcie-root-port'>
Nov 24 10:00:01 compute-1 nova_compute[230010]:       <model name='pcie-root-port'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:       <target chassis='22' port='0x25'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:       <alias name='pci.22'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:     </controller>
Nov 24 10:00:01 compute-1 nova_compute[230010]:     <controller type='pci' index='23' model='pcie-root-port'>
Nov 24 10:00:01 compute-1 nova_compute[230010]:       <model name='pcie-root-port'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:       <target chassis='23' port='0x26'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:       <alias name='pci.23'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:     </controller>
Nov 24 10:00:01 compute-1 nova_compute[230010]:     <controller type='pci' index='24' model='pcie-root-port'>
Nov 24 10:00:01 compute-1 nova_compute[230010]:       <model name='pcie-root-port'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:       <target chassis='24' port='0x27'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:       <alias name='pci.24'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:     </controller>
Nov 24 10:00:01 compute-1 nova_compute[230010]:     <controller type='pci' index='25' model='pcie-root-port'>
Nov 24 10:00:01 compute-1 nova_compute[230010]:       <model name='pcie-root-port'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:       <target chassis='25' port='0x28'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:       <alias name='pci.25'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:     </controller>
Nov 24 10:00:01 compute-1 nova_compute[230010]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Nov 24 10:00:01 compute-1 nova_compute[230010]:       <model name='pcie-pci-bridge'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:       <alias name='pci.26'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:     </controller>
Nov 24 10:00:01 compute-1 nova_compute[230010]:     <controller type='usb' index='0' model='piix3-uhci'>
Nov 24 10:00:01 compute-1 nova_compute[230010]:       <alias name='usb'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:     </controller>
Nov 24 10:00:01 compute-1 nova_compute[230010]:     <controller type='sata' index='0'>
Nov 24 10:00:01 compute-1 nova_compute[230010]:       <alias name='ide'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:     </controller>
Nov 24 10:00:01 compute-1 nova_compute[230010]:     <interface type='ethernet'>
Nov 24 10:00:01 compute-1 nova_compute[230010]:       <mac address='fa:16:3e:99:a7:ce'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:       <target dev='tapbf41c673-48'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:       <model type='virtio'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:       <driver name='vhost' rx_queue_size='512'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:       <mtu size='1442'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:       <alias name='net0'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:     </interface>
Nov 24 10:00:01 compute-1 nova_compute[230010]:     <serial type='pty'>
Nov 24 10:00:01 compute-1 nova_compute[230010]:       <source path='/dev/pts/0'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:       <log file='/var/lib/nova/instances/62465e3c-a372-4121-8a2e-5e10d1c3faf6/console.log' append='off'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:       <target type='isa-serial' port='0'>
Nov 24 10:00:01 compute-1 nova_compute[230010]:         <model name='isa-serial'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:       </target>
Nov 24 10:00:01 compute-1 nova_compute[230010]:       <alias name='serial0'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:     </serial>
Nov 24 10:00:01 compute-1 nova_compute[230010]:     <console type='pty' tty='/dev/pts/0'>
Nov 24 10:00:01 compute-1 nova_compute[230010]:       <source path='/dev/pts/0'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:       <log file='/var/lib/nova/instances/62465e3c-a372-4121-8a2e-5e10d1c3faf6/console.log' append='off'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:       <target type='serial' port='0'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:       <alias name='serial0'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:     </console>
Nov 24 10:00:01 compute-1 nova_compute[230010]:     <input type='tablet' bus='usb'>
Nov 24 10:00:01 compute-1 nova_compute[230010]:       <alias name='input0'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:       <address type='usb' bus='0' port='1'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:     </input>
Nov 24 10:00:01 compute-1 nova_compute[230010]:     <input type='mouse' bus='ps2'>
Nov 24 10:00:01 compute-1 nova_compute[230010]:       <alias name='input1'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:     </input>
Nov 24 10:00:01 compute-1 nova_compute[230010]:     <input type='keyboard' bus='ps2'>
Nov 24 10:00:01 compute-1 nova_compute[230010]:       <alias name='input2'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:     </input>
Nov 24 10:00:01 compute-1 nova_compute[230010]:     <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Nov 24 10:00:01 compute-1 nova_compute[230010]:       <listen type='address' address='::0'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:     </graphics>
Nov 24 10:00:01 compute-1 nova_compute[230010]:     <audio id='1' type='none'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:     <video>
Nov 24 10:00:01 compute-1 nova_compute[230010]:       <model type='virtio' heads='1' primary='yes'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:       <alias name='video0'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:     </video>
Nov 24 10:00:01 compute-1 nova_compute[230010]:     <watchdog model='itco' action='reset'>
Nov 24 10:00:01 compute-1 nova_compute[230010]:       <alias name='watchdog0'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:     </watchdog>
Nov 24 10:00:01 compute-1 nova_compute[230010]:     <memballoon model='virtio'>
Nov 24 10:00:01 compute-1 nova_compute[230010]:       <stats period='10'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:       <alias name='balloon0'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:     </memballoon>
Nov 24 10:00:01 compute-1 nova_compute[230010]:     <rng model='virtio'>
Nov 24 10:00:01 compute-1 nova_compute[230010]:       <backend model='random'>/dev/urandom</backend>
Nov 24 10:00:01 compute-1 nova_compute[230010]:       <alias name='rng0'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:     </rng>
Nov 24 10:00:01 compute-1 nova_compute[230010]:   </devices>
Nov 24 10:00:01 compute-1 nova_compute[230010]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Nov 24 10:00:01 compute-1 nova_compute[230010]:     <label>system_u:system_r:svirt_t:s0:c516,c926</label>
Nov 24 10:00:01 compute-1 nova_compute[230010]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c516,c926</imagelabel>
Nov 24 10:00:01 compute-1 nova_compute[230010]:   </seclabel>
Nov 24 10:00:01 compute-1 nova_compute[230010]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Nov 24 10:00:01 compute-1 nova_compute[230010]:     <label>+107:+107</label>
Nov 24 10:00:01 compute-1 nova_compute[230010]:     <imagelabel>+107:+107</imagelabel>
Nov 24 10:00:01 compute-1 nova_compute[230010]:   </seclabel>
Nov 24 10:00:01 compute-1 nova_compute[230010]: </domain>
Nov 24 10:00:01 compute-1 nova_compute[230010]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Nov 24 10:00:01 compute-1 nova_compute[230010]: 2025-11-24 10:00:01.675 230014 WARNING nova.virt.libvirt.driver [req-114dc47e-ec53-4792-86d7-911b83a61110 req-9b1cf8bf-e81d-4a36-9b80-4f948437a1df 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 62465e3c-a372-4121-8a2e-5e10d1c3faf6] Detaching interface fa:16:3e:df:72:0f failed because the device is no longer found on the guest.: nova.exception.DeviceNotFound: Device 'tap2ad41fbf-b7' not found.
Nov 24 10:00:01 compute-1 nova_compute[230010]: 2025-11-24 10:00:01.675 230014 DEBUG nova.virt.libvirt.vif [req-114dc47e-ec53-4792-86d7-911b83a61110 req-9b1cf8bf-e81d-4a36-9b80-4f948437a1df 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-24T09:57:56Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1468987490',display_name='tempest-TestNetworkBasicOps-server-1468987490',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1468987490',id=6,image_ref='6ef14bdf-4f04-4400-8040-4409d9d5271e',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJeLeNtgMDECCA396nl5/z6TsnAPH3kX9ECWzaWuLvptXvMaJaj/WlHKUFyFRR30PurvGrDvNN2g1Ij1pTu0Su2H0Am0Z6Y5TdOjAAQXOQr2HISwvDDFzD9t0aaelZEbhw==',key_name='tempest-TestNetworkBasicOps-1307688110',keypairs=<?>,launch_index=0,launched_at=2025-11-24T09:58:05Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata=<?>,migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='94d069fc040647d5a6e54894eec915fe',ramdisk_id='',reservation_id='r-sy1yuug7',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6ef14bdf-4f04-4400-8040-4409d9d5271e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1844071378',owner_user_name='tempest-TestNetworkBasicOps-1844071378-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-24T09:58:05Z,user_data=None,user_id='43f79ff3105e4372a3c095e8057d4f1f',uuid=62465e3c-a372-4121-8a2e-5e10d1c3faf6,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "2ad41fbf-b749-4394-9d14-483c127ff44c", "address": "fa:16:3e:df:72:0f", "network": {"id": "cbb18554-4df6-4004-8b94-6d2a9b50722d", "bridge": "br-int", "label": "tempest-network-smoke--1864982359", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.24", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94d069fc040647d5a6e54894eec915fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2ad41fbf-b7", "ovs_interfaceid": "2ad41fbf-b749-4394-9d14-483c127ff44c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 24 10:00:01 compute-1 nova_compute[230010]: 2025-11-24 10:00:01.676 230014 DEBUG nova.network.os_vif_util [req-114dc47e-ec53-4792-86d7-911b83a61110 req-9b1cf8bf-e81d-4a36-9b80-4f948437a1df 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Converting VIF {"id": "2ad41fbf-b749-4394-9d14-483c127ff44c", "address": "fa:16:3e:df:72:0f", "network": {"id": "cbb18554-4df6-4004-8b94-6d2a9b50722d", "bridge": "br-int", "label": "tempest-network-smoke--1864982359", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.24", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94d069fc040647d5a6e54894eec915fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2ad41fbf-b7", "ovs_interfaceid": "2ad41fbf-b749-4394-9d14-483c127ff44c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 24 10:00:01 compute-1 nova_compute[230010]: 2025-11-24 10:00:01.676 230014 DEBUG nova.network.os_vif_util [req-114dc47e-ec53-4792-86d7-911b83a61110 req-9b1cf8bf-e81d-4a36-9b80-4f948437a1df 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:df:72:0f,bridge_name='br-int',has_traffic_filtering=True,id=2ad41fbf-b749-4394-9d14-483c127ff44c,network=Network(cbb18554-4df6-4004-8b94-6d2a9b50722d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2ad41fbf-b7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 24 10:00:01 compute-1 nova_compute[230010]: 2025-11-24 10:00:01.676 230014 DEBUG os_vif [req-114dc47e-ec53-4792-86d7-911b83a61110 req-9b1cf8bf-e81d-4a36-9b80-4f948437a1df 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:df:72:0f,bridge_name='br-int',has_traffic_filtering=True,id=2ad41fbf-b749-4394-9d14-483c127ff44c,network=Network(cbb18554-4df6-4004-8b94-6d2a9b50722d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2ad41fbf-b7') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 24 10:00:01 compute-1 nova_compute[230010]: 2025-11-24 10:00:01.678 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:00:01 compute-1 nova_compute[230010]: 2025-11-24 10:00:01.678 230014 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2ad41fbf-b7, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 24 10:00:01 compute-1 nova_compute[230010]: 2025-11-24 10:00:01.678 230014 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 24 10:00:01 compute-1 nova_compute[230010]: 2025-11-24 10:00:01.681 230014 INFO os_vif [req-114dc47e-ec53-4792-86d7-911b83a61110 req-9b1cf8bf-e81d-4a36-9b80-4f948437a1df 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:df:72:0f,bridge_name='br-int',has_traffic_filtering=True,id=2ad41fbf-b749-4394-9d14-483c127ff44c,network=Network(cbb18554-4df6-4004-8b94-6d2a9b50722d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2ad41fbf-b7')
Nov 24 10:00:01 compute-1 nova_compute[230010]: 2025-11-24 10:00:01.681 230014 DEBUG nova.virt.libvirt.guest [req-114dc47e-ec53-4792-86d7-911b83a61110 req-9b1cf8bf-e81d-4a36-9b80-4f948437a1df 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 24 10:00:01 compute-1 nova_compute[230010]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:   <nova:name>tempest-TestNetworkBasicOps-server-1468987490</nova:name>
Nov 24 10:00:01 compute-1 nova_compute[230010]:   <nova:creationTime>2025-11-24 10:00:01</nova:creationTime>
Nov 24 10:00:01 compute-1 nova_compute[230010]:   <nova:flavor name="m1.nano">
Nov 24 10:00:01 compute-1 nova_compute[230010]:     <nova:memory>128</nova:memory>
Nov 24 10:00:01 compute-1 nova_compute[230010]:     <nova:disk>1</nova:disk>
Nov 24 10:00:01 compute-1 nova_compute[230010]:     <nova:swap>0</nova:swap>
Nov 24 10:00:01 compute-1 nova_compute[230010]:     <nova:ephemeral>0</nova:ephemeral>
Nov 24 10:00:01 compute-1 nova_compute[230010]:     <nova:vcpus>1</nova:vcpus>
Nov 24 10:00:01 compute-1 nova_compute[230010]:   </nova:flavor>
Nov 24 10:00:01 compute-1 nova_compute[230010]:   <nova:owner>
Nov 24 10:00:01 compute-1 nova_compute[230010]:     <nova:user uuid="43f79ff3105e4372a3c095e8057d4f1f">tempest-TestNetworkBasicOps-1844071378-project-member</nova:user>
Nov 24 10:00:01 compute-1 nova_compute[230010]:     <nova:project uuid="94d069fc040647d5a6e54894eec915fe">tempest-TestNetworkBasicOps-1844071378</nova:project>
Nov 24 10:00:01 compute-1 nova_compute[230010]:   </nova:owner>
Nov 24 10:00:01 compute-1 nova_compute[230010]:   <nova:root type="image" uuid="6ef14bdf-4f04-4400-8040-4409d9d5271e"/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:   <nova:ports>
Nov 24 10:00:01 compute-1 nova_compute[230010]:     <nova:port uuid="bf41c673-482b-42e3-ac98-475b716fa0e9">
Nov 24 10:00:01 compute-1 nova_compute[230010]:       <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Nov 24 10:00:01 compute-1 nova_compute[230010]:     </nova:port>
Nov 24 10:00:01 compute-1 nova_compute[230010]:   </nova:ports>
Nov 24 10:00:01 compute-1 nova_compute[230010]: </nova:instance>
Nov 24 10:00:01 compute-1 nova_compute[230010]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
Nov 24 10:00:02 compute-1 ceph-mon[80009]: from='client.? 192.168.122.10:0/1474587093' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 24 10:00:02 compute-1 ceph-mon[80009]: from='client.? 192.168.122.10:0/1474587093' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 24 10:00:02 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:00:02 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:00:02 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:00:02.436 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:00:02 compute-1 nova_compute[230010]: 2025-11-24 10:00:02.748 230014 INFO nova.network.neutron [None req-f3ac20a1-7545-43d3-b997-341d1779d8bb 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 62465e3c-a372-4121-8a2e-5e10d1c3faf6] Port 2ad41fbf-b749-4394-9d14-483c127ff44c from network info_cache is no longer associated with instance in Neutron. Removing from network info_cache.
Nov 24 10:00:02 compute-1 nova_compute[230010]: 2025-11-24 10:00:02.749 230014 DEBUG nova.network.neutron [None req-f3ac20a1-7545-43d3-b997-341d1779d8bb 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 62465e3c-a372-4121-8a2e-5e10d1c3faf6] Updating instance_info_cache with network_info: [{"id": "bf41c673-482b-42e3-ac98-475b716fa0e9", "address": "fa:16:3e:99:a7:ce", "network": {"id": "81f18750-9169-4587-b6ca-88a2bbc58afc", "bridge": "br-int", "label": "tempest-network-smoke--1543163911", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.202", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94d069fc040647d5a6e54894eec915fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf41c673-48", "ovs_interfaceid": "bf41c673-482b-42e3-ac98-475b716fa0e9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 24 10:00:02 compute-1 nova_compute[230010]: 2025-11-24 10:00:02.763 230014 DEBUG oslo_concurrency.lockutils [None req-f3ac20a1-7545-43d3-b997-341d1779d8bb 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Releasing lock "refresh_cache-62465e3c-a372-4121-8a2e-5e10d1c3faf6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 24 10:00:02 compute-1 nova_compute[230010]: 2025-11-24 10:00:02.790 230014 DEBUG oslo_concurrency.lockutils [None req-f3ac20a1-7545-43d3-b997-341d1779d8bb 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lock "interface-62465e3c-a372-4121-8a2e-5e10d1c3faf6-2ad41fbf-b749-4394-9d14-483c127ff44c" "released" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: held 3.111s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 10:00:03 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:00:03 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:00:03 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:00:03.011 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:00:03 compute-1 ovn_controller[132966]: 2025-11-24T10:00:03Z|00083|binding|INFO|Releasing lport 51ab5aa5-77bf-4bb7-993e-d15c7b4540ff from this chassis (sb_readonly=0)
Nov 24 10:00:03 compute-1 nova_compute[230010]: 2025-11-24 10:00:03.077 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:00:03 compute-1 ceph-mon[80009]: pgmap v997: 353 pgs: 353 active+clean; 121 MiB data, 318 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 4.8 KiB/s wr, 29 op/s
Nov 24 10:00:03 compute-1 nova_compute[230010]: 2025-11-24 10:00:03.449 230014 DEBUG nova.compute.manager [req-1dddba55-493e-484f-bce7-4329c06e4e37 req-880dddfd-0e79-4384-bf68-ae080f78b2fc 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 62465e3c-a372-4121-8a2e-5e10d1c3faf6] Received event network-vif-plugged-2ad41fbf-b749-4394-9d14-483c127ff44c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 24 10:00:03 compute-1 nova_compute[230010]: 2025-11-24 10:00:03.450 230014 DEBUG oslo_concurrency.lockutils [req-1dddba55-493e-484f-bce7-4329c06e4e37 req-880dddfd-0e79-4384-bf68-ae080f78b2fc 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Acquiring lock "62465e3c-a372-4121-8a2e-5e10d1c3faf6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 10:00:03 compute-1 nova_compute[230010]: 2025-11-24 10:00:03.450 230014 DEBUG oslo_concurrency.lockutils [req-1dddba55-493e-484f-bce7-4329c06e4e37 req-880dddfd-0e79-4384-bf68-ae080f78b2fc 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Lock "62465e3c-a372-4121-8a2e-5e10d1c3faf6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 10:00:03 compute-1 nova_compute[230010]: 2025-11-24 10:00:03.450 230014 DEBUG oslo_concurrency.lockutils [req-1dddba55-493e-484f-bce7-4329c06e4e37 req-880dddfd-0e79-4384-bf68-ae080f78b2fc 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Lock "62465e3c-a372-4121-8a2e-5e10d1c3faf6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 10:00:03 compute-1 nova_compute[230010]: 2025-11-24 10:00:03.450 230014 DEBUG nova.compute.manager [req-1dddba55-493e-484f-bce7-4329c06e4e37 req-880dddfd-0e79-4384-bf68-ae080f78b2fc 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 62465e3c-a372-4121-8a2e-5e10d1c3faf6] No waiting events found dispatching network-vif-plugged-2ad41fbf-b749-4394-9d14-483c127ff44c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 24 10:00:03 compute-1 nova_compute[230010]: 2025-11-24 10:00:03.451 230014 WARNING nova.compute.manager [req-1dddba55-493e-484f-bce7-4329c06e4e37 req-880dddfd-0e79-4384-bf68-ae080f78b2fc 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 62465e3c-a372-4121-8a2e-5e10d1c3faf6] Received unexpected event network-vif-plugged-2ad41fbf-b749-4394-9d14-483c127ff44c for instance with vm_state active and task_state None.
Nov 24 10:00:03 compute-1 nova_compute[230010]: 2025-11-24 10:00:03.890 230014 DEBUG oslo_concurrency.lockutils [None req-2ac931c1-7027-4bc7-9b19-c8714b315e43 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Acquiring lock "62465e3c-a372-4121-8a2e-5e10d1c3faf6" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 10:00:03 compute-1 nova_compute[230010]: 2025-11-24 10:00:03.891 230014 DEBUG oslo_concurrency.lockutils [None req-2ac931c1-7027-4bc7-9b19-c8714b315e43 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lock "62465e3c-a372-4121-8a2e-5e10d1c3faf6" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 10:00:03 compute-1 nova_compute[230010]: 2025-11-24 10:00:03.891 230014 DEBUG oslo_concurrency.lockutils [None req-2ac931c1-7027-4bc7-9b19-c8714b315e43 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Acquiring lock "62465e3c-a372-4121-8a2e-5e10d1c3faf6-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 10:00:03 compute-1 nova_compute[230010]: 2025-11-24 10:00:03.891 230014 DEBUG oslo_concurrency.lockutils [None req-2ac931c1-7027-4bc7-9b19-c8714b315e43 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lock "62465e3c-a372-4121-8a2e-5e10d1c3faf6-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 10:00:03 compute-1 nova_compute[230010]: 2025-11-24 10:00:03.891 230014 DEBUG oslo_concurrency.lockutils [None req-2ac931c1-7027-4bc7-9b19-c8714b315e43 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lock "62465e3c-a372-4121-8a2e-5e10d1c3faf6-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 10:00:03 compute-1 nova_compute[230010]: 2025-11-24 10:00:03.893 230014 INFO nova.compute.manager [None req-2ac931c1-7027-4bc7-9b19-c8714b315e43 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 62465e3c-a372-4121-8a2e-5e10d1c3faf6] Terminating instance
Nov 24 10:00:03 compute-1 nova_compute[230010]: 2025-11-24 10:00:03.894 230014 DEBUG nova.compute.manager [None req-2ac931c1-7027-4bc7-9b19-c8714b315e43 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 62465e3c-a372-4121-8a2e-5e10d1c3faf6] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 24 10:00:03 compute-1 kernel: tapbf41c673-48 (unregistering): left promiscuous mode
Nov 24 10:00:03 compute-1 NetworkManager[48870]: <info>  [1763978403.9430] device (tapbf41c673-48): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 24 10:00:03 compute-1 ovn_controller[132966]: 2025-11-24T10:00:03Z|00084|binding|INFO|Releasing lport bf41c673-482b-42e3-ac98-475b716fa0e9 from this chassis (sb_readonly=0)
Nov 24 10:00:03 compute-1 nova_compute[230010]: 2025-11-24 10:00:03.954 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:00:03 compute-1 ovn_controller[132966]: 2025-11-24T10:00:03Z|00085|binding|INFO|Setting lport bf41c673-482b-42e3-ac98-475b716fa0e9 down in Southbound
Nov 24 10:00:03 compute-1 ovn_controller[132966]: 2025-11-24T10:00:03Z|00086|binding|INFO|Removing iface tapbf41c673-48 ovn-installed in OVS
Nov 24 10:00:03 compute-1 ovn_metadata_agent[142331]: 2025-11-24 10:00:03.961 142336 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:99:a7:ce 10.100.0.8'], port_security=['fa:16:3e:99:a7:ce 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '62465e3c-a372-4121-8a2e-5e10d1c3faf6', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-81f18750-9169-4587-b6ca-88a2bbc58afc', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '94d069fc040647d5a6e54894eec915fe', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'ebde3e26-b896-444f-b8ef-f2f39010ba47', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=42f28b30-955e-4ea5-b415-d62763a6e220, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5c78678ac0>], logical_port=bf41c673-482b-42e3-ac98-475b716fa0e9) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5c78678ac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 24 10:00:03 compute-1 ovn_metadata_agent[142331]: 2025-11-24 10:00:03.964 142336 INFO neutron.agent.ovn.metadata.agent [-] Port bf41c673-482b-42e3-ac98-475b716fa0e9 in datapath 81f18750-9169-4587-b6ca-88a2bbc58afc unbound from our chassis
Nov 24 10:00:03 compute-1 ovn_metadata_agent[142331]: 2025-11-24 10:00:03.965 142336 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 81f18750-9169-4587-b6ca-88a2bbc58afc, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 24 10:00:03 compute-1 ovn_metadata_agent[142331]: 2025-11-24 10:00:03.966 234803 DEBUG oslo.privsep.daemon [-] privsep: reply[f5f18980-f091-424f-92a3-cfa7bc900d66]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 10:00:03 compute-1 ovn_metadata_agent[142331]: 2025-11-24 10:00:03.966 142336 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-81f18750-9169-4587-b6ca-88a2bbc58afc namespace which is not needed anymore
Nov 24 10:00:03 compute-1 nova_compute[230010]: 2025-11-24 10:00:03.978 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:00:04 compute-1 systemd[1]: machine-qemu\x2d4\x2dinstance\x2d00000006.scope: Deactivated successfully.
Nov 24 10:00:04 compute-1 systemd[1]: machine-qemu\x2d4\x2dinstance\x2d00000006.scope: Consumed 19.209s CPU time.
Nov 24 10:00:04 compute-1 systemd-machined[193537]: Machine qemu-4-instance-00000006 terminated.
Nov 24 10:00:04 compute-1 neutron-haproxy-ovnmeta-81f18750-9169-4587-b6ca-88a2bbc58afc[239548]: [NOTICE]   (239552) : haproxy version is 2.8.14-c23fe91
Nov 24 10:00:04 compute-1 neutron-haproxy-ovnmeta-81f18750-9169-4587-b6ca-88a2bbc58afc[239548]: [NOTICE]   (239552) : path to executable is /usr/sbin/haproxy
Nov 24 10:00:04 compute-1 neutron-haproxy-ovnmeta-81f18750-9169-4587-b6ca-88a2bbc58afc[239548]: [WARNING]  (239552) : Exiting Master process...
Nov 24 10:00:04 compute-1 neutron-haproxy-ovnmeta-81f18750-9169-4587-b6ca-88a2bbc58afc[239548]: [ALERT]    (239552) : Current worker (239554) exited with code 143 (Terminated)
Nov 24 10:00:04 compute-1 neutron-haproxy-ovnmeta-81f18750-9169-4587-b6ca-88a2bbc58afc[239548]: [WARNING]  (239552) : All workers exited. Exiting... (0)
Nov 24 10:00:04 compute-1 systemd[1]: libpod-312d1c6778ee57afc3309ab922725b04a981e2b6ccef78fd9ebe4f51f074714e.scope: Deactivated successfully.
Nov 24 10:00:04 compute-1 podman[240581]: 2025-11-24 10:00:04.111299157 +0000 UTC m=+0.049614314 container died 312d1c6778ee57afc3309ab922725b04a981e2b6ccef78fd9ebe4f51f074714e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-81f18750-9169-4587-b6ca-88a2bbc58afc, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 10:00:04 compute-1 nova_compute[230010]: 2025-11-24 10:00:04.141 230014 INFO nova.virt.libvirt.driver [-] [instance: 62465e3c-a372-4121-8a2e-5e10d1c3faf6] Instance destroyed successfully.
Nov 24 10:00:04 compute-1 nova_compute[230010]: 2025-11-24 10:00:04.142 230014 DEBUG nova.objects.instance [None req-2ac931c1-7027-4bc7-9b19-c8714b315e43 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lazy-loading 'resources' on Instance uuid 62465e3c-a372-4121-8a2e-5e10d1c3faf6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 24 10:00:04 compute-1 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-312d1c6778ee57afc3309ab922725b04a981e2b6ccef78fd9ebe4f51f074714e-userdata-shm.mount: Deactivated successfully.
Nov 24 10:00:04 compute-1 systemd[1]: var-lib-containers-storage-overlay-8077583c93206f4b50fb98a5f2ccb3fea2a970b30dff429250e8ff4a1f0a34dc-merged.mount: Deactivated successfully.
Nov 24 10:00:04 compute-1 nova_compute[230010]: 2025-11-24 10:00:04.155 230014 DEBUG nova.virt.libvirt.vif [None req-2ac931c1-7027-4bc7-9b19-c8714b315e43 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-24T09:57:56Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1468987490',display_name='tempest-TestNetworkBasicOps-server-1468987490',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1468987490',id=6,image_ref='6ef14bdf-4f04-4400-8040-4409d9d5271e',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJeLeNtgMDECCA396nl5/z6TsnAPH3kX9ECWzaWuLvptXvMaJaj/WlHKUFyFRR30PurvGrDvNN2g1Ij1pTu0Su2H0Am0Z6Y5TdOjAAQXOQr2HISwvDDFzD9t0aaelZEbhw==',key_name='tempest-TestNetworkBasicOps-1307688110',keypairs=<?>,launch_index=0,launched_at=2025-11-24T09:58:05Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='94d069fc040647d5a6e54894eec915fe',ramdisk_id='',reservation_id='r-sy1yuug7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6ef14bdf-4f04-4400-8040-4409d9d5271e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1844071378',owner_user_name='tempest-TestNetworkBasicOps-1844071378-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-24T09:58:05Z,user_data=None,user_id='43f79ff3105e4372a3c095e8057d4f1f',uuid=62465e3c-a372-4121-8a2e-5e10d1c3faf6,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "bf41c673-482b-42e3-ac98-475b716fa0e9", "address": "fa:16:3e:99:a7:ce", "network": {"id": "81f18750-9169-4587-b6ca-88a2bbc58afc", "bridge": "br-int", "label": "tempest-network-smoke--1543163911", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.202", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94d069fc040647d5a6e54894eec915fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf41c673-48", "ovs_interfaceid": "bf41c673-482b-42e3-ac98-475b716fa0e9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 24 10:00:04 compute-1 nova_compute[230010]: 2025-11-24 10:00:04.155 230014 DEBUG nova.network.os_vif_util [None req-2ac931c1-7027-4bc7-9b19-c8714b315e43 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Converting VIF {"id": "bf41c673-482b-42e3-ac98-475b716fa0e9", "address": "fa:16:3e:99:a7:ce", "network": {"id": "81f18750-9169-4587-b6ca-88a2bbc58afc", "bridge": "br-int", "label": "tempest-network-smoke--1543163911", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.202", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94d069fc040647d5a6e54894eec915fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf41c673-48", "ovs_interfaceid": "bf41c673-482b-42e3-ac98-475b716fa0e9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 24 10:00:04 compute-1 nova_compute[230010]: 2025-11-24 10:00:04.156 230014 DEBUG nova.network.os_vif_util [None req-2ac931c1-7027-4bc7-9b19-c8714b315e43 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:99:a7:ce,bridge_name='br-int',has_traffic_filtering=True,id=bf41c673-482b-42e3-ac98-475b716fa0e9,network=Network(81f18750-9169-4587-b6ca-88a2bbc58afc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbf41c673-48') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 24 10:00:04 compute-1 nova_compute[230010]: 2025-11-24 10:00:04.156 230014 DEBUG os_vif [None req-2ac931c1-7027-4bc7-9b19-c8714b315e43 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:99:a7:ce,bridge_name='br-int',has_traffic_filtering=True,id=bf41c673-482b-42e3-ac98-475b716fa0e9,network=Network(81f18750-9169-4587-b6ca-88a2bbc58afc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbf41c673-48') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 24 10:00:04 compute-1 podman[240581]: 2025-11-24 10:00:04.157607639 +0000 UTC m=+0.095922776 container cleanup 312d1c6778ee57afc3309ab922725b04a981e2b6ccef78fd9ebe4f51f074714e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-81f18750-9169-4587-b6ca-88a2bbc58afc, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 24 10:00:04 compute-1 nova_compute[230010]: 2025-11-24 10:00:04.160 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:00:04 compute-1 nova_compute[230010]: 2025-11-24 10:00:04.161 230014 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbf41c673-48, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 24 10:00:04 compute-1 nova_compute[230010]: 2025-11-24 10:00:04.163 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:00:04 compute-1 nova_compute[230010]: 2025-11-24 10:00:04.165 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:00:04 compute-1 nova_compute[230010]: 2025-11-24 10:00:04.168 230014 INFO os_vif [None req-2ac931c1-7027-4bc7-9b19-c8714b315e43 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:99:a7:ce,bridge_name='br-int',has_traffic_filtering=True,id=bf41c673-482b-42e3-ac98-475b716fa0e9,network=Network(81f18750-9169-4587-b6ca-88a2bbc58afc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbf41c673-48')
Nov 24 10:00:04 compute-1 systemd[1]: libpod-conmon-312d1c6778ee57afc3309ab922725b04a981e2b6ccef78fd9ebe4f51f074714e.scope: Deactivated successfully.
Nov 24 10:00:04 compute-1 nova_compute[230010]: 2025-11-24 10:00:04.192 230014 DEBUG nova.compute.manager [req-7cc2c4f1-8d87-4f88-847e-e6ba03235c04 req-bb25cb6f-76e2-403d-93af-c7abbaa1b434 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 62465e3c-a372-4121-8a2e-5e10d1c3faf6] Received event network-vif-unplugged-bf41c673-482b-42e3-ac98-475b716fa0e9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 24 10:00:04 compute-1 nova_compute[230010]: 2025-11-24 10:00:04.193 230014 DEBUG oslo_concurrency.lockutils [req-7cc2c4f1-8d87-4f88-847e-e6ba03235c04 req-bb25cb6f-76e2-403d-93af-c7abbaa1b434 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Acquiring lock "62465e3c-a372-4121-8a2e-5e10d1c3faf6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 10:00:04 compute-1 nova_compute[230010]: 2025-11-24 10:00:04.194 230014 DEBUG oslo_concurrency.lockutils [req-7cc2c4f1-8d87-4f88-847e-e6ba03235c04 req-bb25cb6f-76e2-403d-93af-c7abbaa1b434 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Lock "62465e3c-a372-4121-8a2e-5e10d1c3faf6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 10:00:04 compute-1 nova_compute[230010]: 2025-11-24 10:00:04.194 230014 DEBUG oslo_concurrency.lockutils [req-7cc2c4f1-8d87-4f88-847e-e6ba03235c04 req-bb25cb6f-76e2-403d-93af-c7abbaa1b434 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Lock "62465e3c-a372-4121-8a2e-5e10d1c3faf6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 10:00:04 compute-1 nova_compute[230010]: 2025-11-24 10:00:04.194 230014 DEBUG nova.compute.manager [req-7cc2c4f1-8d87-4f88-847e-e6ba03235c04 req-bb25cb6f-76e2-403d-93af-c7abbaa1b434 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 62465e3c-a372-4121-8a2e-5e10d1c3faf6] No waiting events found dispatching network-vif-unplugged-bf41c673-482b-42e3-ac98-475b716fa0e9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 24 10:00:04 compute-1 nova_compute[230010]: 2025-11-24 10:00:04.194 230014 DEBUG nova.compute.manager [req-7cc2c4f1-8d87-4f88-847e-e6ba03235c04 req-bb25cb6f-76e2-403d-93af-c7abbaa1b434 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 62465e3c-a372-4121-8a2e-5e10d1c3faf6] Received event network-vif-unplugged-bf41c673-482b-42e3-ac98-475b716fa0e9 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 24 10:00:04 compute-1 podman[240622]: 2025-11-24 10:00:04.22352935 +0000 UTC m=+0.040809088 container remove 312d1c6778ee57afc3309ab922725b04a981e2b6ccef78fd9ebe4f51f074714e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-81f18750-9169-4587-b6ca-88a2bbc58afc, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Nov 24 10:00:04 compute-1 ovn_metadata_agent[142331]: 2025-11-24 10:00:04.230 234803 DEBUG oslo.privsep.daemon [-] privsep: reply[9df17bde-0d51-4f81-93ca-ac873317f292]: (4, ('Mon Nov 24 10:00:04 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-81f18750-9169-4587-b6ca-88a2bbc58afc (312d1c6778ee57afc3309ab922725b04a981e2b6ccef78fd9ebe4f51f074714e)\n312d1c6778ee57afc3309ab922725b04a981e2b6ccef78fd9ebe4f51f074714e\nMon Nov 24 10:00:04 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-81f18750-9169-4587-b6ca-88a2bbc58afc (312d1c6778ee57afc3309ab922725b04a981e2b6ccef78fd9ebe4f51f074714e)\n312d1c6778ee57afc3309ab922725b04a981e2b6ccef78fd9ebe4f51f074714e\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 10:00:04 compute-1 ovn_metadata_agent[142331]: 2025-11-24 10:00:04.233 234803 DEBUG oslo.privsep.daemon [-] privsep: reply[90aea0bd-8267-4677-9c32-f3e33ea3e770]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 10:00:04 compute-1 ovn_metadata_agent[142331]: 2025-11-24 10:00:04.234 142336 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap81f18750-90, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 24 10:00:04 compute-1 kernel: tap81f18750-90: left promiscuous mode
Nov 24 10:00:04 compute-1 nova_compute[230010]: 2025-11-24 10:00:04.238 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:00:04 compute-1 nova_compute[230010]: 2025-11-24 10:00:04.252 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:00:04 compute-1 ovn_metadata_agent[142331]: 2025-11-24 10:00:04.255 234803 DEBUG oslo.privsep.daemon [-] privsep: reply[e4894c0e-5464-4cb1-870c-6c6940c1c09b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 10:00:04 compute-1 ovn_metadata_agent[142331]: 2025-11-24 10:00:04.268 234803 DEBUG oslo.privsep.daemon [-] privsep: reply[eca7c9f2-e966-413e-aba7-dff06c737b33]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 10:00:04 compute-1 ovn_metadata_agent[142331]: 2025-11-24 10:00:04.270 234803 DEBUG oslo.privsep.daemon [-] privsep: reply[6ebf6150-3d24-4f4d-a8fc-4b276ba0b2e8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 10:00:04 compute-1 ovn_metadata_agent[142331]: 2025-11-24 10:00:04.284 234803 DEBUG oslo.privsep.daemon [-] privsep: reply[1682351f-6e37-456e-8572-1974ee348156]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 422808, 'reachable_time': 38724, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 240653, 'error': None, 'target': 'ovnmeta-81f18750-9169-4587-b6ca-88a2bbc58afc', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 10:00:04 compute-1 ovn_metadata_agent[142331]: 2025-11-24 10:00:04.286 142476 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-81f18750-9169-4587-b6ca-88a2bbc58afc deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 24 10:00:04 compute-1 ovn_metadata_agent[142331]: 2025-11-24 10:00:04.286 142476 DEBUG oslo.privsep.daemon [-] privsep: reply[b740ddbc-c469-41fa-a77d-f0b0454ff82a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 10:00:04 compute-1 systemd[1]: run-netns-ovnmeta\x2d81f18750\x2d9169\x2d4587\x2db6ca\x2d88a2bbc58afc.mount: Deactivated successfully.
Nov 24 10:00:04 compute-1 systemd[1]: virtsecretd.service: Deactivated successfully.
Nov 24 10:00:04 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:00:04 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 10:00:04 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:00:04.440 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 10:00:04 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:00:04 compute-1 ceph-mon[80009]: pgmap v998: 353 pgs: 353 active+clean; 121 MiB data, 318 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 3.5 KiB/s wr, 28 op/s
Nov 24 10:00:04 compute-1 ovn_metadata_agent[142331]: 2025-11-24 10:00:04.741 142336 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=803b139a-7fca-4549-8597-645cf677225d, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '9'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 24 10:00:04 compute-1 nova_compute[230010]: 2025-11-24 10:00:04.777 230014 INFO nova.virt.libvirt.driver [None req-2ac931c1-7027-4bc7-9b19-c8714b315e43 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 62465e3c-a372-4121-8a2e-5e10d1c3faf6] Deleting instance files /var/lib/nova/instances/62465e3c-a372-4121-8a2e-5e10d1c3faf6_del
Nov 24 10:00:04 compute-1 nova_compute[230010]: 2025-11-24 10:00:04.778 230014 INFO nova.virt.libvirt.driver [None req-2ac931c1-7027-4bc7-9b19-c8714b315e43 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 62465e3c-a372-4121-8a2e-5e10d1c3faf6] Deletion of /var/lib/nova/instances/62465e3c-a372-4121-8a2e-5e10d1c3faf6_del complete
Nov 24 10:00:04 compute-1 nova_compute[230010]: 2025-11-24 10:00:04.829 230014 INFO nova.compute.manager [None req-2ac931c1-7027-4bc7-9b19-c8714b315e43 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 62465e3c-a372-4121-8a2e-5e10d1c3faf6] Took 0.93 seconds to destroy the instance on the hypervisor.
Nov 24 10:00:04 compute-1 nova_compute[230010]: 2025-11-24 10:00:04.830 230014 DEBUG oslo.service.loopingcall [None req-2ac931c1-7027-4bc7-9b19-c8714b315e43 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 24 10:00:04 compute-1 nova_compute[230010]: 2025-11-24 10:00:04.830 230014 DEBUG nova.compute.manager [-] [instance: 62465e3c-a372-4121-8a2e-5e10d1c3faf6] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 24 10:00:04 compute-1 nova_compute[230010]: 2025-11-24 10:00:04.830 230014 DEBUG nova.network.neutron [-] [instance: 62465e3c-a372-4121-8a2e-5e10d1c3faf6] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 24 10:00:05 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:00:05 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:00:05 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:00:05.014 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:00:05 compute-1 nova_compute[230010]: 2025-11-24 10:00:05.456 230014 DEBUG nova.network.neutron [-] [instance: 62465e3c-a372-4121-8a2e-5e10d1c3faf6] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 24 10:00:05 compute-1 nova_compute[230010]: 2025-11-24 10:00:05.474 230014 INFO nova.compute.manager [-] [instance: 62465e3c-a372-4121-8a2e-5e10d1c3faf6] Took 0.64 seconds to deallocate network for instance.
Nov 24 10:00:05 compute-1 nova_compute[230010]: 2025-11-24 10:00:05.540 230014 DEBUG nova.compute.manager [req-c2daf1ec-e795-4eac-8a3d-8f25bb3776b5 req-4c598591-94dc-49a9-ac5a-f3764a657bdd 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 62465e3c-a372-4121-8a2e-5e10d1c3faf6] Received event network-changed-bf41c673-482b-42e3-ac98-475b716fa0e9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 24 10:00:05 compute-1 nova_compute[230010]: 2025-11-24 10:00:05.541 230014 DEBUG nova.compute.manager [req-c2daf1ec-e795-4eac-8a3d-8f25bb3776b5 req-4c598591-94dc-49a9-ac5a-f3764a657bdd 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 62465e3c-a372-4121-8a2e-5e10d1c3faf6] Refreshing instance network info cache due to event network-changed-bf41c673-482b-42e3-ac98-475b716fa0e9. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 24 10:00:05 compute-1 nova_compute[230010]: 2025-11-24 10:00:05.541 230014 DEBUG oslo_concurrency.lockutils [req-c2daf1ec-e795-4eac-8a3d-8f25bb3776b5 req-4c598591-94dc-49a9-ac5a-f3764a657bdd 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Acquiring lock "refresh_cache-62465e3c-a372-4121-8a2e-5e10d1c3faf6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 24 10:00:05 compute-1 nova_compute[230010]: 2025-11-24 10:00:05.541 230014 DEBUG oslo_concurrency.lockutils [req-c2daf1ec-e795-4eac-8a3d-8f25bb3776b5 req-4c598591-94dc-49a9-ac5a-f3764a657bdd 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Acquired lock "refresh_cache-62465e3c-a372-4121-8a2e-5e10d1c3faf6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 24 10:00:05 compute-1 nova_compute[230010]: 2025-11-24 10:00:05.541 230014 DEBUG nova.network.neutron [req-c2daf1ec-e795-4eac-8a3d-8f25bb3776b5 req-4c598591-94dc-49a9-ac5a-f3764a657bdd 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 62465e3c-a372-4121-8a2e-5e10d1c3faf6] Refreshing network info cache for port bf41c673-482b-42e3-ac98-475b716fa0e9 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 24 10:00:05 compute-1 nova_compute[230010]: 2025-11-24 10:00:05.544 230014 DEBUG oslo_concurrency.lockutils [None req-2ac931c1-7027-4bc7-9b19-c8714b315e43 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 10:00:05 compute-1 nova_compute[230010]: 2025-11-24 10:00:05.544 230014 DEBUG oslo_concurrency.lockutils [None req-2ac931c1-7027-4bc7-9b19-c8714b315e43 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 10:00:05 compute-1 nova_compute[230010]: 2025-11-24 10:00:05.593 230014 DEBUG oslo_concurrency.processutils [None req-2ac931c1-7027-4bc7-9b19-c8714b315e43 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 10:00:05 compute-1 nova_compute[230010]: 2025-11-24 10:00:05.714 230014 DEBUG nova.network.neutron [req-c2daf1ec-e795-4eac-8a3d-8f25bb3776b5 req-4c598591-94dc-49a9-ac5a-f3764a657bdd 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 62465e3c-a372-4121-8a2e-5e10d1c3faf6] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 24 10:00:06 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 24 10:00:06 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1218318765' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 10:00:06 compute-1 nova_compute[230010]: 2025-11-24 10:00:06.041 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:00:06 compute-1 nova_compute[230010]: 2025-11-24 10:00:06.053 230014 DEBUG oslo_concurrency.processutils [None req-2ac931c1-7027-4bc7-9b19-c8714b315e43 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.460s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 10:00:06 compute-1 nova_compute[230010]: 2025-11-24 10:00:06.061 230014 DEBUG nova.compute.provider_tree [None req-2ac931c1-7027-4bc7-9b19-c8714b315e43 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Inventory has not changed in ProviderTree for provider: 1b7b0f22-dba8-42a8-9de3-763c9152946e update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 24 10:00:06 compute-1 nova_compute[230010]: 2025-11-24 10:00:06.068 230014 DEBUG nova.network.neutron [req-c2daf1ec-e795-4eac-8a3d-8f25bb3776b5 req-4c598591-94dc-49a9-ac5a-f3764a657bdd 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 62465e3c-a372-4121-8a2e-5e10d1c3faf6] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 24 10:00:06 compute-1 nova_compute[230010]: 2025-11-24 10:00:06.080 230014 DEBUG nova.scheduler.client.report [None req-2ac931c1-7027-4bc7-9b19-c8714b315e43 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Inventory has not changed for provider 1b7b0f22-dba8-42a8-9de3-763c9152946e based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 24 10:00:06 compute-1 ceph-mon[80009]: from='client.? 192.168.122.101:0/1218318765' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 10:00:06 compute-1 nova_compute[230010]: 2025-11-24 10:00:06.087 230014 DEBUG oslo_concurrency.lockutils [req-c2daf1ec-e795-4eac-8a3d-8f25bb3776b5 req-4c598591-94dc-49a9-ac5a-f3764a657bdd 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Releasing lock "refresh_cache-62465e3c-a372-4121-8a2e-5e10d1c3faf6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 24 10:00:06 compute-1 nova_compute[230010]: 2025-11-24 10:00:06.088 230014 DEBUG nova.compute.manager [req-c2daf1ec-e795-4eac-8a3d-8f25bb3776b5 req-4c598591-94dc-49a9-ac5a-f3764a657bdd 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 62465e3c-a372-4121-8a2e-5e10d1c3faf6] Received event network-vif-deleted-bf41c673-482b-42e3-ac98-475b716fa0e9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 24 10:00:06 compute-1 nova_compute[230010]: 2025-11-24 10:00:06.100 230014 DEBUG oslo_concurrency.lockutils [None req-2ac931c1-7027-4bc7-9b19-c8714b315e43 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.556s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 10:00:06 compute-1 nova_compute[230010]: 2025-11-24 10:00:06.123 230014 INFO nova.scheduler.client.report [None req-2ac931c1-7027-4bc7-9b19-c8714b315e43 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Deleted allocations for instance 62465e3c-a372-4121-8a2e-5e10d1c3faf6
Nov 24 10:00:06 compute-1 nova_compute[230010]: 2025-11-24 10:00:06.185 230014 DEBUG oslo_concurrency.lockutils [None req-2ac931c1-7027-4bc7-9b19-c8714b315e43 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lock "62465e3c-a372-4121-8a2e-5e10d1c3faf6" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.294s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 10:00:06 compute-1 nova_compute[230010]: 2025-11-24 10:00:06.263 230014 DEBUG nova.compute.manager [req-083be48e-6895-47c2-a8c7-53c909a14133 req-896a1dd0-a19f-46f6-9af1-aa01d5c832a8 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 62465e3c-a372-4121-8a2e-5e10d1c3faf6] Received event network-vif-plugged-bf41c673-482b-42e3-ac98-475b716fa0e9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 24 10:00:06 compute-1 nova_compute[230010]: 2025-11-24 10:00:06.264 230014 DEBUG oslo_concurrency.lockutils [req-083be48e-6895-47c2-a8c7-53c909a14133 req-896a1dd0-a19f-46f6-9af1-aa01d5c832a8 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Acquiring lock "62465e3c-a372-4121-8a2e-5e10d1c3faf6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 10:00:06 compute-1 nova_compute[230010]: 2025-11-24 10:00:06.264 230014 DEBUG oslo_concurrency.lockutils [req-083be48e-6895-47c2-a8c7-53c909a14133 req-896a1dd0-a19f-46f6-9af1-aa01d5c832a8 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Lock "62465e3c-a372-4121-8a2e-5e10d1c3faf6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 10:00:06 compute-1 nova_compute[230010]: 2025-11-24 10:00:06.264 230014 DEBUG oslo_concurrency.lockutils [req-083be48e-6895-47c2-a8c7-53c909a14133 req-896a1dd0-a19f-46f6-9af1-aa01d5c832a8 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Lock "62465e3c-a372-4121-8a2e-5e10d1c3faf6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 10:00:06 compute-1 nova_compute[230010]: 2025-11-24 10:00:06.265 230014 DEBUG nova.compute.manager [req-083be48e-6895-47c2-a8c7-53c909a14133 req-896a1dd0-a19f-46f6-9af1-aa01d5c832a8 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 62465e3c-a372-4121-8a2e-5e10d1c3faf6] No waiting events found dispatching network-vif-plugged-bf41c673-482b-42e3-ac98-475b716fa0e9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 24 10:00:06 compute-1 nova_compute[230010]: 2025-11-24 10:00:06.265 230014 WARNING nova.compute.manager [req-083be48e-6895-47c2-a8c7-53c909a14133 req-896a1dd0-a19f-46f6-9af1-aa01d5c832a8 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 62465e3c-a372-4121-8a2e-5e10d1c3faf6] Received unexpected event network-vif-plugged-bf41c673-482b-42e3-ac98-475b716fa0e9 for instance with vm_state deleted and task_state None.
Nov 24 10:00:06 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:00:06 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 10:00:06 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:00:06.444 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 10:00:07 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:00:07 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:00:07 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:00:07.017 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:00:07 compute-1 ceph-mon[80009]: pgmap v999: 353 pgs: 353 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 39 KiB/s rd, 5.7 KiB/s wr, 57 op/s
Nov 24 10:00:07 compute-1 sudo[240679]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 10:00:07 compute-1 sudo[240679]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 10:00:07 compute-1 sudo[240679]: pam_unix(sudo:session): session closed for user root
Nov 24 10:00:07 compute-1 podman[240703]: 2025-11-24 10:00:07.669823715 +0000 UTC m=+0.073907819 container health_status 16798e42088c21440960ac0ba4f86339f9edab5c078277a1dcf4fb01c220329c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3)
Nov 24 10:00:08 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:00:08 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:00:08 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:00:08.448 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:00:08 compute-1 nova_compute[230010]: 2025-11-24 10:00:08.951 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:00:08 compute-1 nova_compute[230010]: 2025-11-24 10:00:08.992 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:00:09 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:00:09 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:00:09 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:00:09.020 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:00:09 compute-1 nova_compute[230010]: 2025-11-24 10:00:09.163 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:00:09 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:00:09 compute-1 ceph-mon[80009]: pgmap v1000: 353 pgs: 353 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 33 KiB/s rd, 5.0 KiB/s wr, 49 op/s
Nov 24 10:00:10 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:00:10 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:00:10 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:00:10.452 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:00:10 compute-1 ceph-mon[80009]: pgmap v1001: 353 pgs: 353 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 33 KiB/s rd, 5.0 KiB/s wr, 49 op/s
Nov 24 10:00:11 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:00:11 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:00:11 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:00:11.023 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:00:11 compute-1 nova_compute[230010]: 2025-11-24 10:00:11.044 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:00:12 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:00:12 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:00:12 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:00:12.455 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:00:13 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:00:13 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:00:13 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:00:13.026 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:00:13 compute-1 podman[240728]: 2025-11-24 10:00:13.364068964 +0000 UTC m=+0.102078125 container health_status c4a7fece2a8abbd773f184380ebf0443df5298e1c3e7a580495db5c46749acd2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 24 10:00:13 compute-1 ceph-mon[80009]: pgmap v1002: 353 pgs: 353 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 33 KiB/s rd, 5.0 KiB/s wr, 50 op/s
Nov 24 10:00:14 compute-1 nova_compute[230010]: 2025-11-24 10:00:14.166 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:00:14 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:00:14 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 10:00:14 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:00:14.458 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 10:00:14 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:00:14 compute-1 ceph-mon[80009]: pgmap v1003: 353 pgs: 353 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 2.2 KiB/s wr, 29 op/s
Nov 24 10:00:14 compute-1 nova_compute[230010]: 2025-11-24 10:00:14.765 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 10:00:15 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:00:15 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:00:15 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:00:15.029 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:00:15 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 10:00:15 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 10:00:15 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 10:00:16 compute-1 nova_compute[230010]: 2025-11-24 10:00:16.046 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:00:16 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:00:16 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:00:16 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:00:16.462 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:00:16 compute-1 ceph-mon[80009]: pgmap v1004: 353 pgs: 353 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 2.2 KiB/s wr, 29 op/s
Nov 24 10:00:16 compute-1 ceph-mon[80009]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #58. Immutable memtables: 0.
Nov 24 10:00:16 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-10:00:16.615767) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 24 10:00:16 compute-1 ceph-mon[80009]: rocksdb: [db/flush_job.cc:856] [default] [JOB 33] Flushing memtable with next log file: 58
Nov 24 10:00:16 compute-1 ceph-mon[80009]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763978416615861, "job": 33, "event": "flush_started", "num_memtables": 1, "num_entries": 1152, "num_deletes": 502, "total_data_size": 1864338, "memory_usage": 1900464, "flush_reason": "Manual Compaction"}
Nov 24 10:00:16 compute-1 ceph-mon[80009]: rocksdb: [db/flush_job.cc:885] [default] [JOB 33] Level-0 flush table #59: started
Nov 24 10:00:16 compute-1 ceph-mon[80009]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763978416623446, "cf_name": "default", "job": 33, "event": "table_file_creation", "file_number": 59, "file_size": 916203, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 30138, "largest_seqno": 31285, "table_properties": {"data_size": 911957, "index_size": 1386, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1797, "raw_key_size": 13933, "raw_average_key_size": 19, "raw_value_size": 901005, "raw_average_value_size": 1261, "num_data_blocks": 60, "num_entries": 714, "num_filter_entries": 714, "num_deletions": 502, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763978354, "oldest_key_time": 1763978354, "file_creation_time": 1763978416, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "299c38d0-06ca-4074-b462-97cee3c14bc3", "db_session_id": "IKBI0BILOO7CZC90TSBP", "orig_file_number": 59, "seqno_to_time_mapping": "N/A"}}
Nov 24 10:00:16 compute-1 ceph-mon[80009]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 33] Flush lasted 7702 microseconds, and 3340 cpu microseconds.
Nov 24 10:00:16 compute-1 ceph-mon[80009]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 24 10:00:16 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-10:00:16.623486) [db/flush_job.cc:967] [default] [JOB 33] Level-0 flush table #59: 916203 bytes OK
Nov 24 10:00:16 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-10:00:16.623506) [db/memtable_list.cc:519] [default] Level-0 commit table #59 started
Nov 24 10:00:16 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-10:00:16.625062) [db/memtable_list.cc:722] [default] Level-0 commit table #59: memtable #1 done
Nov 24 10:00:16 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-10:00:16.625075) EVENT_LOG_v1 {"time_micros": 1763978416625071, "job": 33, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 24 10:00:16 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-10:00:16.625093) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 24 10:00:16 compute-1 ceph-mon[80009]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 33] Try to delete WAL files size 1857716, prev total WAL file size 1857716, number of live WAL files 2.
Nov 24 10:00:16 compute-1 ceph-mon[80009]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000055.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 10:00:16 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-10:00:16.625893) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032323539' seq:72057594037927935, type:22 .. '7061786F730032353131' seq:0, type:0; will stop at (end)
Nov 24 10:00:16 compute-1 ceph-mon[80009]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 34] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 24 10:00:16 compute-1 ceph-mon[80009]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 33 Base level 0, inputs: [59(894KB)], [57(16MB)]
Nov 24 10:00:16 compute-1 ceph-mon[80009]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763978416625953, "job": 34, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [59], "files_L6": [57], "score": -1, "input_data_size": 18037846, "oldest_snapshot_seqno": -1}
Nov 24 10:00:16 compute-1 ceph-mon[80009]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 34] Generated table #60: 5796 keys, 12163508 bytes, temperature: kUnknown
Nov 24 10:00:16 compute-1 ceph-mon[80009]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763978416687938, "cf_name": "default", "job": 34, "event": "table_file_creation", "file_number": 60, "file_size": 12163508, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12126898, "index_size": 21012, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 14533, "raw_key_size": 149986, "raw_average_key_size": 25, "raw_value_size": 12024311, "raw_average_value_size": 2074, "num_data_blocks": 839, "num_entries": 5796, "num_filter_entries": 5796, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763976422, "oldest_key_time": 0, "file_creation_time": 1763978416, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "299c38d0-06ca-4074-b462-97cee3c14bc3", "db_session_id": "IKBI0BILOO7CZC90TSBP", "orig_file_number": 60, "seqno_to_time_mapping": "N/A"}}
Nov 24 10:00:16 compute-1 ceph-mon[80009]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 24 10:00:16 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-10:00:16.688471) [db/compaction/compaction_job.cc:1663] [default] [JOB 34] Compacted 1@0 + 1@6 files to L6 => 12163508 bytes
Nov 24 10:00:16 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-10:00:16.692674) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 290.3 rd, 195.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.9, 16.3 +0.0 blob) out(11.6 +0.0 blob), read-write-amplify(33.0) write-amplify(13.3) OK, records in: 6796, records dropped: 1000 output_compression: NoCompression
Nov 24 10:00:16 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-10:00:16.692715) EVENT_LOG_v1 {"time_micros": 1763978416692697, "job": 34, "event": "compaction_finished", "compaction_time_micros": 62139, "compaction_time_cpu_micros": 27692, "output_level": 6, "num_output_files": 1, "total_output_size": 12163508, "num_input_records": 6796, "num_output_records": 5796, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 24 10:00:16 compute-1 ceph-mon[80009]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000059.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 10:00:16 compute-1 ceph-mon[80009]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763978416693279, "job": 34, "event": "table_file_deletion", "file_number": 59}
Nov 24 10:00:16 compute-1 ceph-mon[80009]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000057.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 10:00:16 compute-1 ceph-mon[80009]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763978416700249, "job": 34, "event": "table_file_deletion", "file_number": 57}
Nov 24 10:00:16 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-10:00:16.625775) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 10:00:16 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-10:00:16.700383) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 10:00:16 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-10:00:16.700392) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 10:00:16 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-10:00:16.700395) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 10:00:16 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-10:00:16.700397) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 10:00:16 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-10:00:16.700439) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 10:00:16 compute-1 nova_compute[230010]: 2025-11-24 10:00:16.766 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 10:00:17 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:00:17 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:00:17 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:00:17.034 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:00:17 compute-1 nova_compute[230010]: 2025-11-24 10:00:17.765 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 10:00:17 compute-1 nova_compute[230010]: 2025-11-24 10:00:17.765 230014 DEBUG nova.compute.manager [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 24 10:00:18 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:00:18 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:00:18 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:00:18.466 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:00:19 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:00:19 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:00:19 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:00:19.037 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:00:19 compute-1 nova_compute[230010]: 2025-11-24 10:00:19.139 230014 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763978404.132737, 62465e3c-a372-4121-8a2e-5e10d1c3faf6 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 24 10:00:19 compute-1 nova_compute[230010]: 2025-11-24 10:00:19.141 230014 INFO nova.compute.manager [-] [instance: 62465e3c-a372-4121-8a2e-5e10d1c3faf6] VM Stopped (Lifecycle Event)
Nov 24 10:00:19 compute-1 nova_compute[230010]: 2025-11-24 10:00:19.169 230014 DEBUG nova.compute.manager [None req-82e2906b-b0c5-411a-b6d4-35d47615bbb1 - - - - - -] [instance: 62465e3c-a372-4121-8a2e-5e10d1c3faf6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 24 10:00:19 compute-1 nova_compute[230010]: 2025-11-24 10:00:19.170 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:00:19 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:00:19 compute-1 ceph-mon[80009]: pgmap v1005: 353 pgs: 353 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:00:19 compute-1 nova_compute[230010]: 2025-11-24 10:00:19.765 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 10:00:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 10:00:20.062 142336 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 10:00:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 10:00:20.062 142336 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 10:00:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 10:00:20.062 142336 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 10:00:20 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:00:20 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:00:20 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:00:20.469 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:00:20 compute-1 ceph-mon[80009]: from='client.? 192.168.122.102:0/3007477117' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 10:00:20 compute-1 ceph-mon[80009]: pgmap v1006: 353 pgs: 353 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:00:21 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:00:21 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:00:21 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:00:21.040 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:00:21 compute-1 nova_compute[230010]: 2025-11-24 10:00:21.049 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:00:21 compute-1 podman[240759]: 2025-11-24 10:00:21.324958178 +0000 UTC m=+0.066627779 container health_status 6794d9d2f16f865643977633ea2bfec0506af6c4f15262c278b71a4c0754ea0b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent)
Nov 24 10:00:21 compute-1 ceph-mon[80009]: from='client.? 192.168.122.102:0/397015019' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 10:00:21 compute-1 nova_compute[230010]: 2025-11-24 10:00:21.765 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 10:00:22 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:00:22 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:00:22 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:00:22.471 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:00:22 compute-1 ceph-mon[80009]: from='client.? 192.168.122.100:0/492539947' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 10:00:22 compute-1 ceph-mon[80009]: pgmap v1007: 353 pgs: 353 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:00:22 compute-1 nova_compute[230010]: 2025-11-24 10:00:22.764 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 10:00:22 compute-1 nova_compute[230010]: 2025-11-24 10:00:22.765 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 10:00:22 compute-1 nova_compute[230010]: 2025-11-24 10:00:22.783 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 10:00:22 compute-1 nova_compute[230010]: 2025-11-24 10:00:22.783 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 10:00:22 compute-1 nova_compute[230010]: 2025-11-24 10:00:22.783 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 10:00:22 compute-1 nova_compute[230010]: 2025-11-24 10:00:22.783 230014 DEBUG nova.compute.resource_tracker [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 24 10:00:22 compute-1 nova_compute[230010]: 2025-11-24 10:00:22.784 230014 DEBUG oslo_concurrency.processutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 10:00:23 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:00:23 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:00:23 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:00:23.043 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:00:23 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 24 10:00:23 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2206353367' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 10:00:23 compute-1 nova_compute[230010]: 2025-11-24 10:00:23.237 230014 DEBUG oslo_concurrency.processutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.453s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 10:00:23 compute-1 nova_compute[230010]: 2025-11-24 10:00:23.410 230014 WARNING nova.virt.libvirt.driver [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 24 10:00:23 compute-1 nova_compute[230010]: 2025-11-24 10:00:23.412 230014 DEBUG nova.compute.resource_tracker [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4975MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 24 10:00:23 compute-1 nova_compute[230010]: 2025-11-24 10:00:23.413 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 10:00:23 compute-1 nova_compute[230010]: 2025-11-24 10:00:23.413 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 10:00:23 compute-1 nova_compute[230010]: 2025-11-24 10:00:23.469 230014 DEBUG nova.compute.resource_tracker [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 24 10:00:23 compute-1 nova_compute[230010]: 2025-11-24 10:00:23.469 230014 DEBUG nova.compute.resource_tracker [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 24 10:00:23 compute-1 nova_compute[230010]: 2025-11-24 10:00:23.488 230014 DEBUG oslo_concurrency.processutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 10:00:23 compute-1 ceph-mon[80009]: from='client.? 192.168.122.100:0/2622083022' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 10:00:23 compute-1 ceph-mon[80009]: from='client.? 192.168.122.101:0/2206353367' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 10:00:23 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 24 10:00:23 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1856495997' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 10:00:23 compute-1 nova_compute[230010]: 2025-11-24 10:00:23.959 230014 DEBUG oslo_concurrency.processutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.471s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 10:00:23 compute-1 nova_compute[230010]: 2025-11-24 10:00:23.966 230014 DEBUG nova.compute.provider_tree [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Inventory has not changed in ProviderTree for provider: 1b7b0f22-dba8-42a8-9de3-763c9152946e update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 24 10:00:23 compute-1 nova_compute[230010]: 2025-11-24 10:00:23.978 230014 DEBUG nova.scheduler.client.report [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Inventory has not changed for provider 1b7b0f22-dba8-42a8-9de3-763c9152946e based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 24 10:00:23 compute-1 nova_compute[230010]: 2025-11-24 10:00:23.996 230014 DEBUG nova.compute.resource_tracker [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 24 10:00:23 compute-1 nova_compute[230010]: 2025-11-24 10:00:23.997 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.584s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 10:00:24 compute-1 nova_compute[230010]: 2025-11-24 10:00:24.175 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:00:24 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:00:24 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:00:24 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:00:24 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:00:24.475 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:00:24 compute-1 ceph-mon[80009]: from='client.? 192.168.122.101:0/1856495997' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 10:00:24 compute-1 ceph-mon[80009]: pgmap v1008: 353 pgs: 353 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:00:24 compute-1 nova_compute[230010]: 2025-11-24 10:00:24.992 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 10:00:24 compute-1 nova_compute[230010]: 2025-11-24 10:00:24.992 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 10:00:24 compute-1 nova_compute[230010]: 2025-11-24 10:00:24.993 230014 DEBUG nova.compute.manager [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 24 10:00:24 compute-1 nova_compute[230010]: 2025-11-24 10:00:24.993 230014 DEBUG nova.compute.manager [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 24 10:00:25 compute-1 nova_compute[230010]: 2025-11-24 10:00:25.004 230014 DEBUG nova.compute.manager [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 24 10:00:25 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:00:25 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:00:25 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:00:25.046 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:00:26 compute-1 nova_compute[230010]: 2025-11-24 10:00:26.052 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:00:26 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:00:26 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:00:26 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:00:26.478 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:00:26 compute-1 ceph-mon[80009]: pgmap v1009: 353 pgs: 353 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:00:27 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:00:27 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:00:27 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:00:27.049 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:00:27 compute-1 sudo[240825]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 10:00:27 compute-1 sudo[240825]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 10:00:27 compute-1 sudo[240825]: pam_unix(sudo:session): session closed for user root
Nov 24 10:00:28 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:00:28 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:00:28 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:00:28.481 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:00:28 compute-1 ceph-mon[80009]: pgmap v1010: 353 pgs: 353 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:00:29 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:00:29 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:00:29 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:00:29.052 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:00:29 compute-1 nova_compute[230010]: 2025-11-24 10:00:29.178 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:00:29 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:00:29 compute-1 ceph-mon[80009]: from='client.? 192.168.122.100:0/2796252596' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 10:00:29 compute-1 sudo[240851]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 10:00:29 compute-1 sudo[240851]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 10:00:29 compute-1 sudo[240851]: pam_unix(sudo:session): session closed for user root
Nov 24 10:00:29 compute-1 sudo[240876]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Nov 24 10:00:29 compute-1 sudo[240876]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 10:00:30 compute-1 sudo[240876]: pam_unix(sudo:session): session closed for user root
Nov 24 10:00:30 compute-1 sudo[240933]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 10:00:30 compute-1 sudo[240933]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 10:00:30 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 10:00:30 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 10:00:30 compute-1 sudo[240933]: pam_unix(sudo:session): session closed for user root
Nov 24 10:00:30 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:00:30 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:00:30 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:00:30.483 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:00:30 compute-1 sudo[240958]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 list-networks
Nov 24 10:00:30 compute-1 sudo[240958]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 10:00:30 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 10:00:30 compute-1 ceph-mon[80009]: pgmap v1011: 353 pgs: 353 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:00:30 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 24 10:00:30 compute-1 sudo[240958]: pam_unix(sudo:session): session closed for user root
Nov 24 10:00:30 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Nov 24 10:00:30 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 24 10:00:30 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Nov 24 10:00:30 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Nov 24 10:00:30 compute-1 ceph-mon[80009]: log_channel(audit) log [INF] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 24 10:00:31 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Nov 24 10:00:31 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Nov 24 10:00:31 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:00:31 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:00:31 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:00:31.054 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:00:31 compute-1 nova_compute[230010]: 2025-11-24 10:00:31.053 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:00:31 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 24 10:00:31 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 10:00:31 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Nov 24 10:00:31 compute-1 ceph-mon[80009]: log_channel(audit) log [INF] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 10:00:31 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Nov 24 10:00:31 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Nov 24 10:00:31 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Nov 24 10:00:31 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 10:00:31 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Nov 24 10:00:31 compute-1 ceph-mon[80009]: log_channel(audit) log [INF] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 10:00:31 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 24 10:00:31 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 10:00:31 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 10:00:31 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 10:00:31 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 10:00:31 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 24 10:00:31 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 24 10:00:31 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 10:00:31 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 10:00:31 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 10:00:31 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 10:00:31 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 10:00:31 compute-1 ceph-mon[80009]: pgmap v1012: 353 pgs: 353 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Nov 24 10:00:31 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 10:00:31 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 10:00:31 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 10:00:31 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 10:00:31 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 10:00:32 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:00:32 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:00:32 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:00:32.487 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:00:33 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:00:33 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:00:33 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:00:33.058 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:00:34 compute-1 ceph-mon[80009]: pgmap v1013: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 2.0 MiB/s wr, 31 op/s
Nov 24 10:00:34 compute-1 nova_compute[230010]: 2025-11-24 10:00:34.181 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:00:34 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:00:34 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:00:34 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:00:34 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:00:34.489 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:00:35 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:00:35 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:00:35 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:00:35.061 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:00:35 compute-1 ceph-mon[80009]: from='client.? 192.168.122.100:0/2929576563' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 24 10:00:35 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 24 10:00:35 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 24 10:00:35 compute-1 sudo[241004]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 24 10:00:35 compute-1 sudo[241004]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 10:00:35 compute-1 sudo[241004]: pam_unix(sudo:session): session closed for user root
Nov 24 10:00:36 compute-1 nova_compute[230010]: 2025-11-24 10:00:36.056 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:00:36 compute-1 ceph-mon[80009]: pgmap v1014: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 2.0 MiB/s wr, 31 op/s
Nov 24 10:00:36 compute-1 ceph-mon[80009]: from='client.? 192.168.122.100:0/2075905231' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 24 10:00:36 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 10:00:36 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 10:00:36 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:00:36 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:00:36 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:00:36.493 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:00:37 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:00:37 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:00:37 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:00:37.063 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:00:38 compute-1 ceph-mon[80009]: pgmap v1015: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 2.0 MiB/s wr, 31 op/s
Nov 24 10:00:38 compute-1 podman[241031]: 2025-11-24 10:00:38.355072613 +0000 UTC m=+0.087553271 container health_status 16798e42088c21440960ac0ba4f86339f9edab5c078277a1dcf4fb01c220329c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 24 10:00:38 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:00:38 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 10:00:38 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:00:38.495 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 10:00:39 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:00:39 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:00:39 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:00:39.067 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:00:39 compute-1 nova_compute[230010]: 2025-11-24 10:00:39.182 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:00:39 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:00:40 compute-1 ceph-mon[80009]: pgmap v1016: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 2.0 MiB/s wr, 31 op/s
Nov 24 10:00:40 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:00:40 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:00:40 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:00:40.498 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:00:41 compute-1 nova_compute[230010]: 2025-11-24 10:00:41.058 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:00:41 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:00:41 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:00:41 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:00:41.069 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:00:42 compute-1 ceph-mon[80009]: pgmap v1017: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 2.0 MiB/s wr, 31 op/s
Nov 24 10:00:42 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:00:42 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:00:42 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:00:42.500 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:00:43 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:00:43 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:00:43 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:00:43.072 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:00:44 compute-1 nova_compute[230010]: 2025-11-24 10:00:44.185 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:00:44 compute-1 podman[241055]: 2025-11-24 10:00:44.3736038 +0000 UTC m=+0.101476462 container health_status c4a7fece2a8abbd773f184380ebf0443df5298e1c3e7a580495db5c46749acd2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2)
Nov 24 10:00:44 compute-1 ceph-mon[80009]: pgmap v1018: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 102 op/s
Nov 24 10:00:44 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:00:44 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:00:44 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:00:44 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:00:44.502 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:00:45 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:00:45 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 10:00:45 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:00:45.075 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 10:00:45 compute-1 ceph-mon[80009]: from='client.? 192.168.122.100:0/4136452768' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 10:00:45 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 10:00:45 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 10:00:46 compute-1 nova_compute[230010]: 2025-11-24 10:00:46.061 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:00:46 compute-1 ceph-mon[80009]: pgmap v1019: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Nov 24 10:00:46 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 10:00:46 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:00:46 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:00:46 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:00:46.505 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:00:47 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:00:47 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:00:47 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:00:47.079 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:00:47 compute-1 sudo[241082]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 10:00:47 compute-1 sudo[241082]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 10:00:47 compute-1 sudo[241082]: pam_unix(sudo:session): session closed for user root
Nov 24 10:00:48 compute-1 ceph-mon[80009]: pgmap v1020: 353 pgs: 353 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 101 op/s
Nov 24 10:00:48 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:00:48 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:00:48 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:00:48.507 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:00:49 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:00:49 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:00:49 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:00:49.082 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:00:49 compute-1 nova_compute[230010]: 2025-11-24 10:00:49.186 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:00:49 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:00:50 compute-1 ceph-mon[80009]: pgmap v1021: 353 pgs: 353 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 101 op/s
Nov 24 10:00:50 compute-1 ovn_controller[132966]: 2025-11-24T10:00:50Z|00087|memory_trim|INFO|Detected inactivity (last active 30004 ms ago): trimming memory
Nov 24 10:00:50 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:00:50 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:00:50 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:00:50.510 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:00:51 compute-1 nova_compute[230010]: 2025-11-24 10:00:51.063 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:00:51 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:00:51 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:00:51 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:00:51.085 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:00:52 compute-1 podman[241110]: 2025-11-24 10:00:52.32737537 +0000 UTC m=+0.061516495 container health_status 6794d9d2f16f865643977633ea2bfec0506af6c4f15262c278b71a4c0754ea0b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
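
[annotation] The podman health_status event above is driven by the healthcheck stanza embedded in config_data ('test': '/openstack/healthcheck', bind-mounted from /var/lib/openstack/healthchecks). The recorded status can also be polled on demand; a small sketch shelling out to podman, with the container name taken from the event (the inspect field path is .State.Health.Status on current podman; older releases expose .State.Healthcheck.Status instead):

    import subprocess

    # Trigger one run of the configured test command ...
    subprocess.run(["podman", "healthcheck", "run", "ovn_metadata_agent"],
                   check=False)
    # ... then read back the recorded status ("healthy" in the event above).
    status = subprocess.check_output(
        ["podman", "inspect", "--format",
         "{{.State.Health.Status}}", "ovn_metadata_agent"],
        text=True,
    ).strip()
    print(status)
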
Nov 24 10:00:52 compute-1 ceph-mon[80009]: pgmap v1022: 353 pgs: 353 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 101 op/s
Nov 24 10:00:52 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:00:52 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:00:52 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:00:52.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:00:53 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:00:53 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:00:53 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:00:53.088 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:00:54 compute-1 nova_compute[230010]: 2025-11-24 10:00:54.188 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:00:54 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:00:54 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:00:54 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:00:54 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:00:54.515 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:00:54 compute-1 ceph-mon[80009]: pgmap v1023: 353 pgs: 353 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 101 op/s
Nov 24 10:00:55 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:00:55 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:00:55 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:00:55.091 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:00:55 compute-1 sshd-session[241131]: Connection closed by 164.92.213.168 port 32966
Nov 24 10:00:56 compute-1 nova_compute[230010]: 2025-11-24 10:00:56.064 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:00:56 compute-1 ceph-mon[80009]: pgmap v1024: 353 pgs: 353 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Nov 24 10:00:56 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:00:56 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:00:56 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:00:56.518 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:00:57 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:00:57 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:00:57 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:00:57.094 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:00:58 compute-1 ceph-mon[80009]: pgmap v1025: 353 pgs: 353 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Nov 24 10:00:58 compute-1 ceph-mon[80009]: from='client.? 192.168.122.100:0/201546860' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 10:00:58 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:00:58 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:00:58 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:00:58.520 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:00:59 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:00:59 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:00:59 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:00:59.098 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:00:59 compute-1 nova_compute[230010]: 2025-11-24 10:00:59.189 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:00:59 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:00:59 compute-1 nova_compute[230010]: 2025-11-24 10:00:59.886 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:00:59 compute-1 ovn_metadata_agent[142331]: 2025-11-24 10:00:59.886 142336 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=10, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '86:13:51', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '4e:f0:a8:6f:5e:1b'}, ipsec=False) old=SB_Global(nb_cfg=9) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 24 10:00:59 compute-1 ovn_metadata_agent[142331]: 2025-11-24 10:00:59.887 142336 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 24 10:00:59 compute-1 ovn_metadata_agent[142331]: 2025-11-24 10:00:59.888 142336 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=803b139a-7fca-4549-8597-645cf677225d, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '10'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
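
[annotation] The three ovn_metadata_agent lines above are the standard ovsdbapp round-trip: a row event matches SB_Global.nb_cfg moving 9 -> 10, the agent waits its backoff (0 seconds here), then acks by writing neutron:ovn-metadata-sb-cfg into its Chassis_Private row with a DbSetCommand. A sketch of the equivalent call through ovsdbapp's southbound API; the connection endpoint is an assumption, while the table, record UUID, and column value come from the log lines:

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.ovn_southbound import impl_idl

    # Endpoint is illustrative; the agent reads its own from config.
    idl = connection.OvsdbIdl.from_server('tcp:127.0.0.1:6642',
                                          'OVN_Southbound')
    sb_api = impl_idl.OvnSbApiIdlImpl(connection.Connection(idl, timeout=10))

    # Equivalent of the DbSetCommand txn logged above (the agent also
    # passes if_exists=True).
    sb_api.db_set(
        'Chassis_Private',
        '803b139a-7fca-4549-8597-645cf677225d',
        ('external_ids', {'neutron:ovn-metadata-sb-cfg': '10'}),
    ).execute(check_error=True)
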
Nov 24 10:01:00 compute-1 ceph-mon[80009]: pgmap v1026: 353 pgs: 353 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:01:00 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 10:01:00 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 10:01:00 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:01:00 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:01:00 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:01:00.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:01:01 compute-1 nova_compute[230010]: 2025-11-24 10:01:01.066 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:01:01 compute-1 CROND[241136]: (root) CMD (run-parts /etc/cron.hourly)
Nov 24 10:01:01 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:01:01 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:01:01 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:01:01.101 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:01:01 compute-1 run-parts[241139]: (/etc/cron.hourly) starting 0anacron
Nov 24 10:01:01 compute-1 run-parts[241145]: (/etc/cron.hourly) finished 0anacron
Nov 24 10:01:01 compute-1 CROND[241135]: (root) CMDEND (run-parts /etc/cron.hourly)
Nov 24 10:01:01 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 10:01:02 compute-1 ceph-mon[80009]: pgmap v1027: 353 pgs: 353 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:01:02 compute-1 ceph-mon[80009]: from='client.? 192.168.122.10:0/951297493' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 24 10:01:02 compute-1 ceph-mon[80009]: from='client.? 192.168.122.10:0/951297493' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
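
[annotation] The pair of dispatches above is Cinder's periodic capacity poll: client.openstack sends `df` and `osd pool get-quota` on the volumes pool as JSON mon commands. The same wire format is available from Python through librados; a sketch with the conf path and client id as in the log (python3-rados assumed installed):

    import json
    import rados

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf',
                          rados_id='openstack')
    cluster.connect()
    try:
        # Identical JSON to the cmd=[...] payload audited above.
        cmd = json.dumps({"prefix": "osd pool get-quota",
                          "pool": "volumes", "format": "json"})
        ret, out, errs = cluster.mon_command(cmd, b'')
        print(ret, json.loads(out))
    finally:
        cluster.shutdown()
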
Nov 24 10:01:02 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:01:02 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:01:02 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:01:02.525 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:01:03 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:01:03 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:01:03 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:01:03.104 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:01:03 compute-1 ceph-mon[80009]: from='client.? 192.168.122.100:0/1491936105' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 24 10:01:04 compute-1 nova_compute[230010]: 2025-11-24 10:01:04.193 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:01:04 compute-1 ceph-mon[80009]: pgmap v1028: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Nov 24 10:01:04 compute-1 ceph-mon[80009]: from='client.? 192.168.122.100:0/1956899645' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 24 10:01:04 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:01:04 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:01:04 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:01:04 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:01:04.528 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:01:05 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:01:05 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:01:05 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:01:05.108 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:01:06 compute-1 nova_compute[230010]: 2025-11-24 10:01:06.067 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:01:06 compute-1 ceph-mon[80009]: pgmap v1029: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Nov 24 10:01:06 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:01:06 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:01:06 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:01:06.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:01:07 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:01:07 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:01:07 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:01:07.111 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:01:07 compute-1 sudo[241150]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 10:01:07 compute-1 sudo[241150]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 10:01:07 compute-1 sudo[241150]: pam_unix(sudo:session): session closed for user root
Nov 24 10:01:08 compute-1 ceph-mon[80009]: pgmap v1030: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 1.8 MiB/s wr, 33 op/s
Nov 24 10:01:08 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:01:08 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:01:08 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:01:08.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:01:09 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:01:09 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:01:09 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:01:09.114 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:01:09 compute-1 nova_compute[230010]: 2025-11-24 10:01:09.252 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:01:09 compute-1 podman[241176]: 2025-11-24 10:01:09.36836112 +0000 UTC m=+0.081360813 container health_status 16798e42088c21440960ac0ba4f86339f9edab5c078277a1dcf4fb01c220329c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 24 10:01:09 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:01:10 compute-1 ceph-mon[80009]: pgmap v1031: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 1.8 MiB/s wr, 33 op/s
Nov 24 10:01:10 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:01:10 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:01:10 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:01:10.537 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:01:11 compute-1 nova_compute[230010]: 2025-11-24 10:01:11.069 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:01:11 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:01:11 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:01:11 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:01:11.117 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:01:12 compute-1 ceph-mon[80009]: pgmap v1032: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 1.8 MiB/s wr, 33 op/s
Nov 24 10:01:12 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:01:12 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:01:12 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:01:12.540 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:01:13 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:01:13 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:01:13 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:01:13.120 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:01:13 compute-1 ceph-mon[80009]: from='client.? 192.168.122.100:0/376239360' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 10:01:14 compute-1 nova_compute[230010]: 2025-11-24 10:01:14.253 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:01:14 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:01:14 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:01:14 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:01:14 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:01:14.544 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:01:14 compute-1 ceph-mon[80009]: pgmap v1033: 353 pgs: 353 active+clean; 41 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 128 op/s
Nov 24 10:01:15 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:01:15 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:01:15 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:01:15.123 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:01:15 compute-1 podman[241202]: 2025-11-24 10:01:15.339137704 +0000 UTC m=+0.078319219 container health_status c4a7fece2a8abbd773f184380ebf0443df5298e1c3e7a580495db5c46749acd2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 24 10:01:15 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 10:01:15 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 10:01:15 compute-1 ceph-mon[80009]: pgmap v1034: 353 pgs: 353 active+clean; 41 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 101 op/s
Nov 24 10:01:15 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 10:01:15 compute-1 nova_compute[230010]: 2025-11-24 10:01:15.764 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 10:01:16 compute-1 nova_compute[230010]: 2025-11-24 10:01:16.093 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:01:16 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:01:16 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:01:16 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:01:16.547 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:01:16 compute-1 nova_compute[230010]: 2025-11-24 10:01:16.765 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 10:01:17 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:01:17 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:01:17 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:01:17.126 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:01:18 compute-1 ceph-mon[80009]: pgmap v1035: 353 pgs: 353 active+clean; 41 MiB data, 266 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 101 op/s
Nov 24 10:01:18 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:01:18 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:01:18 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:01:18.551 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:01:19 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:01:19 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:01:19 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:01:19.130 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:01:19 compute-1 nova_compute[230010]: 2025-11-24 10:01:19.256 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:01:19 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:01:19 compute-1 nova_compute[230010]: 2025-11-24 10:01:19.764 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 10:01:19 compute-1 nova_compute[230010]: 2025-11-24 10:01:19.765 230014 DEBUG nova.compute.manager [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
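
[annotation] The periodic_task lines threading through this window are oslo.service's decorator-driven loop inside nova-compute's ComputeManager; _reclaim_queued_deletes above also shows the usual guard of skipping when its interval option is <= 0. The registration pattern, reduced to a sketch (class and task names here are illustrative, not nova's):

    from oslo_config import cfg
    from oslo_service import periodic_task

    class DemoManager(periodic_task.PeriodicTasks):
        def __init__(self):
            super().__init__(cfg.CONF)

        @periodic_task.periodic_task(spacing=60)  # seconds between runs
        def _poll_something(self, context):
            # Real tasks bail out early when their configured interval
            # is <= 0, like _reclaim_queued_deletes above.
            pass

    mgr = DemoManager()
    mgr.run_periodic_tasks(context=None)  # emits DEBUG lines like those above
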
Nov 24 10:01:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 10:01:20.063 142336 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 10:01:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 10:01:20.064 142336 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 10:01:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 10:01:20.064 142336 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
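
[annotation] The acquire/acquired/released triple above is oslo.concurrency's named-lock instrumentation around ProcessMonitor._check_child_processes; "held 0.000s" means the check found nothing to do. Equivalent usage in both forms (lockutils.lock and lockutils.synchronized are the real oslo_concurrency APIs; the function body is illustrative):

    from oslo_concurrency import lockutils

    # Context-manager form; logs the same acquire/release DEBUG lines
    # when oslo logging runs at DEBUG.
    with lockutils.lock('_check_child_processes'):
        pass  # inspect child processes

    # Decorator form, as neutron wraps _check_child_processes:
    @lockutils.synchronized('_check_child_processes')
    def check_children():
        pass
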
Nov 24 10:01:20 compute-1 ceph-mon[80009]: pgmap v1036: 353 pgs: 353 active+clean; 41 MiB data, 266 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.2 KiB/s wr, 96 op/s
Nov 24 10:01:20 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:01:20 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:01:20 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:01:20.554 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:01:21 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:01:21 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:01:21 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:01:21.132 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:01:21 compute-1 nova_compute[230010]: 2025-11-24 10:01:21.141 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:01:21 compute-1 ceph-mon[80009]: from='client.? 192.168.122.102:0/1254106227' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 10:01:21 compute-1 nova_compute[230010]: 2025-11-24 10:01:21.765 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 10:01:22 compute-1 ceph-mon[80009]: pgmap v1037: 353 pgs: 353 active+clean; 41 MiB data, 266 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.2 KiB/s wr, 96 op/s
Nov 24 10:01:22 compute-1 ceph-mon[80009]: from='client.? 192.168.122.102:0/1213079402' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 10:01:22 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:01:22 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:01:22 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:01:22.557 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:01:22 compute-1 nova_compute[230010]: 2025-11-24 10:01:22.765 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 10:01:22 compute-1 nova_compute[230010]: 2025-11-24 10:01:22.786 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 10:01:22 compute-1 nova_compute[230010]: 2025-11-24 10:01:22.786 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 10:01:22 compute-1 nova_compute[230010]: 2025-11-24 10:01:22.786 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 10:01:22 compute-1 nova_compute[230010]: 2025-11-24 10:01:22.786 230014 DEBUG nova.compute.resource_tracker [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 24 10:01:22 compute-1 nova_compute[230010]: 2025-11-24 10:01:22.787 230014 DEBUG oslo_concurrency.processutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 10:01:23 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:01:23 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:01:23 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:01:23.135 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:01:23 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 24 10:01:23 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1862029681' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 10:01:23 compute-1 nova_compute[230010]: 2025-11-24 10:01:23.215 230014 DEBUG oslo_concurrency.processutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.429s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
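
[annotation] Nova's resource-tracker audit (update_available_resource, above) shells out to the ceph CLI through oslo processutils, and the CMD ... returned: 0 in 0.429s line closes the first of two such calls in this pass. A sketch of the same command and the cluster-totals fields typically read from it (key names as emitted by `ceph df --format=json`):

    import json
    import subprocess

    raw = subprocess.check_output(
        ["ceph", "df", "--format=json", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"],
        text=True,
    )
    stats = json.loads(raw)["stats"]   # cluster-wide totals, in bytes
    gib = 1024 ** 3
    print("total %.1f GiB, avail %.1f GiB" %
          (stats["total_bytes"] / gib, stats["total_avail_bytes"] / gib))
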
Nov 24 10:01:23 compute-1 ceph-mon[80009]: from='client.? 192.168.122.101:0/1862029681' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 10:01:23 compute-1 podman[241256]: 2025-11-24 10:01:23.333423945 +0000 UTC m=+0.078976065 container health_status 6794d9d2f16f865643977633ea2bfec0506af6c4f15262c278b71a4c0754ea0b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 24 10:01:23 compute-1 nova_compute[230010]: 2025-11-24 10:01:23.384 230014 WARNING nova.virt.libvirt.driver [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 24 10:01:23 compute-1 nova_compute[230010]: 2025-11-24 10:01:23.385 230014 DEBUG nova.compute.resource_tracker [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4960MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 24 10:01:23 compute-1 nova_compute[230010]: 2025-11-24 10:01:23.385 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 10:01:23 compute-1 nova_compute[230010]: 2025-11-24 10:01:23.385 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 10:01:23 compute-1 nova_compute[230010]: 2025-11-24 10:01:23.436 230014 DEBUG nova.compute.resource_tracker [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 24 10:01:23 compute-1 nova_compute[230010]: 2025-11-24 10:01:23.436 230014 DEBUG nova.compute.resource_tracker [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 24 10:01:23 compute-1 nova_compute[230010]: 2025-11-24 10:01:23.451 230014 DEBUG oslo_concurrency.processutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 10:01:23 compute-1 sshd-session[241276]: Connection closed by 80.94.92.165 port 50140
Nov 24 10:01:23 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 24 10:01:23 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/518376627' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 10:01:23 compute-1 nova_compute[230010]: 2025-11-24 10:01:23.909 230014 DEBUG oslo_concurrency.processutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 10:01:23 compute-1 nova_compute[230010]: 2025-11-24 10:01:23.918 230014 DEBUG nova.compute.provider_tree [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Inventory has not changed in ProviderTree for provider: 1b7b0f22-dba8-42a8-9de3-763c9152946e update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 24 10:01:23 compute-1 nova_compute[230010]: 2025-11-24 10:01:23.951 230014 DEBUG nova.scheduler.client.report [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Inventory has not changed for provider 1b7b0f22-dba8-42a8-9de3-763c9152946e based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
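
[annotation] The inventory dict above is what the tracker reports to Placement, which sizes schedulable capacity as (total - reserved) * allocation_ratio. Checking that arithmetic against the logged values:

    # usable = (total - reserved) * allocation_ratio, per resource class
    inv = {
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7680, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 59,   'reserved': 1,   'allocation_ratio': 0.9},
    }
    for rc, v in inv.items():
        print(rc, (v['total'] - v['reserved']) * v['allocation_ratio'])
    # -> VCPU 32.0, MEMORY_MB 7168.0, DISK_GB 52.2
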
Nov 24 10:01:23 compute-1 nova_compute[230010]: 2025-11-24 10:01:23.953 230014 DEBUG nova.compute.resource_tracker [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 24 10:01:23 compute-1 nova_compute[230010]: 2025-11-24 10:01:23.953 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.568s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 10:01:24 compute-1 ceph-mon[80009]: pgmap v1038: 353 pgs: 353 active+clean; 41 MiB data, 266 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.2 KiB/s wr, 96 op/s
Nov 24 10:01:24 compute-1 ceph-mon[80009]: from='client.? 192.168.122.101:0/518376627' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 10:01:24 compute-1 nova_compute[230010]: 2025-11-24 10:01:24.257 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:01:24 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:01:24 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:01:24 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:01:24 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:01:24.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:01:24 compute-1 nova_compute[230010]: 2025-11-24 10:01:24.954 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 10:01:24 compute-1 nova_compute[230010]: 2025-11-24 10:01:24.954 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 10:01:24 compute-1 nova_compute[230010]: 2025-11-24 10:01:24.970 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 10:01:24 compute-1 nova_compute[230010]: 2025-11-24 10:01:24.970 230014 DEBUG nova.compute.manager [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 24 10:01:24 compute-1 nova_compute[230010]: 2025-11-24 10:01:24.970 230014 DEBUG nova.compute.manager [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 24 10:01:24 compute-1 nova_compute[230010]: 2025-11-24 10:01:24.984 230014 DEBUG nova.compute.manager [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 24 10:01:24 compute-1 nova_compute[230010]: 2025-11-24 10:01:24.984 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 10:01:24 compute-1 nova_compute[230010]: 2025-11-24 10:01:24.984 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 10:01:25 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:01:25 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:01:25 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:01:25.139 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:01:25 compute-1 ceph-mon[80009]: from='client.? 192.168.122.100:0/2072489161' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 10:01:25 compute-1 ceph-mon[80009]: from='client.? 192.168.122.100:0/3938711372' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 10:01:26 compute-1 nova_compute[230010]: 2025-11-24 10:01:26.143 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:01:26 compute-1 ceph-mon[80009]: pgmap v1039: 353 pgs: 353 active+clean; 41 MiB data, 266 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:01:26 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:01:26 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:01:26 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:01:26.565 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:01:27 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:01:27 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:01:27 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:01:27.141 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:01:27 compute-1 sudo[241301]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 10:01:27 compute-1 sudo[241301]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 10:01:27 compute-1 sudo[241301]: pam_unix(sudo:session): session closed for user root
Nov 24 10:01:28 compute-1 ceph-mon[80009]: pgmap v1040: 353 pgs: 353 active+clean; 41 MiB data, 266 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:01:28 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:01:28 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:01:28 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:01:28.568 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:01:29 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:01:29 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:01:29 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:01:29.143 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:01:29 compute-1 nova_compute[230010]: 2025-11-24 10:01:29.260 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:01:29 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:01:30 compute-1 ceph-mon[80009]: pgmap v1041: 353 pgs: 353 active+clean; 41 MiB data, 266 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:01:30 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 10:01:30 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 10:01:30 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:01:30 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:01:30 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:01:30.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:01:31 compute-1 nova_compute[230010]: 2025-11-24 10:01:31.145 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:01:31 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:01:31 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:01:31 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:01:31.145 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:01:31 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 10:01:32 compute-1 ceph-mon[80009]: pgmap v1042: 353 pgs: 353 active+clean; 41 MiB data, 266 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:01:32 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:01:32 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:01:32 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:01:32.574 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:01:33 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:01:33 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:01:33 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:01:33.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:01:34 compute-1 nova_compute[230010]: 2025-11-24 10:01:34.261 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:01:34 compute-1 ceph-mon[80009]: pgmap v1043: 353 pgs: 353 active+clean; 41 MiB data, 266 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:01:34 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:01:34 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:01:34 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:01:34 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:01:34.577 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:01:35 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:01:35 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:01:35 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:01:35.152 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:01:36 compute-1 sudo[241331]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 10:01:36 compute-1 sudo[241331]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 10:01:36 compute-1 sudo[241331]: pam_unix(sudo:session): session closed for user root
Nov 24 10:01:36 compute-1 nova_compute[230010]: 2025-11-24 10:01:36.146 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:01:36 compute-1 sudo[241356]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ls
Nov 24 10:01:36 compute-1 sudo[241356]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 10:01:36 compute-1 ceph-mon[80009]: pgmap v1044: 353 pgs: 353 active+clean; 41 MiB data, 266 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:01:36 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:01:36 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:01:36 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:01:36.580 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:01:36 compute-1 podman[241455]: 2025-11-24 10:01:36.729149787 +0000 UTC m=+0.057255933 container exec fca3d6a645ca50145f34396c21cf8798c75622ec7e27bb7d7b9d2df471762abc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-crash-compute-1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 10:01:36 compute-1 podman[241455]: 2025-11-24 10:01:36.827753091 +0000 UTC m=+0.155859237 container exec_died fca3d6a645ca50145f34396c21cf8798c75622ec7e27bb7d7b9d2df471762abc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-crash-compute-1, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 10:01:37 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:01:37 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:01:37 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:01:37.154 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:01:37 compute-1 podman[241596]: 2025-11-24 10:01:37.307736074 +0000 UTC m=+0.050622480 container exec 8385dba62896146966763f0bcd6866f05f5474182998a6b8c2dabcbf77545f8c (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-node-exporter-compute-1, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 24 10:01:37 compute-1 podman[241596]: 2025-11-24 10:01:37.318667562 +0000 UTC m=+0.061553948 container exec_died 8385dba62896146966763f0bcd6866f05f5474182998a6b8c2dabcbf77545f8c (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-node-exporter-compute-1, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 24 10:01:37 compute-1 podman[241715]: 2025-11-24 10:01:37.696877943 +0000 UTC m=+0.062496412 container exec 5e659f329edd66b319b97f09144add025da99dc20b0b6d44046c2f8d632eb914 (image=quay.io/ceph/haproxy:2.3, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-haproxy-nfs-cephfs-compute-1-rsdpvy)
Nov 24 10:01:37 compute-1 podman[241737]: 2025-11-24 10:01:37.763546796 +0000 UTC m=+0.050900248 container exec_died 5e659f329edd66b319b97f09144add025da99dc20b0b6d44046c2f8d632eb914 (image=quay.io/ceph/haproxy:2.3, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-haproxy-nfs-cephfs-compute-1-rsdpvy)
Nov 24 10:01:37 compute-1 podman[241715]: 2025-11-24 10:01:37.776759469 +0000 UTC m=+0.142377958 container exec_died 5e659f329edd66b319b97f09144add025da99dc20b0b6d44046c2f8d632eb914 (image=quay.io/ceph/haproxy:2.3, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-haproxy-nfs-cephfs-compute-1-rsdpvy)
Nov 24 10:01:38 compute-1 podman[241782]: 2025-11-24 10:01:38.00302162 +0000 UTC m=+0.065954286 container exec b150f4574d15a215dc003733c271f0cef75e4de7b269181ad25614a88f483866 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-keepalived-nfs-cephfs-compute-1-vrgskq, architecture=x86_64, io.k8s.display-name=Keepalived on RHEL 9, io.openshift.tags=Ceph keepalived, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, vcs-type=git, name=keepalived, build-date=2023-02-22T09:23:20, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, description=keepalived for Ceph, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=2.2.4, distribution-scope=public, io.buildah.version=1.28.2, summary=Provides keepalived on RHEL 9 for Ceph., vendor=Red Hat, Inc., com.redhat.component=keepalived-container, release=1793)
Nov 24 10:01:38 compute-1 podman[241782]: 2025-11-24 10:01:38.022932937 +0000 UTC m=+0.085865623 container exec_died b150f4574d15a215dc003733c271f0cef75e4de7b269181ad25614a88f483866 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-keepalived-nfs-cephfs-compute-1-vrgskq, com.redhat.component=keepalived-container, distribution-scope=public, io.openshift.tags=Ceph keepalived, vendor=Red Hat, Inc., build-date=2023-02-22T09:23:20, io.openshift.expose-services=, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, name=keepalived, architecture=x86_64, version=2.2.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.28.2, summary=Provides keepalived on RHEL 9 for Ceph., description=keepalived for Ceph, io.k8s.display-name=Keepalived on RHEL 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, release=1793, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-type=git)
Nov 24 10:01:38 compute-1 sudo[241356]: pam_unix(sudo:session): session closed for user root
Nov 24 10:01:38 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Nov 24 10:01:38 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Nov 24 10:01:38 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:01:38 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:01:38 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:01:38.583 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:01:38 compute-1 sudo[241817]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 10:01:38 compute-1 sudo[241817]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 10:01:38 compute-1 sudo[241817]: pam_unix(sudo:session): session closed for user root
Nov 24 10:01:38 compute-1 sudo[241842]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Nov 24 10:01:38 compute-1 sudo[241842]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 10:01:38 compute-1 ceph-mon[80009]: pgmap v1045: 353 pgs: 353 active+clean; 41 MiB data, 266 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:01:38 compute-1 ceph-mon[80009]: from='client.? 192.168.122.100:0/2501652805' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 10:01:38 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 10:01:39 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:01:39 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:01:39 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:01:39.157 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:01:39 compute-1 sudo[241842]: pam_unix(sudo:session): session closed for user root
Nov 24 10:01:39 compute-1 nova_compute[230010]: 2025-11-24 10:01:39.263 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:01:39 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0)
Nov 24 10:01:39 compute-1 ceph-mon[80009]: log_channel(audit) log [INF] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Nov 24 10:01:39 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:01:39 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 10:01:39 compute-1 ceph-mon[80009]: pgmap v1046: 353 pgs: 353 active+clean; 41 MiB data, 266 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:01:39 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Nov 24 10:01:39 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Nov 24 10:01:40 compute-1 podman[241899]: 2025-11-24 10:01:40.326330568 +0000 UTC m=+0.066565360 container health_status 16798e42088c21440960ac0ba4f86339f9edab5c078277a1dcf4fb01c220329c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_managed=true, config_id=multipathd, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0)
Nov 24 10:01:40 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:01:40 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:01:40 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:01:40.586 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:01:40 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Nov 24 10:01:40 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Nov 24 10:01:41 compute-1 nova_compute[230010]: 2025-11-24 10:01:41.148 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:01:41 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:01:41 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:01:41 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:01:41.161 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:01:41 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0)
Nov 24 10:01:41 compute-1 ceph-mon[80009]: log_channel(audit) log [INF] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Nov 24 10:01:41 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 24 10:01:41 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 10:01:41 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Nov 24 10:01:41 compute-1 ceph-mon[80009]: log_channel(audit) log [INF] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 10:01:41 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Nov 24 10:01:42 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:01:42 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 10:01:42 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:01:42.590 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 10:01:43 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:01:43 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:01:43 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:01:43.164 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:01:44 compute-1 nova_compute[230010]: 2025-11-24 10:01:44.265 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:01:44 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:01:44 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:01:44 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:01:44 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:01:44.593 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:01:45 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:01:45 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:01:45 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:01:45.167 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:01:45 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 10:01:45 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 10:01:46 compute-1 nova_compute[230010]: 2025-11-24 10:01:46.150 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:01:46 compute-1 podman[241922]: 2025-11-24 10:01:46.413265533 +0000 UTC m=+0.138016170 container health_status c4a7fece2a8abbd773f184380ebf0443df5298e1c3e7a580495db5c46749acd2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, io.buildah.version=1.41.3)
Nov 24 10:01:46 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:01:46 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:01:46 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:01:46.596 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:01:46 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Nov 24 10:01:46 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 10:01:46 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 10:01:46 compute-1 ceph-mon[80009]: pgmap v1047: 353 pgs: 353 active+clean; 41 MiB data, 266 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:01:46 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Nov 24 10:01:46 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Nov 24 10:01:46 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 10:01:46 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 10:01:46 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Nov 24 10:01:46 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 10:01:46 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Nov 24 10:01:46 compute-1 ceph-mon[80009]: log_channel(audit) log [INF] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 10:01:46 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 24 10:01:46 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 10:01:47 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:01:47 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:01:47 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:01:47.170 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:01:47 compute-1 ceph-mon[80009]: pgmap v1048: 353 pgs: 353 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 2.0 MiB/s wr, 32 op/s
Nov 24 10:01:47 compute-1 ceph-mon[80009]: pgmap v1049: 353 pgs: 353 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 2.0 MiB/s wr, 31 op/s
Nov 24 10:01:47 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 10:01:47 compute-1 ceph-mon[80009]: pgmap v1050: 353 pgs: 353 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 2.0 MiB/s wr, 31 op/s
Nov 24 10:01:47 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 10:01:47 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 10:01:47 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 10:01:47 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 10:01:47 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 10:01:47 compute-1 ceph-mon[80009]: from='client.? 192.168.122.100:0/2817330001' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 24 10:01:47 compute-1 ceph-mon[80009]: from='client.? 192.168.122.100:0/984696211' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 24 10:01:48 compute-1 sudo[241949]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 10:01:48 compute-1 sudo[241949]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 10:01:48 compute-1 sudo[241949]: pam_unix(sudo:session): session closed for user root
Nov 24 10:01:48 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:01:48 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:01:48 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:01:48.599 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:01:48 compute-1 ceph-mon[80009]: pgmap v1051: 353 pgs: 353 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 2.0 MiB/s wr, 31 op/s
Nov 24 10:01:49 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:01:49 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:01:49 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:01:49.173 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:01:49 compute-1 nova_compute[230010]: 2025-11-24 10:01:49.268 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:01:49 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:01:50 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:01:50 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:01:50 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:01:50.602 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:01:50 compute-1 ceph-mon[80009]: pgmap v1052: 353 pgs: 353 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 2.0 MiB/s wr, 31 op/s
Nov 24 10:01:51 compute-1 nova_compute[230010]: 2025-11-24 10:01:51.152 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:01:51 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:01:51 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:01:51 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:01:51.176 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:01:51 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 24 10:01:51 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 24 10:01:52 compute-1 sudo[241976]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 24 10:01:52 compute-1 sudo[241976]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 10:01:52 compute-1 sudo[241976]: pam_unix(sudo:session): session closed for user root
Nov 24 10:01:52 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:01:52 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:01:52 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:01:52.605 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:01:52 compute-1 ceph-mon[80009]: pgmap v1053: 353 pgs: 353 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1.6 MiB/s rd, 2.0 MiB/s wr, 96 op/s
Nov 24 10:01:52 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 10:01:52 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 10:01:53 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:01:53 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:01:53 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:01:53.179 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:01:54 compute-1 nova_compute[230010]: 2025-11-24 10:01:54.269 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:01:54 compute-1 podman[242002]: 2025-11-24 10:01:54.324690414 +0000 UTC m=+0.061882086 container health_status 6794d9d2f16f865643977633ea2bfec0506af6c4f15262c278b71a4c0754ea0b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_managed=true)
Nov 24 10:01:54 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:01:54 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:01:54 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:01:54 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:01:54.608 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:01:54 compute-1 ceph-mon[80009]: pgmap v1054: 353 pgs: 353 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1.4 MiB/s rd, 12 KiB/s wr, 58 op/s
Nov 24 10:01:55 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:01:55 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:01:55 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:01:55.182 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:01:56 compute-1 nova_compute[230010]: 2025-11-24 10:01:56.153 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:01:56 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:01:56 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:01:56 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:01:56.610 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:01:56 compute-1 ceph-mon[80009]: pgmap v1055: 353 pgs: 353 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1.4 MiB/s rd, 12 KiB/s wr, 58 op/s
Nov 24 10:01:57 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:01:57 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:01:57 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:01:57.186 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:01:58 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:01:58 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:01:58 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:01:58.612 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:01:58 compute-1 ceph-mon[80009]: pgmap v1056: 353 pgs: 353 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 75 op/s
Nov 24 10:01:59 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:01:59 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:01:59 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:01:59.189 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:01:59 compute-1 nova_compute[230010]: 2025-11-24 10:01:59.272 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:01:59 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:02:00 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 10:02:00 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 10:02:00 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:02:00 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:02:00 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:02:00.615 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:02:00 compute-1 ceph-mon[80009]: pgmap v1057: 353 pgs: 353 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Nov 24 10:02:00 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 10:02:01 compute-1 nova_compute[230010]: 2025-11-24 10:02:01.154 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:02:01 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:02:01 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:02:01 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:02:01.193 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:02:02 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:02:02 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:02:02 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:02:02.618 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:02:02 compute-1 ceph-mon[80009]: pgmap v1058: 353 pgs: 353 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 76 op/s
Nov 24 10:02:02 compute-1 ceph-mon[80009]: from='client.? 192.168.122.10:0/2570801218' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 24 10:02:02 compute-1 ceph-mon[80009]: from='client.? 192.168.122.10:0/2570801218' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 24 10:02:03 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:02:03 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:02:03 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:02:03.195 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:02:04 compute-1 nova_compute[230010]: 2025-11-24 10:02:04.275 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:02:04 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:02:04 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:02:04 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:02:04 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:02:04.620 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:02:04 compute-1 ceph-mon[80009]: pgmap v1059: 353 pgs: 353 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 525 KiB/s rd, 18 op/s
Nov 24 10:02:05 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:02:05 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:02:05 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:02:05.199 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:02:06 compute-1 nova_compute[230010]: 2025-11-24 10:02:06.156 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:02:06 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:02:06 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:02:06 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:02:06.624 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:02:06 compute-1 ceph-mon[80009]: pgmap v1060: 353 pgs: 353 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 525 KiB/s rd, 18 op/s
Nov 24 10:02:07 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:02:07 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:02:07 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:02:07.201 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:02:08 compute-1 sudo[242028]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 10:02:08 compute-1 sudo[242028]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 10:02:08 compute-1 sudo[242028]: pam_unix(sudo:session): session closed for user root
Nov 24 10:02:08 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:02:08 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:02:08 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:02:08.627 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:02:08 compute-1 ceph-mon[80009]: pgmap v1061: 353 pgs: 353 active+clean; 121 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 907 KiB/s rd, 2.1 MiB/s wr, 83 op/s
Nov 24 10:02:09 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:02:09 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:02:09 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:02:09.204 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:02:09 compute-1 nova_compute[230010]: 2025-11-24 10:02:09.277 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:02:09 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:02:10 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:02:10 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:02:10 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:02:10.629 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:02:10 compute-1 ceph-mon[80009]: pgmap v1062: 353 pgs: 353 active+clean; 121 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 386 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Nov 24 10:02:11 compute-1 nova_compute[230010]: 2025-11-24 10:02:11.158 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:02:11 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:02:11 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:02:11 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:02:11.207 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:02:11 compute-1 podman[242054]: 2025-11-24 10:02:11.317086457 +0000 UTC m=+0.055960171 container health_status 16798e42088c21440960ac0ba4f86339f9edab5c078277a1dcf4fb01c220329c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible)
Nov 24 10:02:11 compute-1 nova_compute[230010]: 2025-11-24 10:02:11.765 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 10:02:12 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:02:12 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:02:12 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:02:12.632 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:02:12 compute-1 ceph-mon[80009]: pgmap v1063: 353 pgs: 353 active+clean; 121 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 389 KiB/s rd, 2.1 MiB/s wr, 67 op/s
Nov 24 10:02:13 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:02:13 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:02:13 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:02:13.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:02:14 compute-1 nova_compute[230010]: 2025-11-24 10:02:14.278 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:02:14 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:02:14 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:02:14 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:02:14 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:02:14.635 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:02:15 compute-1 ceph-mon[80009]: pgmap v1064: 353 pgs: 353 active+clean; 121 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 385 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Nov 24 10:02:15 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:02:15 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:02:15 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:02:15.213 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:02:15 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 10:02:15 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 10:02:15 compute-1 ovn_metadata_agent[142331]: 2025-11-24 10:02:15.737 142336 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=11, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '86:13:51', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '4e:f0:a8:6f:5e:1b'}, ipsec=False) old=SB_Global(nb_cfg=10) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 24 10:02:15 compute-1 nova_compute[230010]: 2025-11-24 10:02:15.737 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:02:15 compute-1 ovn_metadata_agent[142331]: 2025-11-24 10:02:15.738 142336 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 24 10:02:16 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 10:02:16 compute-1 nova_compute[230010]: 2025-11-24 10:02:16.161 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:02:16 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:02:16 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:02:16 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:02:16.637 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:02:16 compute-1 nova_compute[230010]: 2025-11-24 10:02:16.777 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 10:02:17 compute-1 ceph-mon[80009]: pgmap v1065: 353 pgs: 353 active+clean; 121 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 385 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Nov 24 10:02:17 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:02:17 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:02:17 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:02:17.216 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:02:17 compute-1 podman[242077]: 2025-11-24 10:02:17.357115517 +0000 UTC m=+0.093525642 container health_status c4a7fece2a8abbd773f184380ebf0443df5298e1c3e7a580495db5c46749acd2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 24 10:02:18 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:02:18 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:02:18 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:02:18.640 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:02:18 compute-1 nova_compute[230010]: 2025-11-24 10:02:18.765 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 10:02:19 compute-1 ceph-mon[80009]: pgmap v1066: 353 pgs: 353 active+clean; 121 MiB data, 311 MiB used, 60 GiB / 60 GiB avail; 386 KiB/s rd, 2.1 MiB/s wr, 67 op/s
Nov 24 10:02:19 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:02:19 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:02:19 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:02:19.218 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:02:19 compute-1 nova_compute[230010]: 2025-11-24 10:02:19.324 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:02:19 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
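The _set_new_cache_sizes entries recur every few seconds throughout this window; they appear to be the monitor's memory autotuner re-splitting its roughly 1 GiB cache target between incremental osdmaps, full osdmaps, and the RocksDB KV cache. A quick arithmetic check on the logged numbers (values copied from the line above):

    # The three allocations roughly partition the cache_size target.
    inc, full, kv = 343932928, 348127232, 318767104
    print(inc + full + kv)  # 1010827264 bytes, close to cache_size 1020054731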
Nov 24 10:02:20 compute-1 ceph-mon[80009]: from='client.? 192.168.122.100:0/4189311379' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 10:02:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 10:02:20.065 142336 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 10:02:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 10:02:20.065 142336 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 10:02:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 10:02:20.065 142336 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
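The acquire/acquired/released trio above is standard oslo.concurrency lock tracing: the ProcessMonitor serializes its child-process liveness check under a named lock, and oslo logs the wait and hold times. A minimal sketch of the pattern (not Neutron's code; the function body is a placeholder):

    from oslo_concurrency import lockutils

    # Entering the function takes the named lock; oslo emits the same
    # "Acquiring" / "acquired" / "released" lines seen above.
    @lockutils.synchronized('_check_child_processes')
    def check_child_processes():
        pass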
Nov 24 10:02:20 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:02:20 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:02:20 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:02:20.643 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:02:20 compute-1 nova_compute[230010]: 2025-11-24 10:02:20.764 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 10:02:20 compute-1 nova_compute[230010]: 2025-11-24 10:02:20.764 230014 DEBUG nova.compute.manager [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
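Most of the nova_compute chatter in this window is run_periodic_tasks walking the ComputeManager's registered tasks; _reclaim_queued_deletes exits immediately because reclaim_instance_interval is unset (<= 0). A hedged sketch of how such tasks are declared with oslo.service (class name and spacing are illustrative):

    from oslo_service import periodic_task

    class ExampleManager(periodic_task.PeriodicTasks):
        # run_periodic_tasks() invokes this on each tick once `spacing`
        # seconds have elapsed since the previous run.
        @periodic_task.periodic_task(spacing=60)
        def _poll_something(self, context):
            pass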
Nov 24 10:02:21 compute-1 ceph-mon[80009]: pgmap v1067: 353 pgs: 353 active+clean; 121 MiB data, 311 MiB used, 60 GiB / 60 GiB avail; 4.0 KiB/s rd, 18 KiB/s wr, 2 op/s
Nov 24 10:02:21 compute-1 nova_compute[230010]: 2025-11-24 10:02:21.163 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:02:21 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:02:21 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:02:21 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:02:21.221 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:02:21 compute-1 ovn_metadata_agent[142331]: 2025-11-24 10:02:21.740 142336 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=803b139a-7fca-4549-8597-645cf677225d, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '11'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
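This transaction is the OVN metadata agent's liveness acknowledgement: after processing southbound updates it stamps neutron:ovn-metadata-sb-cfg (here '11') into the external_ids of its own Chassis_Private row, which Neutron reads to decide the agent is alive. A hedged fragment mirroring the logged DbSetCommand (assumes sb_api is an already-connected ovsdbapp southbound API object; connection setup omitted):

    # Record UUID and sequence number copied from the log line above.
    sb_api.db_set(
        'Chassis_Private', '803b139a-7fca-4549-8597-645cf677225d',
        ('external_ids', {'neutron:ovn-metadata-sb-cfg': '11'})
    ).execute(check_error=True)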
Nov 24 10:02:22 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:02:22 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:02:22 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:02:22.646 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:02:23 compute-1 ceph-mon[80009]: pgmap v1068: 353 pgs: 353 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 19 KiB/s wr, 30 op/s
Nov 24 10:02:23 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:02:23 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:02:23 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:02:23.225 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:02:23 compute-1 nova_compute[230010]: 2025-11-24 10:02:23.764 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 10:02:23 compute-1 nova_compute[230010]: 2025-11-24 10:02:23.765 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 10:02:23 compute-1 nova_compute[230010]: 2025-11-24 10:02:23.765 230014 DEBUG nova.compute.manager [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Nov 24 10:02:23 compute-1 nova_compute[230010]: 2025-11-24 10:02:23.777 230014 DEBUG nova.compute.manager [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Nov 24 10:02:24 compute-1 ceph-mon[80009]: from='client.? 192.168.122.102:0/2520763458' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 10:02:24 compute-1 nova_compute[230010]: 2025-11-24 10:02:24.325 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:02:24 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:02:24 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:02:24 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:02:24 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:02:24.649 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:02:24 compute-1 nova_compute[230010]: 2025-11-24 10:02:24.772 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 10:02:24 compute-1 nova_compute[230010]: 2025-11-24 10:02:24.772 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 10:02:24 compute-1 nova_compute[230010]: 2025-11-24 10:02:24.772 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 10:02:24 compute-1 nova_compute[230010]: 2025-11-24 10:02:24.795 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 10:02:24 compute-1 nova_compute[230010]: 2025-11-24 10:02:24.795 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 10:02:24 compute-1 nova_compute[230010]: 2025-11-24 10:02:24.796 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 10:02:24 compute-1 nova_compute[230010]: 2025-11-24 10:02:24.796 230014 DEBUG nova.compute.resource_tracker [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 24 10:02:24 compute-1 nova_compute[230010]: 2025-11-24 10:02:24.796 230014 DEBUG oslo_concurrency.processutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 10:02:25 compute-1 ceph-mon[80009]: pgmap v1069: 353 pgs: 353 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 6.5 KiB/s wr, 28 op/s
Nov 24 10:02:25 compute-1 ceph-mon[80009]: from='client.? 192.168.122.102:0/281032063' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 10:02:25 compute-1 ceph-mon[80009]: from='client.? 192.168.122.100:0/2457659003' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 10:02:25 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 24 10:02:25 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3755230538' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 10:02:25 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:02:25 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:02:25 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:02:25.228 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:02:25 compute-1 nova_compute[230010]: 2025-11-24 10:02:25.231 230014 DEBUG oslo_concurrency.processutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.435s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
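The ceph df subprocess bracketing the resource audit is how the RBD image backend measures available capacity; the 0.4-0.5 s round trips here line up with the client.openstack df dispatches the peon monitor logs. A minimal sketch of the same probe (key names follow `ceph df --format=json` output; cluster-wide stats rather than per-pool, for brevity):

    import json
    from oslo_concurrency import processutils

    out, _err = processutils.execute(
        'ceph', 'df', '--format=json',
        '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')
    stats = json.loads(out)
    print(stats['stats']['total_avail_bytes'])  # free bytes across the cluster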
Nov 24 10:02:25 compute-1 podman[242130]: 2025-11-24 10:02:25.314259567 +0000 UTC m=+0.049924163 container health_status 6794d9d2f16f865643977633ea2bfec0506af6c4f15262c278b71a4c0754ea0b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 24 10:02:25 compute-1 nova_compute[230010]: 2025-11-24 10:02:25.374 230014 WARNING nova.virt.libvirt.driver [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 24 10:02:25 compute-1 nova_compute[230010]: 2025-11-24 10:02:25.375 230014 DEBUG nova.compute.resource_tracker [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4961MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 24 10:02:25 compute-1 nova_compute[230010]: 2025-11-24 10:02:25.375 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 10:02:25 compute-1 nova_compute[230010]: 2025-11-24 10:02:25.375 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 10:02:25 compute-1 nova_compute[230010]: 2025-11-24 10:02:25.496 230014 DEBUG nova.compute.resource_tracker [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 24 10:02:25 compute-1 nova_compute[230010]: 2025-11-24 10:02:25.496 230014 DEBUG nova.compute.resource_tracker [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 24 10:02:25 compute-1 nova_compute[230010]: 2025-11-24 10:02:25.554 230014 DEBUG oslo_concurrency.processutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 10:02:26 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 24 10:02:26 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3722196526' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 10:02:26 compute-1 nova_compute[230010]: 2025-11-24 10:02:26.050 230014 DEBUG oslo_concurrency.processutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.496s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 10:02:26 compute-1 nova_compute[230010]: 2025-11-24 10:02:26.059 230014 DEBUG nova.compute.provider_tree [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Inventory has not changed in ProviderTree for provider: 1b7b0f22-dba8-42a8-9de3-763c9152946e update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 24 10:02:26 compute-1 nova_compute[230010]: 2025-11-24 10:02:26.073 230014 DEBUG nova.scheduler.client.report [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Inventory has not changed for provider 1b7b0f22-dba8-42a8-9de3-763c9152946e based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 24 10:02:26 compute-1 nova_compute[230010]: 2025-11-24 10:02:26.076 230014 DEBUG nova.compute.resource_tracker [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 24 10:02:26 compute-1 nova_compute[230010]: 2025-11-24 10:02:26.076 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.701s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
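The inventory dict reported to placement above fixes this node's schedulable capacity: placement exposes (total - reserved) * allocation_ratio per resource class. A worked check with the logged values:

    vcpu_total, vcpu_ratio = 8, 4.0
    mem_total, mem_reserved, mem_ratio = 7680, 512, 1.0    # MiB
    disk_total, disk_reserved, disk_ratio = 59, 1, 0.9     # GiB

    print(vcpu_total * vcpu_ratio)                    # 32.0 schedulable vCPUs
    print((mem_total - mem_reserved) * mem_ratio)     # 7168.0 MiB for guests
    print((disk_total - disk_reserved) * disk_ratio)  # 52.2 GiB for guests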
Nov 24 10:02:26 compute-1 ceph-mon[80009]: from='client.? 192.168.122.100:0/614630583' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 10:02:26 compute-1 ceph-mon[80009]: from='client.? 192.168.122.101:0/3755230538' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 10:02:26 compute-1 ceph-mon[80009]: from='client.? 192.168.122.101:0/3722196526' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 10:02:26 compute-1 nova_compute[230010]: 2025-11-24 10:02:26.166 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:02:26 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:02:26 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:02:26 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:02:26.652 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:02:27 compute-1 nova_compute[230010]: 2025-11-24 10:02:27.070 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 10:02:27 compute-1 nova_compute[230010]: 2025-11-24 10:02:27.070 230014 DEBUG nova.compute.manager [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 24 10:02:27 compute-1 nova_compute[230010]: 2025-11-24 10:02:27.070 230014 DEBUG nova.compute.manager [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 24 10:02:27 compute-1 nova_compute[230010]: 2025-11-24 10:02:27.083 230014 DEBUG nova.compute.manager [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 24 10:02:27 compute-1 nova_compute[230010]: 2025-11-24 10:02:27.083 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 10:02:27 compute-1 ceph-mon[80009]: pgmap v1070: 353 pgs: 353 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 6.5 KiB/s wr, 28 op/s
Nov 24 10:02:27 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:02:27 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:02:27 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:02:27.231 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:02:27 compute-1 nova_compute[230010]: 2025-11-24 10:02:27.765 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 10:02:27 compute-1 nova_compute[230010]: 2025-11-24 10:02:27.765 230014 DEBUG nova.compute.manager [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Nov 24 10:02:28 compute-1 sudo[242173]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 10:02:28 compute-1 sudo[242173]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 10:02:28 compute-1 sudo[242173]: pam_unix(sudo:session): session closed for user root
Nov 24 10:02:28 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:02:28 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:02:28 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:02:28.655 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:02:29 compute-1 ceph-mon[80009]: pgmap v1071: 353 pgs: 353 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 6.5 KiB/s wr, 29 op/s
Nov 24 10:02:29 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:02:29 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 10:02:29 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:02:29.233 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 10:02:29 compute-1 nova_compute[230010]: 2025-11-24 10:02:29.327 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:02:29 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:02:30 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 10:02:30 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 10:02:30 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:02:30 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:02:30 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:02:30.659 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:02:31 compute-1 nova_compute[230010]: 2025-11-24 10:02:31.170 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:02:31 compute-1 ceph-mon[80009]: pgmap v1072: 353 pgs: 353 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Nov 24 10:02:31 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 10:02:31 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:02:31 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:02:31 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:02:31.237 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:02:32 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:02:32 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:02:32 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:02:32.662 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:02:33 compute-1 ceph-mon[80009]: pgmap v1073: 353 pgs: 353 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.2 KiB/s wr, 29 op/s
Nov 24 10:02:33 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:02:33 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:02:33 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:02:33.240 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:02:34 compute-1 nova_compute[230010]: 2025-11-24 10:02:34.362 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:02:34 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:02:34 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:02:34 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:02:34 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:02:34.665 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:02:35 compute-1 ceph-mon[80009]: pgmap v1074: 353 pgs: 353 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:02:35 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:02:35 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:02:35 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:02:35.243 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:02:36 compute-1 nova_compute[230010]: 2025-11-24 10:02:36.171 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:02:36 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:02:36 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:02:36 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:02:36.667 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:02:37 compute-1 ceph-mon[80009]: pgmap v1075: 353 pgs: 353 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:02:37 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:02:37 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:02:37 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:02:37.247 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:02:38 compute-1 ceph-mon[80009]: from='client.? 192.168.122.100:0/3448334267' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 10:02:38 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:02:38 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:02:38 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:02:38.670 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:02:39 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:02:39 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:02:39 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:02:39.250 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:02:39 compute-1 ceph-mon[80009]: pgmap v1076: 353 pgs: 353 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:02:39 compute-1 nova_compute[230010]: 2025-11-24 10:02:39.364 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:02:39 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:02:40 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:02:40 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:02:40 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:02:40.674 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:02:41 compute-1 nova_compute[230010]: 2025-11-24 10:02:41.172 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:02:41 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:02:41 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:02:41 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:02:41.253 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:02:41 compute-1 ceph-mon[80009]: pgmap v1077: 353 pgs: 353 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:02:42 compute-1 podman[242205]: 2025-11-24 10:02:42.329843425 +0000 UTC m=+0.072235440 container health_status 16798e42088c21440960ac0ba4f86339f9edab5c078277a1dcf4fb01c220329c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 24 10:02:42 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:02:42 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:02:42 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:02:42.676 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:02:43 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:02:43 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:02:43 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:02:43.256 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:02:43 compute-1 ceph-mon[80009]: pgmap v1078: 353 pgs: 353 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Nov 24 10:02:44 compute-1 ceph-mon[80009]: from='client.? 192.168.122.100:0/2006428667' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 24 10:02:44 compute-1 ceph-mon[80009]: from='client.? 192.168.122.100:0/1474341597' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 24 10:02:44 compute-1 nova_compute[230010]: 2025-11-24 10:02:44.366 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:02:44 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:02:44 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:02:44 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:02:44 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:02:44.679 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:02:45 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:02:45 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:02:45 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:02:45.258 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:02:45 compute-1 ceph-mon[80009]: pgmap v1079: 353 pgs: 353 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Nov 24 10:02:45 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 10:02:45 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 10:02:46 compute-1 nova_compute[230010]: 2025-11-24 10:02:46.177 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:02:46 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 10:02:46 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:02:46 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:02:46 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:02:46.681 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:02:47 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:02:47 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:02:47 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:02:47.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:02:47 compute-1 ceph-mon[80009]: pgmap v1080: 353 pgs: 353 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Nov 24 10:02:48 compute-1 sudo[242235]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 10:02:48 compute-1 sudo[242235]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 10:02:48 compute-1 sudo[242235]: pam_unix(sudo:session): session closed for user root
Nov 24 10:02:48 compute-1 podman[242228]: 2025-11-24 10:02:48.379313505 +0000 UTC m=+0.110069166 container health_status c4a7fece2a8abbd773f184380ebf0443df5298e1c3e7a580495db5c46749acd2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=ovn_controller)
Nov 24 10:02:48 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:02:48 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:02:48 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:02:48.685 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:02:49 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:02:49 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:02:49 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:02:49.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:02:49 compute-1 ceph-mon[80009]: pgmap v1081: 353 pgs: 353 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 96 KiB/s rd, 1.8 MiB/s wr, 41 op/s
Nov 24 10:02:49 compute-1 nova_compute[230010]: 2025-11-24 10:02:49.410 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:02:49 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:02:50 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:02:50 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:02:50 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:02:50.688 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:02:51 compute-1 nova_compute[230010]: 2025-11-24 10:02:51.178 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:02:51 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:02:51 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:02:51 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:02:51.267 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:02:51 compute-1 ceph-mon[80009]: pgmap v1082: 353 pgs: 353 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 95 KiB/s rd, 1.8 MiB/s wr, 40 op/s
Nov 24 10:02:52 compute-1 sudo[242282]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 10:02:52 compute-1 sudo[242282]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 10:02:52 compute-1 sudo[242282]: pam_unix(sudo:session): session closed for user root
Nov 24 10:02:52 compute-1 sudo[242307]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Nov 24 10:02:52 compute-1 sudo[242307]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 10:02:52 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:02:52 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:02:52 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:02:52.690 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:02:52 compute-1 sudo[242307]: pam_unix(sudo:session): session closed for user root
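The ceph-admin sudo bursts are cephadm's management loop: /bin/true is its reachability probe, and the copied cephadm binary under /var/lib/ceph/<fsid>/ is then run with gather-facts to refresh host inventory. An illustrative sketch of that invocation (path and digest copied from the log; treating the output as JSON is an assumption about gather-facts):

    import json
    import subprocess

    out = subprocess.run(
        ['sudo', '/bin/python3',
         '/var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/'
         'cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36',
         '--timeout', '895', 'gather-facts'],
        capture_output=True, text=True, check=True).stdout
    facts = json.loads(out)  # host inventory: hostname, CPUs, memory, ...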
Nov 24 10:02:52 compute-1 nova_compute[230010]: 2025-11-24 10:02:52.875 230014 DEBUG oslo_concurrency.lockutils [None req-86b4efd8-e845-47b8-839d-e817d119699f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Acquiring lock "16f34aac-788f-4079-9636-0db2c8de6422" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 10:02:52 compute-1 nova_compute[230010]: 2025-11-24 10:02:52.877 230014 DEBUG oslo_concurrency.lockutils [None req-86b4efd8-e845-47b8-839d-e817d119699f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lock "16f34aac-788f-4079-9636-0db2c8de6422" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 10:02:52 compute-1 nova_compute[230010]: 2025-11-24 10:02:52.894 230014 DEBUG nova.compute.manager [None req-86b4efd8-e845-47b8-839d-e817d119699f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 16f34aac-788f-4079-9636-0db2c8de6422] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 24 10:02:52 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 24 10:02:52 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 10:02:52 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Nov 24 10:02:52 compute-1 ceph-mon[80009]: log_channel(audit) log [INF] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 10:02:52 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Nov 24 10:02:52 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Nov 24 10:02:53 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Nov 24 10:02:53 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 10:02:53 compute-1 nova_compute[230010]: 2025-11-24 10:02:53.017 230014 DEBUG oslo_concurrency.lockutils [None req-86b4efd8-e845-47b8-839d-e817d119699f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 10:02:53 compute-1 nova_compute[230010]: 2025-11-24 10:02:53.018 230014 DEBUG oslo_concurrency.lockutils [None req-86b4efd8-e845-47b8-839d-e817d119699f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 10:02:53 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Nov 24 10:02:53 compute-1 ceph-mon[80009]: log_channel(audit) log [INF] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 10:02:53 compute-1 nova_compute[230010]: 2025-11-24 10:02:53.025 230014 DEBUG nova.virt.hardware [None req-86b4efd8-e845-47b8-839d-e817d119699f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 24 10:02:53 compute-1 nova_compute[230010]: 2025-11-24 10:02:53.025 230014 INFO nova.compute.claims [None req-86b4efd8-e845-47b8-839d-e817d119699f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 16f34aac-788f-4079-9636-0db2c8de6422] Claim successful on node compute-1.ctlplane.example.com
Nov 24 10:02:53 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 24 10:02:53 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 10:02:53 compute-1 nova_compute[230010]: 2025-11-24 10:02:53.142 230014 DEBUG oslo_concurrency.processutils [None req-86b4efd8-e845-47b8-839d-e817d119699f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 10:02:53 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:02:53 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:02:53 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:02:53.270 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:02:53 compute-1 ceph-mon[80009]: pgmap v1083: 353 pgs: 353 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 102 op/s
Nov 24 10:02:53 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 10:02:53 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 10:02:53 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 10:02:53 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 10:02:53 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 10:02:53 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 10:02:53 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 10:02:53 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 24 10:02:53 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4175873145' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 10:02:53 compute-1 nova_compute[230010]: 2025-11-24 10:02:53.632 230014 DEBUG oslo_concurrency.processutils [None req-86b4efd8-e845-47b8-839d-e817d119699f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.490s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 10:02:53 compute-1 nova_compute[230010]: 2025-11-24 10:02:53.638 230014 DEBUG nova.compute.provider_tree [None req-86b4efd8-e845-47b8-839d-e817d119699f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Inventory has not changed in ProviderTree for provider: 1b7b0f22-dba8-42a8-9de3-763c9152946e update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 24 10:02:53 compute-1 nova_compute[230010]: 2025-11-24 10:02:53.652 230014 DEBUG nova.scheduler.client.report [None req-86b4efd8-e845-47b8-839d-e817d119699f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Inventory has not changed for provider 1b7b0f22-dba8-42a8-9de3-763c9152946e based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 24 10:02:53 compute-1 nova_compute[230010]: 2025-11-24 10:02:53.673 230014 DEBUG oslo_concurrency.lockutils [None req-86b4efd8-e845-47b8-839d-e817d119699f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.655s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 10:02:53 compute-1 nova_compute[230010]: 2025-11-24 10:02:53.674 230014 DEBUG nova.compute.manager [None req-86b4efd8-e845-47b8-839d-e817d119699f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 16f34aac-788f-4079-9636-0db2c8de6422] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 24 10:02:53 compute-1 nova_compute[230010]: 2025-11-24 10:02:53.734 230014 DEBUG nova.compute.manager [None req-86b4efd8-e845-47b8-839d-e817d119699f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 16f34aac-788f-4079-9636-0db2c8de6422] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 24 10:02:53 compute-1 nova_compute[230010]: 2025-11-24 10:02:53.735 230014 DEBUG nova.network.neutron [None req-86b4efd8-e845-47b8-839d-e817d119699f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 16f34aac-788f-4079-9636-0db2c8de6422] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 24 10:02:53 compute-1 nova_compute[230010]: 2025-11-24 10:02:53.760 230014 INFO nova.virt.libvirt.driver [None req-86b4efd8-e845-47b8-839d-e817d119699f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 16f34aac-788f-4079-9636-0db2c8de6422] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 24 10:02:53 compute-1 nova_compute[230010]: 2025-11-24 10:02:53.782 230014 DEBUG nova.compute.manager [None req-86b4efd8-e845-47b8-839d-e817d119699f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 16f34aac-788f-4079-9636-0db2c8de6422] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 24 10:02:53 compute-1 nova_compute[230010]: 2025-11-24 10:02:53.876 230014 DEBUG nova.compute.manager [None req-86b4efd8-e845-47b8-839d-e817d119699f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 16f34aac-788f-4079-9636-0db2c8de6422] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 24 10:02:53 compute-1 nova_compute[230010]: 2025-11-24 10:02:53.878 230014 DEBUG nova.virt.libvirt.driver [None req-86b4efd8-e845-47b8-839d-e817d119699f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 16f34aac-788f-4079-9636-0db2c8de6422] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 24 10:02:53 compute-1 nova_compute[230010]: 2025-11-24 10:02:53.878 230014 INFO nova.virt.libvirt.driver [None req-86b4efd8-e845-47b8-839d-e817d119699f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 16f34aac-788f-4079-9636-0db2c8de6422] Creating image(s)
Nov 24 10:02:53 compute-1 nova_compute[230010]: 2025-11-24 10:02:53.904 230014 DEBUG nova.storage.rbd_utils [None req-86b4efd8-e845-47b8-839d-e817d119699f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] rbd image 16f34aac-788f-4079-9636-0db2c8de6422_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 24 10:02:53 compute-1 nova_compute[230010]: 2025-11-24 10:02:53.931 230014 DEBUG nova.storage.rbd_utils [None req-86b4efd8-e845-47b8-839d-e817d119699f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] rbd image 16f34aac-788f-4079-9636-0db2c8de6422_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 24 10:02:53 compute-1 nova_compute[230010]: 2025-11-24 10:02:53.957 230014 DEBUG nova.storage.rbd_utils [None req-86b4efd8-e845-47b8-839d-e817d119699f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] rbd image 16f34aac-788f-4079-9636-0db2c8de6422_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 24 10:02:53 compute-1 nova_compute[230010]: 2025-11-24 10:02:53.962 230014 DEBUG oslo_concurrency.processutils [None req-86b4efd8-e845-47b8-839d-e817d119699f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/2ed5c667523487159c4c4503c82babbc95dbae40 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 10:02:54 compute-1 nova_compute[230010]: 2025-11-24 10:02:54.035 230014 DEBUG oslo_concurrency.processutils [None req-86b4efd8-e845-47b8-839d-e817d119699f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/2ed5c667523487159c4c4503c82babbc95dbae40 --force-share --output=json" returned: 0 in 0.073s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
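The cached base image is probed with qemu-img info before any copy; the oslo_concurrency.prlimit wrapper caps the child's address space at 1 GiB (--as=1073741824) and its CPU time at 30 s so a malformed image cannot hang or balloon the agent, and --force-share allows probing an image that may be open elsewhere. A hedged sketch of the same probe from Python (paths copied from the log; the JSON keys are standard qemu-img output fields):

    import json, subprocess

    # Sketch of the probe logged above: qemu-img info with JSON output,
    # wrapped in oslo_concurrency.prlimit to bound memory and CPU time.
    base = '/var/lib/nova/instances/_base/2ed5c667523487159c4c4503c82babbc95dbae40'
    cmd = ['/usr/bin/python3', '-m', 'oslo_concurrency.prlimit',
           '--as=1073741824', '--cpu=30', '--',
           'env', 'LC_ALL=C', 'LANG=C',
           'qemu-img', 'info', base, '--force-share', '--output=json']
    info = json.loads(subprocess.check_output(cmd))
    print(info['format'], info['virtual-size'])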
Nov 24 10:02:54 compute-1 nova_compute[230010]: 2025-11-24 10:02:54.036 230014 DEBUG oslo_concurrency.lockutils [None req-86b4efd8-e845-47b8-839d-e817d119699f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Acquiring lock "2ed5c667523487159c4c4503c82babbc95dbae40" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 10:02:54 compute-1 nova_compute[230010]: 2025-11-24 10:02:54.037 230014 DEBUG oslo_concurrency.lockutils [None req-86b4efd8-e845-47b8-839d-e817d119699f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lock "2ed5c667523487159c4c4503c82babbc95dbae40" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 10:02:54 compute-1 nova_compute[230010]: 2025-11-24 10:02:54.037 230014 DEBUG oslo_concurrency.lockutils [None req-86b4efd8-e845-47b8-839d-e817d119699f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lock "2ed5c667523487159c4c4503c82babbc95dbae40" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 10:02:54 compute-1 nova_compute[230010]: 2025-11-24 10:02:54.059 230014 DEBUG nova.storage.rbd_utils [None req-86b4efd8-e845-47b8-839d-e817d119699f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] rbd image 16f34aac-788f-4079-9636-0db2c8de6422_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 24 10:02:54 compute-1 nova_compute[230010]: 2025-11-24 10:02:54.064 230014 DEBUG oslo_concurrency.processutils [None req-86b4efd8-e845-47b8-839d-e817d119699f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/2ed5c667523487159c4c4503c82babbc95dbae40 16f34aac-788f-4079-9636-0db2c8de6422_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 10:02:54 compute-1 nova_compute[230010]: 2025-11-24 10:02:54.323 230014 DEBUG oslo_concurrency.processutils [None req-86b4efd8-e845-47b8-839d-e817d119699f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/2ed5c667523487159c4c4503c82babbc95dbae40 16f34aac-788f-4079-9636-0db2c8de6422_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.258s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 10:02:54 compute-1 nova_compute[230010]: 2025-11-24 10:02:54.409 230014 DEBUG nova.storage.rbd_utils [None req-86b4efd8-e845-47b8-839d-e817d119699f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] resizing rbd image 16f34aac-788f-4079-9636-0db2c8de6422_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
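The resize target is simply the flavor's root disk expressed in bytes; the m1.nano flavor dumped later in this trace has root_gb=1, hence 1073741824. For illustration:

    root_gb = 1                       # m1.nano flavor (root_gb=1, see the flavor dump below)
    size_bytes = root_gb * 1024 ** 3
    assert size_bytes == 1073741824   # matches the resize logged above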
Nov 24 10:02:54 compute-1 ceph-mon[80009]: pgmap v1084: 353 pgs: 353 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 13 KiB/s wr, 80 op/s
Nov 24 10:02:54 compute-1 ceph-mon[80009]: from='client.? 192.168.122.101:0/4175873145' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 10:02:54 compute-1 nova_compute[230010]: 2025-11-24 10:02:54.451 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:02:54 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:02:54 compute-1 nova_compute[230010]: 2025-11-24 10:02:54.536 230014 DEBUG nova.policy [None req-86b4efd8-e845-47b8-839d-e817d119699f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '43f79ff3105e4372a3c095e8057d4f1f', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '94d069fc040647d5a6e54894eec915fe', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
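This "failed" policy check is informational rather than an error: nova probes whether the requester may attach to external networks, and upstream defaults make network:attach_external_network admin-only, which the reader/member credentials above do not satisfy. A hedged oslo.policy sketch (the admin-only check string is an assumption based on upstream nova defaults):

    from oslo_config import cfg
    from oslo_policy import policy

    # Assumption: 'is_admin:True' approximates nova's default rule for
    # network:attach_external_network; evaluate it against the creds above.
    enforcer = policy.Enforcer(cfg.CONF)
    enforcer.register_default(
        policy.RuleDefault('network:attach_external_network', 'is_admin:True'))
    creds = {'roles': ['reader', 'member'], 'is_admin': False,
             'project_id': '94d069fc040647d5a6e54894eec915fe'}
    print(enforcer.enforce('network:attach_external_network', {}, creds))  # False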
Nov 24 10:02:54 compute-1 nova_compute[230010]: 2025-11-24 10:02:54.546 230014 DEBUG nova.objects.instance [None req-86b4efd8-e845-47b8-839d-e817d119699f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lazy-loading 'migration_context' on Instance uuid 16f34aac-788f-4079-9636-0db2c8de6422 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 24 10:02:54 compute-1 nova_compute[230010]: 2025-11-24 10:02:54.559 230014 DEBUG nova.virt.libvirt.driver [None req-86b4efd8-e845-47b8-839d-e817d119699f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 16f34aac-788f-4079-9636-0db2c8de6422] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 24 10:02:54 compute-1 nova_compute[230010]: 2025-11-24 10:02:54.560 230014 DEBUG nova.virt.libvirt.driver [None req-86b4efd8-e845-47b8-839d-e817d119699f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 16f34aac-788f-4079-9636-0db2c8de6422] Ensure instance console log exists: /var/lib/nova/instances/16f34aac-788f-4079-9636-0db2c8de6422/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 24 10:02:54 compute-1 nova_compute[230010]: 2025-11-24 10:02:54.560 230014 DEBUG oslo_concurrency.lockutils [None req-86b4efd8-e845-47b8-839d-e817d119699f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 10:02:54 compute-1 nova_compute[230010]: 2025-11-24 10:02:54.561 230014 DEBUG oslo_concurrency.lockutils [None req-86b4efd8-e845-47b8-839d-e817d119699f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 10:02:54 compute-1 nova_compute[230010]: 2025-11-24 10:02:54.561 230014 DEBUG oslo_concurrency.lockutils [None req-86b4efd8-e845-47b8-839d-e817d119699f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 10:02:54 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:02:54 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:02:54 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:02:54.694 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:02:55 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:02:55 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:02:55 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:02:55.273 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:02:56 compute-1 nova_compute[230010]: 2025-11-24 10:02:56.180 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:02:56 compute-1 podman[242554]: 2025-11-24 10:02:56.318702703 +0000 UTC m=+0.056357651 container health_status 6794d9d2f16f865643977633ea2bfec0506af6c4f15262c278b71a4c0754ea0b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 24 10:02:56 compute-1 nova_compute[230010]: 2025-11-24 10:02:56.521 230014 DEBUG nova.network.neutron [None req-86b4efd8-e845-47b8-839d-e817d119699f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 16f34aac-788f-4079-9636-0db2c8de6422] Successfully created port: 99ae7646-7560-4043-bead-b1665083257c _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 24 10:02:56 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:02:56 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:02:56 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:02:56.696 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:02:57 compute-1 ceph-mon[80009]: pgmap v1085: 353 pgs: 353 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 13 KiB/s wr, 80 op/s
Nov 24 10:02:57 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:02:57 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:02:57 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:02:57.276 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:02:57 compute-1 nova_compute[230010]: 2025-11-24 10:02:57.783 230014 DEBUG nova.network.neutron [None req-86b4efd8-e845-47b8-839d-e817d119699f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 16f34aac-788f-4079-9636-0db2c8de6422] Successfully updated port: 99ae7646-7560-4043-bead-b1665083257c _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 24 10:02:57 compute-1 nova_compute[230010]: 2025-11-24 10:02:57.799 230014 DEBUG oslo_concurrency.lockutils [None req-86b4efd8-e845-47b8-839d-e817d119699f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Acquiring lock "refresh_cache-16f34aac-788f-4079-9636-0db2c8de6422" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 24 10:02:57 compute-1 nova_compute[230010]: 2025-11-24 10:02:57.799 230014 DEBUG oslo_concurrency.lockutils [None req-86b4efd8-e845-47b8-839d-e817d119699f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Acquired lock "refresh_cache-16f34aac-788f-4079-9636-0db2c8de6422" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 24 10:02:57 compute-1 nova_compute[230010]: 2025-11-24 10:02:57.799 230014 DEBUG nova.network.neutron [None req-86b4efd8-e845-47b8-839d-e817d119699f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 16f34aac-788f-4079-9636-0db2c8de6422] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 24 10:02:57 compute-1 nova_compute[230010]: 2025-11-24 10:02:57.872 230014 DEBUG nova.compute.manager [req-b1ce6229-6cf4-49b0-9571-225463bf2b16 req-26514d08-eb9d-4065-bb32-3fdfe8063604 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 16f34aac-788f-4079-9636-0db2c8de6422] Received event network-changed-99ae7646-7560-4043-bead-b1665083257c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 24 10:02:57 compute-1 nova_compute[230010]: 2025-11-24 10:02:57.873 230014 DEBUG nova.compute.manager [req-b1ce6229-6cf4-49b0-9571-225463bf2b16 req-26514d08-eb9d-4065-bb32-3fdfe8063604 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 16f34aac-788f-4079-9636-0db2c8de6422] Refreshing instance network info cache due to event network-changed-99ae7646-7560-4043-bead-b1665083257c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 24 10:02:57 compute-1 nova_compute[230010]: 2025-11-24 10:02:57.873 230014 DEBUG oslo_concurrency.lockutils [req-b1ce6229-6cf4-49b0-9571-225463bf2b16 req-26514d08-eb9d-4065-bb32-3fdfe8063604 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Acquiring lock "refresh_cache-16f34aac-788f-4079-9636-0db2c8de6422" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
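The network-changed event in the preceding lines is delivered by Neutron through Nova's external-events API rather than observed locally. A hedged sketch of that notification (endpoint host and token are placeholders; the payload shape follows the os-server-external-events resource):

    import requests

    # Sketch of the call Neutron makes to Nova; service-token handling omitted.
    payload = {"events": [{
        "name": "network-changed",
        "server_uuid": "16f34aac-788f-4079-9636-0db2c8de6422",
        "tag": "99ae7646-7560-4043-bead-b1665083257c",
    }]}
    requests.post('http://nova-api.example.com/v2.1/os-server-external-events',
                  json=payload, headers={'X-Auth-Token': '<token>'})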
Nov 24 10:02:58 compute-1 ceph-mon[80009]: pgmap v1086: 353 pgs: 353 active+clean; 134 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 1.9 MiB/s wr, 109 op/s
Nov 24 10:02:58 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 24 10:02:58 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 24 10:02:58 compute-1 nova_compute[230010]: 2025-11-24 10:02:58.466 230014 DEBUG nova.network.neutron [None req-86b4efd8-e845-47b8-839d-e817d119699f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 16f34aac-788f-4079-9636-0db2c8de6422] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 24 10:02:58 compute-1 sudo[242575]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 24 10:02:58 compute-1 sudo[242575]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 10:02:58 compute-1 sudo[242575]: pam_unix(sudo:session): session closed for user root
Nov 24 10:02:58 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:02:58 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:02:58 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:02:58.699 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:02:59 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:02:59 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:02:59 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:02:59.279 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:02:59 compute-1 nova_compute[230010]: 2025-11-24 10:02:59.453 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:02:59 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 10:02:59 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 10:02:59 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:02:59 compute-1 nova_compute[230010]: 2025-11-24 10:02:59.520 230014 DEBUG nova.network.neutron [None req-86b4efd8-e845-47b8-839d-e817d119699f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 16f34aac-788f-4079-9636-0db2c8de6422] Updating instance_info_cache with network_info: [{"id": "99ae7646-7560-4043-bead-b1665083257c", "address": "fa:16:3e:30:f8:b9", "network": {"id": "d9ce2622-5822-4ecf-9fb9-f5f15c8ea094", "bridge": "br-int", "label": "tempest-network-smoke--73093411", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94d069fc040647d5a6e54894eec915fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap99ae7646-75", "ovs_interfaceid": "99ae7646-7560-4043-bead-b1665083257c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 24 10:02:59 compute-1 nova_compute[230010]: 2025-11-24 10:02:59.537 230014 DEBUG oslo_concurrency.lockutils [None req-86b4efd8-e845-47b8-839d-e817d119699f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Releasing lock "refresh_cache-16f34aac-788f-4079-9636-0db2c8de6422" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 24 10:02:59 compute-1 nova_compute[230010]: 2025-11-24 10:02:59.537 230014 DEBUG nova.compute.manager [None req-86b4efd8-e845-47b8-839d-e817d119699f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 16f34aac-788f-4079-9636-0db2c8de6422] Instance network_info: |[{"id": "99ae7646-7560-4043-bead-b1665083257c", "address": "fa:16:3e:30:f8:b9", "network": {"id": "d9ce2622-5822-4ecf-9fb9-f5f15c8ea094", "bridge": "br-int", "label": "tempest-network-smoke--73093411", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94d069fc040647d5a6e54894eec915fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap99ae7646-75", "ovs_interfaceid": "99ae7646-7560-4043-bead-b1665083257c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 24 10:02:59 compute-1 nova_compute[230010]: 2025-11-24 10:02:59.537 230014 DEBUG oslo_concurrency.lockutils [req-b1ce6229-6cf4-49b0-9571-225463bf2b16 req-26514d08-eb9d-4065-bb32-3fdfe8063604 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Acquired lock "refresh_cache-16f34aac-788f-4079-9636-0db2c8de6422" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 24 10:02:59 compute-1 nova_compute[230010]: 2025-11-24 10:02:59.538 230014 DEBUG nova.network.neutron [req-b1ce6229-6cf4-49b0-9571-225463bf2b16 req-26514d08-eb9d-4065-bb32-3fdfe8063604 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 16f34aac-788f-4079-9636-0db2c8de6422] Refreshing network info cache for port 99ae7646-7560-4043-bead-b1665083257c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 24 10:02:59 compute-1 nova_compute[230010]: 2025-11-24 10:02:59.540 230014 DEBUG nova.virt.libvirt.driver [None req-86b4efd8-e845-47b8-839d-e817d119699f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 16f34aac-788f-4079-9636-0db2c8de6422] Start _get_guest_xml network_info=[{"id": "99ae7646-7560-4043-bead-b1665083257c", "address": "fa:16:3e:30:f8:b9", "network": {"id": "d9ce2622-5822-4ecf-9fb9-f5f15c8ea094", "bridge": "br-int", "label": "tempest-network-smoke--73093411", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94d069fc040647d5a6e54894eec915fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap99ae7646-75", "ovs_interfaceid": "99ae7646-7560-4043-bead-b1665083257c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-24T09:52:37Z,direct_url=<?>,disk_format='qcow2',id=6ef14bdf-4f04-4400-8040-4409d9d5271e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='cf636babb68a4ebe9bf137d3fe0e4c0c',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-24T09:52:41Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'encryption_options': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'boot_index': 0, 'size': 0, 'disk_bus': 'virtio', 'encrypted': False, 'encryption_secret_uuid': None, 'image_id': '6ef14bdf-4f04-4400-8040-4409d9d5271e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 24 10:02:59 compute-1 nova_compute[230010]: 2025-11-24 10:02:59.545 230014 WARNING nova.virt.libvirt.driver [None req-86b4efd8-e845-47b8-839d-e817d119699f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 24 10:02:59 compute-1 nova_compute[230010]: 2025-11-24 10:02:59.563 230014 DEBUG nova.virt.libvirt.host [None req-86b4efd8-e845-47b8-839d-e817d119699f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 24 10:02:59 compute-1 nova_compute[230010]: 2025-11-24 10:02:59.564 230014 DEBUG nova.virt.libvirt.host [None req-86b4efd8-e845-47b8-839d-e817d119699f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 24 10:02:59 compute-1 nova_compute[230010]: 2025-11-24 10:02:59.571 230014 DEBUG nova.virt.libvirt.host [None req-86b4efd8-e845-47b8-839d-e817d119699f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 24 10:02:59 compute-1 nova_compute[230010]: 2025-11-24 10:02:59.571 230014 DEBUG nova.virt.libvirt.host [None req-86b4efd8-e845-47b8-839d-e817d119699f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
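Nova first looks for a cgroup v1 CPU controller and, failing that, checks cgroup v2 before deciding whether CPU shares and quotas can be applied to the guest. On a v2-only host like this one the probe amounts to reading the unified hierarchy's controller list, roughly:

    # On cgroup-v2 hosts the enabled controllers are listed in one file;
    # 'cpu' being present is what the log line above reports as "found".
    with open('/sys/fs/cgroup/cgroup.controllers') as f:
        print('cpu' in f.read().split())   # True on this host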
Nov 24 10:02:59 compute-1 nova_compute[230010]: 2025-11-24 10:02:59.572 230014 DEBUG nova.virt.libvirt.driver [None req-86b4efd8-e845-47b8-839d-e817d119699f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 24 10:02:59 compute-1 nova_compute[230010]: 2025-11-24 10:02:59.572 230014 DEBUG nova.virt.hardware [None req-86b4efd8-e845-47b8-839d-e817d119699f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-24T09:52:36Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='4a5d03ad-925b-45f1-89bd-f1325f9f3292',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-24T09:52:37Z,direct_url=<?>,disk_format='qcow2',id=6ef14bdf-4f04-4400-8040-4409d9d5271e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='cf636babb68a4ebe9bf137d3fe0e4c0c',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-24T09:52:41Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 24 10:02:59 compute-1 nova_compute[230010]: 2025-11-24 10:02:59.573 230014 DEBUG nova.virt.hardware [None req-86b4efd8-e845-47b8-839d-e817d119699f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 24 10:02:59 compute-1 nova_compute[230010]: 2025-11-24 10:02:59.574 230014 DEBUG nova.virt.hardware [None req-86b4efd8-e845-47b8-839d-e817d119699f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 24 10:02:59 compute-1 nova_compute[230010]: 2025-11-24 10:02:59.574 230014 DEBUG nova.virt.hardware [None req-86b4efd8-e845-47b8-839d-e817d119699f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 24 10:02:59 compute-1 nova_compute[230010]: 2025-11-24 10:02:59.575 230014 DEBUG nova.virt.hardware [None req-86b4efd8-e845-47b8-839d-e817d119699f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 24 10:02:59 compute-1 nova_compute[230010]: 2025-11-24 10:02:59.575 230014 DEBUG nova.virt.hardware [None req-86b4efd8-e845-47b8-839d-e817d119699f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 24 10:02:59 compute-1 nova_compute[230010]: 2025-11-24 10:02:59.576 230014 DEBUG nova.virt.hardware [None req-86b4efd8-e845-47b8-839d-e817d119699f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 24 10:02:59 compute-1 nova_compute[230010]: 2025-11-24 10:02:59.576 230014 DEBUG nova.virt.hardware [None req-86b4efd8-e845-47b8-839d-e817d119699f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 24 10:02:59 compute-1 nova_compute[230010]: 2025-11-24 10:02:59.577 230014 DEBUG nova.virt.hardware [None req-86b4efd8-e845-47b8-839d-e817d119699f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 24 10:02:59 compute-1 nova_compute[230010]: 2025-11-24 10:02:59.577 230014 DEBUG nova.virt.hardware [None req-86b4efd8-e845-47b8-839d-e817d119699f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 24 10:02:59 compute-1 nova_compute[230010]: 2025-11-24 10:02:59.578 230014 DEBUG nova.virt.hardware [None req-86b4efd8-e845-47b8-839d-e817d119699f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
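With all limits and preferences at 0:0:0, nova falls back to its default preference of one core and one thread per socket, so a 1-vCPU guest can only resolve to sockets=1, cores=1, threads=1, the topology that appears in the generated XML below. A simplified sketch of that fallback (nova's real logic lives in nova/virt/hardware.py):

    def flat_topology(vcpus):
        # Default preference: pack each vCPU into its own socket.
        return {'sockets': vcpus, 'cores': 1, 'threads': 1}

    print(flat_topology(1))   # {'sockets': 1, 'cores': 1, 'threads': 1}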
Nov 24 10:02:59 compute-1 nova_compute[230010]: 2025-11-24 10:02:59.583 230014 DEBUG oslo_concurrency.processutils [None req-86b4efd8-e845-47b8-839d-e817d119699f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 10:03:00 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Nov 24 10:03:00 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/139457227' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 24 10:03:00 compute-1 nova_compute[230010]: 2025-11-24 10:03:00.041 230014 DEBUG oslo_concurrency.processutils [None req-86b4efd8-e845-47b8-839d-e817d119699f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.459s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
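The mon dump output is how nova learns the monitor addresses that appear as <host> entries in the RBD disk XML further down. A hedged sketch of extracting them (the public_addr field name follows the legacy monmap JSON; newer Ceph releases also expose a public_addrs addrvec):

    import json, subprocess

    out = subprocess.check_output(
        ['ceph', 'mon', 'dump', '--format=json',
         '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf'])
    monmap = json.loads(out)
    # Strip the trailing '/nonce' from e.g. '192.168.122.100:6789/0'.
    hosts = [m['public_addr'].split('/')[0] for m in monmap['mons']]
    print(hosts)   # e.g. ['192.168.122.100:6789', '192.168.122.102:6789', '192.168.122.101:6789']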
Nov 24 10:03:00 compute-1 nova_compute[230010]: 2025-11-24 10:03:00.069 230014 DEBUG nova.storage.rbd_utils [None req-86b4efd8-e845-47b8-839d-e817d119699f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] rbd image 16f34aac-788f-4079-9636-0db2c8de6422_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 24 10:03:00 compute-1 nova_compute[230010]: 2025-11-24 10:03:00.074 230014 DEBUG oslo_concurrency.processutils [None req-86b4efd8-e845-47b8-839d-e817d119699f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 10:03:00 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 10:03:00 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 10:03:00 compute-1 ceph-mon[80009]: pgmap v1087: 353 pgs: 353 active+clean; 134 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.9 MiB/s wr, 95 op/s
Nov 24 10:03:00 compute-1 ceph-mon[80009]: from='client.? 192.168.122.101:0/139457227' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 24 10:03:00 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 10:03:00 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Nov 24 10:03:00 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3946666677' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 24 10:03:00 compute-1 nova_compute[230010]: 2025-11-24 10:03:00.530 230014 DEBUG oslo_concurrency.processutils [None req-86b4efd8-e845-47b8-839d-e817d119699f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.456s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 10:03:00 compute-1 nova_compute[230010]: 2025-11-24 10:03:00.532 230014 DEBUG nova.virt.libvirt.vif [None req-86b4efd8-e845-47b8-839d-e817d119699f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-24T10:02:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-2027370088',display_name='tempest-TestNetworkBasicOps-server-2027370088',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-2027370088',id=12,image_ref='6ef14bdf-4f04-4400-8040-4409d9d5271e',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGm7yTWaQNPdMC8QFfBuLjQRH6ApcYu+qgaY7VksG3yV1HCE4jpliKx7D8r+sNe/kvB8dUvGyFVNy/wUcpBDiRvylUupCj2Y07y6yC0JXN3khCgh2GMBQWQ7Dhz5WIb2PQ==',key_name='tempest-TestNetworkBasicOps-2017683419',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='94d069fc040647d5a6e54894eec915fe',ramdisk_id='',reservation_id='r-jq31848e',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6ef14bdf-4f04-4400-8040-4409d9d5271e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1844071378',owner_user_name='tempest-TestNetworkBasicOps-1844071378-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-24T10:02:53Z,user_data=None,user_id='43f79ff3105e4372a3c095e8057d4f1f',uuid=16f34aac-788f-4079-9636-0db2c8de6422,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "99ae7646-7560-4043-bead-b1665083257c", "address": "fa:16:3e:30:f8:b9", "network": {"id": "d9ce2622-5822-4ecf-9fb9-f5f15c8ea094", "bridge": "br-int", "label": "tempest-network-smoke--73093411", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94d069fc040647d5a6e54894eec915fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap99ae7646-75", "ovs_interfaceid": "99ae7646-7560-4043-bead-b1665083257c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 24 10:03:00 compute-1 nova_compute[230010]: 2025-11-24 10:03:00.533 230014 DEBUG nova.network.os_vif_util [None req-86b4efd8-e845-47b8-839d-e817d119699f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Converting VIF {"id": "99ae7646-7560-4043-bead-b1665083257c", "address": "fa:16:3e:30:f8:b9", "network": {"id": "d9ce2622-5822-4ecf-9fb9-f5f15c8ea094", "bridge": "br-int", "label": "tempest-network-smoke--73093411", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94d069fc040647d5a6e54894eec915fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap99ae7646-75", "ovs_interfaceid": "99ae7646-7560-4043-bead-b1665083257c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 24 10:03:00 compute-1 nova_compute[230010]: 2025-11-24 10:03:00.535 230014 DEBUG nova.network.os_vif_util [None req-86b4efd8-e845-47b8-839d-e817d119699f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:30:f8:b9,bridge_name='br-int',has_traffic_filtering=True,id=99ae7646-7560-4043-bead-b1665083257c,network=Network(d9ce2622-5822-4ecf-9fb9-f5f15c8ea094),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap99ae7646-75') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
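The neutron port dict is translated into an os-vif versioned object before the plug step; the field set in the repr above (bridge_name, vif_name, has_traffic_filtering, ...) is what the ovs plugin consumes. A hedged sketch of building the equivalent object directly (values copied from the log; only a subset of Network fields is populated, and os_vif.initialize() is included on the assumption the plugins will be used afterwards):

    import os_vif
    from os_vif.objects import network, vif

    os_vif.initialize()   # loads the plug/unplug plugins, e.g. 'ovs'
    net = network.Network(id='d9ce2622-5822-4ecf-9fb9-f5f15c8ea094',
                          bridge='br-int', mtu=1442)
    v = vif.VIFOpenVSwitch(id='99ae7646-7560-4043-bead-b1665083257c',
                           address='fa:16:3e:30:f8:b9', network=net,
                           vif_name='tap99ae7646-75', bridge_name='br-int',
                           has_traffic_filtering=True, preserve_on_delete=False)
    print(v.vif_name, v.bridge_name)   # tap99ae7646-75 br-int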
Nov 24 10:03:00 compute-1 nova_compute[230010]: 2025-11-24 10:03:00.536 230014 DEBUG nova.objects.instance [None req-86b4efd8-e845-47b8-839d-e817d119699f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lazy-loading 'pci_devices' on Instance uuid 16f34aac-788f-4079-9636-0db2c8de6422 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 24 10:03:00 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:03:00 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:03:00 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:03:00.702 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:03:00 compute-1 nova_compute[230010]: 2025-11-24 10:03:00.914 230014 DEBUG nova.virt.libvirt.driver [None req-86b4efd8-e845-47b8-839d-e817d119699f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 16f34aac-788f-4079-9636-0db2c8de6422] End _get_guest_xml xml=<domain type="kvm">
Nov 24 10:03:00 compute-1 nova_compute[230010]:   <uuid>16f34aac-788f-4079-9636-0db2c8de6422</uuid>
Nov 24 10:03:00 compute-1 nova_compute[230010]:   <name>instance-0000000c</name>
Nov 24 10:03:00 compute-1 nova_compute[230010]:   <memory>131072</memory>
Nov 24 10:03:00 compute-1 nova_compute[230010]:   <vcpu>1</vcpu>
Nov 24 10:03:00 compute-1 nova_compute[230010]:   <metadata>
Nov 24 10:03:00 compute-1 nova_compute[230010]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 24 10:03:00 compute-1 nova_compute[230010]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 24 10:03:00 compute-1 nova_compute[230010]:       <nova:name>tempest-TestNetworkBasicOps-server-2027370088</nova:name>
Nov 24 10:03:00 compute-1 nova_compute[230010]:       <nova:creationTime>2025-11-24 10:02:59</nova:creationTime>
Nov 24 10:03:00 compute-1 nova_compute[230010]:       <nova:flavor name="m1.nano">
Nov 24 10:03:00 compute-1 nova_compute[230010]:         <nova:memory>128</nova:memory>
Nov 24 10:03:00 compute-1 nova_compute[230010]:         <nova:disk>1</nova:disk>
Nov 24 10:03:00 compute-1 nova_compute[230010]:         <nova:swap>0</nova:swap>
Nov 24 10:03:00 compute-1 nova_compute[230010]:         <nova:ephemeral>0</nova:ephemeral>
Nov 24 10:03:00 compute-1 nova_compute[230010]:         <nova:vcpus>1</nova:vcpus>
Nov 24 10:03:00 compute-1 nova_compute[230010]:       </nova:flavor>
Nov 24 10:03:00 compute-1 nova_compute[230010]:       <nova:owner>
Nov 24 10:03:00 compute-1 nova_compute[230010]:         <nova:user uuid="43f79ff3105e4372a3c095e8057d4f1f">tempest-TestNetworkBasicOps-1844071378-project-member</nova:user>
Nov 24 10:03:00 compute-1 nova_compute[230010]:         <nova:project uuid="94d069fc040647d5a6e54894eec915fe">tempest-TestNetworkBasicOps-1844071378</nova:project>
Nov 24 10:03:00 compute-1 nova_compute[230010]:       </nova:owner>
Nov 24 10:03:00 compute-1 nova_compute[230010]:       <nova:root type="image" uuid="6ef14bdf-4f04-4400-8040-4409d9d5271e"/>
Nov 24 10:03:00 compute-1 nova_compute[230010]:       <nova:ports>
Nov 24 10:03:00 compute-1 nova_compute[230010]:         <nova:port uuid="99ae7646-7560-4043-bead-b1665083257c">
Nov 24 10:03:00 compute-1 nova_compute[230010]:           <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Nov 24 10:03:00 compute-1 nova_compute[230010]:         </nova:port>
Nov 24 10:03:00 compute-1 nova_compute[230010]:       </nova:ports>
Nov 24 10:03:00 compute-1 nova_compute[230010]:     </nova:instance>
Nov 24 10:03:00 compute-1 nova_compute[230010]:   </metadata>
Nov 24 10:03:00 compute-1 nova_compute[230010]:   <sysinfo type="smbios">
Nov 24 10:03:00 compute-1 nova_compute[230010]:     <system>
Nov 24 10:03:00 compute-1 nova_compute[230010]:       <entry name="manufacturer">RDO</entry>
Nov 24 10:03:00 compute-1 nova_compute[230010]:       <entry name="product">OpenStack Compute</entry>
Nov 24 10:03:00 compute-1 nova_compute[230010]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 24 10:03:00 compute-1 nova_compute[230010]:       <entry name="serial">16f34aac-788f-4079-9636-0db2c8de6422</entry>
Nov 24 10:03:00 compute-1 nova_compute[230010]:       <entry name="uuid">16f34aac-788f-4079-9636-0db2c8de6422</entry>
Nov 24 10:03:00 compute-1 nova_compute[230010]:       <entry name="family">Virtual Machine</entry>
Nov 24 10:03:00 compute-1 nova_compute[230010]:     </system>
Nov 24 10:03:00 compute-1 nova_compute[230010]:   </sysinfo>
Nov 24 10:03:00 compute-1 nova_compute[230010]:   <os>
Nov 24 10:03:00 compute-1 nova_compute[230010]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 24 10:03:00 compute-1 nova_compute[230010]:     <boot dev="hd"/>
Nov 24 10:03:00 compute-1 nova_compute[230010]:     <smbios mode="sysinfo"/>
Nov 24 10:03:00 compute-1 nova_compute[230010]:   </os>
Nov 24 10:03:00 compute-1 nova_compute[230010]:   <features>
Nov 24 10:03:00 compute-1 nova_compute[230010]:     <acpi/>
Nov 24 10:03:00 compute-1 nova_compute[230010]:     <apic/>
Nov 24 10:03:00 compute-1 nova_compute[230010]:     <vmcoreinfo/>
Nov 24 10:03:00 compute-1 nova_compute[230010]:   </features>
Nov 24 10:03:00 compute-1 nova_compute[230010]:   <clock offset="utc">
Nov 24 10:03:00 compute-1 nova_compute[230010]:     <timer name="pit" tickpolicy="delay"/>
Nov 24 10:03:00 compute-1 nova_compute[230010]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 24 10:03:00 compute-1 nova_compute[230010]:     <timer name="hpet" present="no"/>
Nov 24 10:03:00 compute-1 nova_compute[230010]:   </clock>
Nov 24 10:03:00 compute-1 nova_compute[230010]:   <cpu mode="host-model" match="exact">
Nov 24 10:03:00 compute-1 nova_compute[230010]:     <topology sockets="1" cores="1" threads="1"/>
Nov 24 10:03:00 compute-1 nova_compute[230010]:   </cpu>
Nov 24 10:03:00 compute-1 nova_compute[230010]:   <devices>
Nov 24 10:03:00 compute-1 nova_compute[230010]:     <disk type="network" device="disk">
Nov 24 10:03:00 compute-1 nova_compute[230010]:       <driver type="raw" cache="none"/>
Nov 24 10:03:00 compute-1 nova_compute[230010]:       <source protocol="rbd" name="vms/16f34aac-788f-4079-9636-0db2c8de6422_disk">
Nov 24 10:03:00 compute-1 nova_compute[230010]:         <host name="192.168.122.100" port="6789"/>
Nov 24 10:03:00 compute-1 nova_compute[230010]:         <host name="192.168.122.102" port="6789"/>
Nov 24 10:03:00 compute-1 nova_compute[230010]:         <host name="192.168.122.101" port="6789"/>
Nov 24 10:03:00 compute-1 nova_compute[230010]:       </source>
Nov 24 10:03:00 compute-1 nova_compute[230010]:       <auth username="openstack">
Nov 24 10:03:00 compute-1 nova_compute[230010]:         <secret type="ceph" uuid="84a084c3-61a7-5de7-8207-1f88efa59a64"/>
Nov 24 10:03:00 compute-1 nova_compute[230010]:       </auth>
Nov 24 10:03:00 compute-1 nova_compute[230010]:       <target dev="vda" bus="virtio"/>
Nov 24 10:03:00 compute-1 nova_compute[230010]:     </disk>
Nov 24 10:03:00 compute-1 nova_compute[230010]:     <disk type="network" device="cdrom">
Nov 24 10:03:00 compute-1 nova_compute[230010]:       <driver type="raw" cache="none"/>
Nov 24 10:03:00 compute-1 nova_compute[230010]:       <source protocol="rbd" name="vms/16f34aac-788f-4079-9636-0db2c8de6422_disk.config">
Nov 24 10:03:00 compute-1 nova_compute[230010]:         <host name="192.168.122.100" port="6789"/>
Nov 24 10:03:00 compute-1 nova_compute[230010]:         <host name="192.168.122.102" port="6789"/>
Nov 24 10:03:00 compute-1 nova_compute[230010]:         <host name="192.168.122.101" port="6789"/>
Nov 24 10:03:00 compute-1 nova_compute[230010]:       </source>
Nov 24 10:03:00 compute-1 nova_compute[230010]:       <auth username="openstack">
Nov 24 10:03:00 compute-1 nova_compute[230010]:         <secret type="ceph" uuid="84a084c3-61a7-5de7-8207-1f88efa59a64"/>
Nov 24 10:03:00 compute-1 nova_compute[230010]:       </auth>
Nov 24 10:03:00 compute-1 nova_compute[230010]:       <target dev="sda" bus="sata"/>
Nov 24 10:03:00 compute-1 nova_compute[230010]:     </disk>
Nov 24 10:03:00 compute-1 nova_compute[230010]:     <interface type="ethernet">
Nov 24 10:03:00 compute-1 nova_compute[230010]:       <mac address="fa:16:3e:30:f8:b9"/>
Nov 24 10:03:00 compute-1 nova_compute[230010]:       <model type="virtio"/>
Nov 24 10:03:00 compute-1 nova_compute[230010]:       <driver name="vhost" rx_queue_size="512"/>
Nov 24 10:03:00 compute-1 nova_compute[230010]:       <mtu size="1442"/>
Nov 24 10:03:00 compute-1 nova_compute[230010]:       <target dev="tap99ae7646-75"/>
Nov 24 10:03:00 compute-1 nova_compute[230010]:     </interface>
Nov 24 10:03:00 compute-1 nova_compute[230010]:     <serial type="pty">
Nov 24 10:03:00 compute-1 nova_compute[230010]:       <log file="/var/lib/nova/instances/16f34aac-788f-4079-9636-0db2c8de6422/console.log" append="off"/>
Nov 24 10:03:00 compute-1 nova_compute[230010]:     </serial>
Nov 24 10:03:00 compute-1 nova_compute[230010]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 24 10:03:00 compute-1 nova_compute[230010]:     <video>
Nov 24 10:03:00 compute-1 nova_compute[230010]:       <model type="virtio"/>
Nov 24 10:03:00 compute-1 nova_compute[230010]:     </video>
Nov 24 10:03:00 compute-1 nova_compute[230010]:     <input type="tablet" bus="usb"/>
Nov 24 10:03:00 compute-1 nova_compute[230010]:     <rng model="virtio">
Nov 24 10:03:00 compute-1 nova_compute[230010]:       <backend model="random">/dev/urandom</backend>
Nov 24 10:03:00 compute-1 nova_compute[230010]:     </rng>
Nov 24 10:03:00 compute-1 nova_compute[230010]:     <controller type="pci" model="pcie-root"/>
Nov 24 10:03:00 compute-1 nova_compute[230010]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 10:03:00 compute-1 nova_compute[230010]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 10:03:00 compute-1 nova_compute[230010]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 10:03:00 compute-1 nova_compute[230010]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 10:03:00 compute-1 nova_compute[230010]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 10:03:00 compute-1 nova_compute[230010]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 10:03:00 compute-1 nova_compute[230010]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 10:03:00 compute-1 nova_compute[230010]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 10:03:00 compute-1 nova_compute[230010]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 10:03:00 compute-1 nova_compute[230010]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 10:03:00 compute-1 nova_compute[230010]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 10:03:00 compute-1 nova_compute[230010]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 10:03:00 compute-1 nova_compute[230010]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 10:03:00 compute-1 nova_compute[230010]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 10:03:00 compute-1 nova_compute[230010]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 10:03:00 compute-1 nova_compute[230010]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 10:03:00 compute-1 nova_compute[230010]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 10:03:00 compute-1 nova_compute[230010]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 10:03:00 compute-1 nova_compute[230010]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 10:03:00 compute-1 nova_compute[230010]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 10:03:00 compute-1 nova_compute[230010]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 10:03:00 compute-1 nova_compute[230010]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 10:03:00 compute-1 nova_compute[230010]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 10:03:00 compute-1 nova_compute[230010]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 10:03:00 compute-1 nova_compute[230010]:     <controller type="usb" index="0"/>
Nov 24 10:03:00 compute-1 nova_compute[230010]:     <memballoon model="virtio">
Nov 24 10:03:00 compute-1 nova_compute[230010]:       <stats period="10"/>
Nov 24 10:03:00 compute-1 nova_compute[230010]:     </memballoon>
Nov 24 10:03:00 compute-1 nova_compute[230010]:   </devices>
Nov 24 10:03:00 compute-1 nova_compute[230010]: </domain>
Nov 24 10:03:00 compute-1 nova_compute[230010]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 24 10:03:00 compute-1 nova_compute[230010]: 2025-11-24 10:03:00.916 230014 DEBUG nova.compute.manager [None req-86b4efd8-e845-47b8-839d-e817d119699f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 16f34aac-788f-4079-9636-0db2c8de6422] Preparing to wait for external event network-vif-plugged-99ae7646-7560-4043-bead-b1665083257c prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 24 10:03:00 compute-1 nova_compute[230010]: 2025-11-24 10:03:00.916 230014 DEBUG oslo_concurrency.lockutils [None req-86b4efd8-e845-47b8-839d-e817d119699f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Acquiring lock "16f34aac-788f-4079-9636-0db2c8de6422-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 10:03:00 compute-1 nova_compute[230010]: 2025-11-24 10:03:00.917 230014 DEBUG oslo_concurrency.lockutils [None req-86b4efd8-e845-47b8-839d-e817d119699f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lock "16f34aac-788f-4079-9636-0db2c8de6422-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 10:03:00 compute-1 nova_compute[230010]: 2025-11-24 10:03:00.917 230014 DEBUG oslo_concurrency.lockutils [None req-86b4efd8-e845-47b8-839d-e817d119699f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lock "16f34aac-788f-4079-9636-0db2c8de6422-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
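
The lock sequence above is Nova registering a waiter for network-vif-plugged-99ae7646-7560-4043-bead-b1665083257c before the VIF is actually plugged, so a callback that arrives quickly cannot be missed. A minimal sketch of that register-first, wait-later pattern, with illustrative names rather than Nova's real API:

    import threading

    events = {}
    events_lock = threading.Lock()

    def prepare_for_event(tag):
        # register the waiter first, mirroring prepare_for_instance_event
        with events_lock:
            return events.setdefault(tag, threading.Event())

    def deliver_event(tag):
        # mirror external_instance_event: signal whoever is waiting
        with events_lock:
            ev = events.get(tag)
        if ev is not None:
            ev.set()

    waiter = prepare_for_event('network-vif-plugged-99ae7646')
    deliver_event('network-vif-plugged-99ae7646')  # normally driven by Neutron
    assert waiter.wait(timeout=300)

Registering before acting is the whole point: if the event fired between the plug and a late registration, the waiter would block until the timeout.
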
Nov 24 10:03:00 compute-1 nova_compute[230010]: 2025-11-24 10:03:00.918 230014 DEBUG nova.virt.libvirt.vif [None req-86b4efd8-e845-47b8-839d-e817d119699f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-24T10:02:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-2027370088',display_name='tempest-TestNetworkBasicOps-server-2027370088',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-2027370088',id=12,image_ref='6ef14bdf-4f04-4400-8040-4409d9d5271e',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGm7yTWaQNPdMC8QFfBuLjQRH6ApcYu+qgaY7VksG3yV1HCE4jpliKx7D8r+sNe/kvB8dUvGyFVNy/wUcpBDiRvylUupCj2Y07y6yC0JXN3khCgh2GMBQWQ7Dhz5WIb2PQ==',key_name='tempest-TestNetworkBasicOps-2017683419',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='94d069fc040647d5a6e54894eec915fe',ramdisk_id='',reservation_id='r-jq31848e',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6ef14bdf-4f04-4400-8040-4409d9d5271e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1844071378',owner_user_name='tempest-TestNetworkBasicOps-1844071378-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-24T10:02:53Z,user_data=None,user_id='43f79ff3105e4372a3c095e8057d4f1f',uuid=16f34aac-788f-4079-9636-0db2c8de6422,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "99ae7646-7560-4043-bead-b1665083257c", "address": "fa:16:3e:30:f8:b9", "network": {"id": "d9ce2622-5822-4ecf-9fb9-f5f15c8ea094", "bridge": "br-int", "label": "tempest-network-smoke--73093411", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94d069fc040647d5a6e54894eec915fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap99ae7646-75", "ovs_interfaceid": "99ae7646-7560-4043-bead-b1665083257c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 24 10:03:00 compute-1 nova_compute[230010]: 2025-11-24 10:03:00.918 230014 DEBUG nova.network.os_vif_util [None req-86b4efd8-e845-47b8-839d-e817d119699f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Converting VIF {"id": "99ae7646-7560-4043-bead-b1665083257c", "address": "fa:16:3e:30:f8:b9", "network": {"id": "d9ce2622-5822-4ecf-9fb9-f5f15c8ea094", "bridge": "br-int", "label": "tempest-network-smoke--73093411", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94d069fc040647d5a6e54894eec915fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap99ae7646-75", "ovs_interfaceid": "99ae7646-7560-4043-bead-b1665083257c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 24 10:03:00 compute-1 nova_compute[230010]: 2025-11-24 10:03:00.919 230014 DEBUG nova.network.os_vif_util [None req-86b4efd8-e845-47b8-839d-e817d119699f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:30:f8:b9,bridge_name='br-int',has_traffic_filtering=True,id=99ae7646-7560-4043-bead-b1665083257c,network=Network(d9ce2622-5822-4ecf-9fb9-f5f15c8ea094),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap99ae7646-75') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 24 10:03:00 compute-1 nova_compute[230010]: 2025-11-24 10:03:00.919 230014 DEBUG os_vif [None req-86b4efd8-e845-47b8-839d-e817d119699f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:30:f8:b9,bridge_name='br-int',has_traffic_filtering=True,id=99ae7646-7560-4043-bead-b1665083257c,network=Network(d9ce2622-5822-4ecf-9fb9-f5f15c8ea094),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap99ae7646-75') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 24 10:03:00 compute-1 nova_compute[230010]: 2025-11-24 10:03:00.920 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:03:00 compute-1 nova_compute[230010]: 2025-11-24 10:03:00.921 230014 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 24 10:03:00 compute-1 nova_compute[230010]: 2025-11-24 10:03:00.921 230014 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 24 10:03:00 compute-1 nova_compute[230010]: 2025-11-24 10:03:00.926 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:03:00 compute-1 nova_compute[230010]: 2025-11-24 10:03:00.926 230014 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap99ae7646-75, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 24 10:03:00 compute-1 nova_compute[230010]: 2025-11-24 10:03:00.926 230014 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap99ae7646-75, col_values=(('external_ids', {'iface-id': '99ae7646-7560-4043-bead-b1665083257c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:30:f8:b9', 'vm-uuid': '16f34aac-788f-4079-9636-0db2c8de6422'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 24 10:03:00 compute-1 nova_compute[230010]: 2025-11-24 10:03:00.964 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:03:00 compute-1 NetworkManager[48870]: <info>  [1763978580.9654] manager: (tap99ae7646-75): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/57)
Nov 24 10:03:00 compute-1 nova_compute[230010]: 2025-11-24 10:03:00.968 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 24 10:03:00 compute-1 nova_compute[230010]: 2025-11-24 10:03:00.973 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:03:00 compute-1 nova_compute[230010]: 2025-11-24 10:03:00.976 230014 INFO os_vif [None req-86b4efd8-e845-47b8-839d-e817d119699f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:30:f8:b9,bridge_name='br-int',has_traffic_filtering=True,id=99ae7646-7560-4043-bead-b1665083257c,network=Network(d9ce2622-5822-4ecf-9fb9-f5f15c8ea094),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap99ae7646-75')
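
The plug that just succeeded reduces to the two OVSDB commands logged above: add the tap device to br-int, then set the Interface external_ids that OVN keys on. A sketch of the same pair of operations issued directly with ovsdbapp, assuming the default local OVSDB socket path:

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    # assumed local socket; deployments may point at a TCP endpoint instead
    idl = connection.OvsdbIdl.from_server('unix:/run/openvswitch/db.sock',
                                          'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

    with api.transaction(check_error=True) as txn:
        txn.add(api.add_port('br-int', 'tap99ae7646-75', may_exist=True))
        txn.add(api.db_set(
            'Interface', 'tap99ae7646-75',
            ('external_ids',
             {'iface-id': '99ae7646-7560-4043-bead-b1665083257c',
              'iface-status': 'active',
              'attached-mac': 'fa:16:3e:30:f8:b9'})))

Writing iface-id is what later lets ovn-controller match this OVS interface to the logical port and claim it, as the binding messages further down show.
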
Nov 24 10:03:01 compute-1 nova_compute[230010]: 2025-11-24 10:03:01.029 230014 DEBUG nova.virt.libvirt.driver [None req-86b4efd8-e845-47b8-839d-e817d119699f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 24 10:03:01 compute-1 nova_compute[230010]: 2025-11-24 10:03:01.029 230014 DEBUG nova.virt.libvirt.driver [None req-86b4efd8-e845-47b8-839d-e817d119699f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 24 10:03:01 compute-1 nova_compute[230010]: 2025-11-24 10:03:01.029 230014 DEBUG nova.virt.libvirt.driver [None req-86b4efd8-e845-47b8-839d-e817d119699f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] No VIF found with MAC fa:16:3e:30:f8:b9, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 24 10:03:01 compute-1 nova_compute[230010]: 2025-11-24 10:03:01.030 230014 INFO nova.virt.libvirt.driver [None req-86b4efd8-e845-47b8-839d-e817d119699f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 16f34aac-788f-4079-9636-0db2c8de6422] Using config drive
Nov 24 10:03:01 compute-1 nova_compute[230010]: 2025-11-24 10:03:01.060 230014 DEBUG nova.storage.rbd_utils [None req-86b4efd8-e845-47b8-839d-e817d119699f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] rbd image 16f34aac-788f-4079-9636-0db2c8de6422_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 24 10:03:01 compute-1 nova_compute[230010]: 2025-11-24 10:03:01.182 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:03:01 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:03:01 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:03:01 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:03:01.282 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:03:01 compute-1 ceph-mon[80009]: from='client.? 192.168.122.101:0/3946666677' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 24 10:03:01 compute-1 nova_compute[230010]: 2025-11-24 10:03:01.509 230014 DEBUG nova.network.neutron [req-b1ce6229-6cf4-49b0-9571-225463bf2b16 req-26514d08-eb9d-4065-bb32-3fdfe8063604 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 16f34aac-788f-4079-9636-0db2c8de6422] Updated VIF entry in instance network info cache for port 99ae7646-7560-4043-bead-b1665083257c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 24 10:03:01 compute-1 nova_compute[230010]: 2025-11-24 10:03:01.510 230014 DEBUG nova.network.neutron [req-b1ce6229-6cf4-49b0-9571-225463bf2b16 req-26514d08-eb9d-4065-bb32-3fdfe8063604 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 16f34aac-788f-4079-9636-0db2c8de6422] Updating instance_info_cache with network_info: [{"id": "99ae7646-7560-4043-bead-b1665083257c", "address": "fa:16:3e:30:f8:b9", "network": {"id": "d9ce2622-5822-4ecf-9fb9-f5f15c8ea094", "bridge": "br-int", "label": "tempest-network-smoke--73093411", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94d069fc040647d5a6e54894eec915fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap99ae7646-75", "ovs_interfaceid": "99ae7646-7560-4043-bead-b1665083257c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 24 10:03:01 compute-1 nova_compute[230010]: 2025-11-24 10:03:01.518 230014 INFO nova.virt.libvirt.driver [None req-86b4efd8-e845-47b8-839d-e817d119699f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 16f34aac-788f-4079-9636-0db2c8de6422] Creating config drive at /var/lib/nova/instances/16f34aac-788f-4079-9636-0db2c8de6422/disk.config
Nov 24 10:03:01 compute-1 nova_compute[230010]: 2025-11-24 10:03:01.523 230014 DEBUG oslo_concurrency.processutils [None req-86b4efd8-e845-47b8-839d-e817d119699f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/16f34aac-788f-4079-9636-0db2c8de6422/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpnegmzrpt execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 10:03:01 compute-1 nova_compute[230010]: 2025-11-24 10:03:01.542 230014 DEBUG oslo_concurrency.lockutils [req-b1ce6229-6cf4-49b0-9571-225463bf2b16 req-26514d08-eb9d-4065-bb32-3fdfe8063604 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Releasing lock "refresh_cache-16f34aac-788f-4079-9636-0db2c8de6422" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 24 10:03:01 compute-1 nova_compute[230010]: 2025-11-24 10:03:01.649 230014 DEBUG oslo_concurrency.processutils [None req-86b4efd8-e845-47b8-839d-e817d119699f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/16f34aac-788f-4079-9636-0db2c8de6422/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpnegmzrpt" returned: 0 in 0.126s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 10:03:01 compute-1 nova_compute[230010]: 2025-11-24 10:03:01.677 230014 DEBUG nova.storage.rbd_utils [None req-86b4efd8-e845-47b8-839d-e817d119699f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] rbd image 16f34aac-788f-4079-9636-0db2c8de6422_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 24 10:03:01 compute-1 nova_compute[230010]: 2025-11-24 10:03:01.681 230014 DEBUG oslo_concurrency.processutils [None req-86b4efd8-e845-47b8-839d-e817d119699f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/16f34aac-788f-4079-9636-0db2c8de6422/disk.config 16f34aac-788f-4079-9636-0db2c8de6422_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 10:03:01 compute-1 nova_compute[230010]: 2025-11-24 10:03:01.844 230014 DEBUG oslo_concurrency.processutils [None req-86b4efd8-e845-47b8-839d-e817d119699f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/16f34aac-788f-4079-9636-0db2c8de6422/disk.config 16f34aac-788f-4079-9636-0db2c8de6422_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.164s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 10:03:01 compute-1 nova_compute[230010]: 2025-11-24 10:03:01.846 230014 INFO nova.virt.libvirt.driver [None req-86b4efd8-e845-47b8-839d-e817d119699f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 16f34aac-788f-4079-9636-0db2c8de6422] Deleting local config drive /var/lib/nova/instances/16f34aac-788f-4079-9636-0db2c8de6422/disk.config because it was imported into RBD.
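
The lines above are the whole config-drive flow: build an ISO9660 image from the staged metadata, import it into the Ceph vms pool, and drop the local copy. A sketch of the same three steps, reusing the paths and flags from the log (an illustration, not Nova's imagebackend code):

    import os
    import subprocess

    instance = '16f34aac-788f-4079-9636-0db2c8de6422'
    staging = '/tmp/tmpnegmzrpt'     # directory with the rendered metadata
    iso = '/var/lib/nova/instances/%s/disk.config' % instance

    # 1. build the ISO9660 config drive (volume label must be "config-2")
    subprocess.run(['mkisofs', '-o', iso, '-ldots', '-allow-lowercase',
                    '-allow-multidot', '-l', '-quiet', '-J', '-r',
                    '-V', 'config-2', staging], check=True)

    # 2. import it into the Ceph "vms" pool as <uuid>_disk.config
    subprocess.run(['rbd', 'import', '--pool', 'vms', iso,
                    '%s_disk.config' % instance, '--image-format=2',
                    '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf'],
                   check=True)

    # 3. the local copy is no longer needed once it lives in RBD
    os.unlink(iso)
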
Nov 24 10:03:01 compute-1 systemd[1]: Starting libvirt secret daemon...
Nov 24 10:03:01 compute-1 systemd[1]: Started libvirt secret daemon.
Nov 24 10:03:01 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Nov 24 10:03:01 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/865843196' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 24 10:03:01 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Nov 24 10:03:01 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/865843196' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 24 10:03:01 compute-1 kernel: tap99ae7646-75: entered promiscuous mode
Nov 24 10:03:01 compute-1 NetworkManager[48870]: <info>  [1763978581.9602] manager: (tap99ae7646-75): new Tun device (/org/freedesktop/NetworkManager/Devices/58)
Nov 24 10:03:01 compute-1 ovn_controller[132966]: 2025-11-24T10:03:01Z|00088|binding|INFO|Claiming lport 99ae7646-7560-4043-bead-b1665083257c for this chassis.
Nov 24 10:03:01 compute-1 ovn_controller[132966]: 2025-11-24T10:03:01Z|00089|binding|INFO|99ae7646-7560-4043-bead-b1665083257c: Claiming fa:16:3e:30:f8:b9 10.100.0.6
Nov 24 10:03:01 compute-1 nova_compute[230010]: 2025-11-24 10:03:01.961 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:03:01 compute-1 nova_compute[230010]: 2025-11-24 10:03:01.967 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:03:01 compute-1 nova_compute[230010]: 2025-11-24 10:03:01.972 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:03:01 compute-1 NetworkManager[48870]: <info>  [1763978581.9729] manager: (patch-br-int-to-provnet-aec09a4d-39ae-42d2-80ba-0cd5b53fed5d): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/59)
Nov 24 10:03:01 compute-1 NetworkManager[48870]: <info>  [1763978581.9736] manager: (patch-provnet-aec09a4d-39ae-42d2-80ba-0cd5b53fed5d-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/60)
Nov 24 10:03:01 compute-1 ovn_metadata_agent[142331]: 2025-11-24 10:03:01.978 142336 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:30:f8:b9 10.100.0.6'], port_security=['fa:16:3e:30:f8:b9 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '16f34aac-788f-4079-9636-0db2c8de6422', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d9ce2622-5822-4ecf-9fb9-f5f15c8ea094', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '94d069fc040647d5a6e54894eec915fe', 'neutron:revision_number': '2', 'neutron:security_group_ids': '0394b1e1-eb4e-4c88-8aad-cca296ee6f3d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f5c42cc1-2181-41fb-bb98-22dec924e208, chassis=[<ovs.db.idl.Row object at 0x7f5c78678ac0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5c78678ac0>], logical_port=99ae7646-7560-4043-bead-b1665083257c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 24 10:03:01 compute-1 ovn_metadata_agent[142331]: 2025-11-24 10:03:01.981 142336 INFO neutron.agent.ovn.metadata.agent [-] Port 99ae7646-7560-4043-bead-b1665083257c in datapath d9ce2622-5822-4ecf-9fb9-f5f15c8ea094 bound to our chassis
Nov 24 10:03:01 compute-1 ovn_metadata_agent[142331]: 2025-11-24 10:03:01.983 142336 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network d9ce2622-5822-4ecf-9fb9-f5f15c8ea094
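
The matched Port_Binding update above is how the metadata agent learns that the port is now bound to its chassis. A sketch of the same mechanism with ovsdbapp's RowEvent; the class name and handler body are illustrative, and registering the event on the agent's southbound connection is omitted:

    from ovsdbapp.backend.ovs_idl import event

    class PortBoundEvent(event.RowEvent):
        """Fire when a Port_Binding row gains a chassis (illustrative)."""

        def __init__(self):
            super().__init__((self.ROW_UPDATE,), 'Port_Binding', None)

        def match_fn(self, event_, row, old):
            # only when the chassis column actually changed and is now set
            return hasattr(old, 'chassis') and bool(row.chassis)

        def run(self, event_, row, old):
            print('port %s bound, provisioning metadata' % row.logical_port)
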
Nov 24 10:03:01 compute-1 systemd-udevd[242756]: Network interface NamePolicy= disabled on kernel command line.
Nov 24 10:03:01 compute-1 systemd-machined[193537]: New machine qemu-5-instance-0000000c.
Nov 24 10:03:01 compute-1 ovn_metadata_agent[142331]: 2025-11-24 10:03:01.998 234803 DEBUG oslo.privsep.daemon [-] privsep: reply[44c82093-949f-43ab-beef-5c33852a5cec]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 10:03:01 compute-1 ovn_metadata_agent[142331]: 2025-11-24 10:03:01.999 142336 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapd9ce2622-51 in ovnmeta-d9ce2622-5822-4ecf-9fb9-f5f15c8ea094 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 24 10:03:02 compute-1 ovn_metadata_agent[142331]: 2025-11-24 10:03:02.001 234803 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapd9ce2622-50 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 24 10:03:02 compute-1 ovn_metadata_agent[142331]: 2025-11-24 10:03:02.001 234803 DEBUG oslo.privsep.daemon [-] privsep: reply[6665be77-3dae-4cf1-a497-be601fee1fe6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 10:03:02 compute-1 ovn_metadata_agent[142331]: 2025-11-24 10:03:02.002 234803 DEBUG oslo.privsep.daemon [-] privsep: reply[54bacb12-2c3e-4978-8fc6-f3a1deca6a96]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 10:03:02 compute-1 ovn_metadata_agent[142331]: 2025-11-24 10:03:02.016 142476 DEBUG oslo.privsep.daemon [-] privsep: reply[1bf8b391-528c-4db9-96b4-20fb2ca3c52e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 10:03:02 compute-1 NetworkManager[48870]: <info>  [1763978582.0199] device (tap99ae7646-75): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 24 10:03:02 compute-1 NetworkManager[48870]: <info>  [1763978582.0225] device (tap99ae7646-75): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 24 10:03:02 compute-1 systemd[1]: Started Virtual Machine qemu-5-instance-0000000c.
Nov 24 10:03:02 compute-1 ovn_metadata_agent[142331]: 2025-11-24 10:03:02.043 234803 DEBUG oslo.privsep.daemon [-] privsep: reply[5768ebf8-6f7f-47a1-b83e-bff4dd321597]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 10:03:02 compute-1 ovn_controller[132966]: 2025-11-24T10:03:02Z|00090|binding|INFO|Setting lport 99ae7646-7560-4043-bead-b1665083257c up in Southbound
Nov 24 10:03:02 compute-1 ovn_metadata_agent[142331]: 2025-11-24 10:03:02.079 234819 DEBUG oslo.privsep.daemon [-] privsep: reply[37bdd7a2-7684-4a40-bf27-39fb25df1858]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 10:03:02 compute-1 NetworkManager[48870]: <info>  [1763978582.0929] manager: (tapd9ce2622-50): new Veth device (/org/freedesktop/NetworkManager/Devices/61)
Nov 24 10:03:02 compute-1 nova_compute[230010]: 2025-11-24 10:03:02.092 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:03:02 compute-1 systemd-udevd[242759]: Network interface NamePolicy= disabled on kernel command line.
Nov 24 10:03:02 compute-1 ovn_metadata_agent[142331]: 2025-11-24 10:03:02.092 234803 DEBUG oslo.privsep.daemon [-] privsep: reply[ec6a1acf-c2e0-40e4-ad25-3ee6a3ed7250]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 10:03:02 compute-1 ovn_controller[132966]: 2025-11-24T10:03:02Z|00091|binding|INFO|Setting lport 99ae7646-7560-4043-bead-b1665083257c ovn-installed in OVS
Nov 24 10:03:02 compute-1 nova_compute[230010]: 2025-11-24 10:03:02.099 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:03:02 compute-1 ovn_metadata_agent[142331]: 2025-11-24 10:03:02.124 234819 DEBUG oslo.privsep.daemon [-] privsep: reply[8d591c01-ffc0-4c64-9738-2ec50c37a110]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 10:03:02 compute-1 ovn_metadata_agent[142331]: 2025-11-24 10:03:02.128 234819 DEBUG oslo.privsep.daemon [-] privsep: reply[0a53dfef-1d68-4531-98bd-665314dd694e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 10:03:02 compute-1 NetworkManager[48870]: <info>  [1763978582.1538] device (tapd9ce2622-50): carrier: link connected
Nov 24 10:03:02 compute-1 ovn_metadata_agent[142331]: 2025-11-24 10:03:02.158 234819 DEBUG oslo.privsep.daemon [-] privsep: reply[3d8bab8b-16b9-401b-a22a-d23eb322e1ea]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 10:03:02 compute-1 ovn_metadata_agent[142331]: 2025-11-24 10:03:02.185 234803 DEBUG oslo.privsep.daemon [-] privsep: reply[ded259bd-4cbc-43a8-b907-de8bd1992357]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd9ce2622-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:88:68:d4'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 29], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 452570, 'reachable_time': 33730, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 242788, 'error': None, 'target': 'ovnmeta-d9ce2622-5822-4ecf-9fb9-f5f15c8ea094', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 10:03:02 compute-1 ovn_metadata_agent[142331]: 2025-11-24 10:03:02.206 234803 DEBUG oslo.privsep.daemon [-] privsep: reply[3d153ff4-27f2-443d-8cb2-cfa8f2689be4]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe88:68d4'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 452570, 'tstamp': 452570}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 242789, 'error': None, 'target': 'ovnmeta-d9ce2622-5822-4ecf-9fb9-f5f15c8ea094', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 10:03:02 compute-1 ovn_metadata_agent[142331]: 2025-11-24 10:03:02.226 234803 DEBUG oslo.privsep.daemon [-] privsep: reply[7ac55f7e-3683-46a9-9975-8c60fc9a7a10]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd9ce2622-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:88:68:d4'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 29], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 452570, 'reachable_time': 33730, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 242790, 'error': None, 'target': 'ovnmeta-d9ce2622-5822-4ecf-9fb9-f5f15c8ea094', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
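
The two large privsep replies above are raw RTM_NEWLINK dumps for the veth leg inside the ovnmeta- namespace. The same link state can be read with pyroute2, the library underneath neutron's privileged ip_lib calls; a sketch against the names from the log:

    from pyroute2 import NetNS

    ns_name = 'ovnmeta-d9ce2622-5822-4ecf-9fb9-f5f15c8ea094'
    with NetNS(ns_name) as ns:
        idx = ns.link_lookup(ifname='tapd9ce2622-51')[0]
        link = ns.get_links(idx)[0]
        # IFLA_OPERSTATE / IFLA_ADDRESS are the fields dumped in the log
        print(link.get_attr('IFLA_OPERSTATE'), link.get_attr('IFLA_ADDRESS'))
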
Nov 24 10:03:02 compute-1 nova_compute[230010]: 2025-11-24 10:03:02.247 230014 DEBUG nova.compute.manager [req-cbea8a1e-7000-4466-9ebd-a28c87be622b req-e66a502a-ce2f-43ef-bc72-53223711dafc 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 16f34aac-788f-4079-9636-0db2c8de6422] Received event network-vif-plugged-99ae7646-7560-4043-bead-b1665083257c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 24 10:03:02 compute-1 nova_compute[230010]: 2025-11-24 10:03:02.247 230014 DEBUG oslo_concurrency.lockutils [req-cbea8a1e-7000-4466-9ebd-a28c87be622b req-e66a502a-ce2f-43ef-bc72-53223711dafc 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Acquiring lock "16f34aac-788f-4079-9636-0db2c8de6422-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 10:03:02 compute-1 nova_compute[230010]: 2025-11-24 10:03:02.248 230014 DEBUG oslo_concurrency.lockutils [req-cbea8a1e-7000-4466-9ebd-a28c87be622b req-e66a502a-ce2f-43ef-bc72-53223711dafc 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Lock "16f34aac-788f-4079-9636-0db2c8de6422-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 10:03:02 compute-1 nova_compute[230010]: 2025-11-24 10:03:02.248 230014 DEBUG oslo_concurrency.lockutils [req-cbea8a1e-7000-4466-9ebd-a28c87be622b req-e66a502a-ce2f-43ef-bc72-53223711dafc 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Lock "16f34aac-788f-4079-9636-0db2c8de6422-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 10:03:02 compute-1 nova_compute[230010]: 2025-11-24 10:03:02.248 230014 DEBUG nova.compute.manager [req-cbea8a1e-7000-4466-9ebd-a28c87be622b req-e66a502a-ce2f-43ef-bc72-53223711dafc 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 16f34aac-788f-4079-9636-0db2c8de6422] Processing event network-vif-plugged-99ae7646-7560-4043-bead-b1665083257c _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 24 10:03:02 compute-1 ovn_metadata_agent[142331]: 2025-11-24 10:03:02.269 234803 DEBUG oslo.privsep.daemon [-] privsep: reply[12d1f13d-dbef-4cdc-87a4-a5b9d5a0ff78]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 10:03:02 compute-1 ovn_metadata_agent[142331]: 2025-11-24 10:03:02.344 234803 DEBUG oslo.privsep.daemon [-] privsep: reply[b2213430-6c00-45b2-af14-214ae994bf24]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 10:03:02 compute-1 ovn_metadata_agent[142331]: 2025-11-24 10:03:02.346 142336 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd9ce2622-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 24 10:03:02 compute-1 ovn_metadata_agent[142331]: 2025-11-24 10:03:02.347 142336 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 24 10:03:02 compute-1 ovn_metadata_agent[142331]: 2025-11-24 10:03:02.348 142336 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd9ce2622-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 24 10:03:02 compute-1 nova_compute[230010]: 2025-11-24 10:03:02.350 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:03:02 compute-1 NetworkManager[48870]: <info>  [1763978582.3509] manager: (tapd9ce2622-50): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/62)
Nov 24 10:03:02 compute-1 kernel: tapd9ce2622-50: entered promiscuous mode
Nov 24 10:03:02 compute-1 nova_compute[230010]: 2025-11-24 10:03:02.353 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:03:02 compute-1 ovn_metadata_agent[142331]: 2025-11-24 10:03:02.355 142336 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapd9ce2622-50, col_values=(('external_ids', {'iface-id': '7ff70316-0c3c-4814-add9-f5919c7adc2b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 24 10:03:02 compute-1 nova_compute[230010]: 2025-11-24 10:03:02.356 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:03:02 compute-1 ovn_controller[132966]: 2025-11-24T10:03:02Z|00092|binding|INFO|Releasing lport 7ff70316-0c3c-4814-add9-f5919c7adc2b from this chassis (sb_readonly=0)
Nov 24 10:03:02 compute-1 nova_compute[230010]: 2025-11-24 10:03:02.369 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:03:02 compute-1 ovn_metadata_agent[142331]: 2025-11-24 10:03:02.372 142336 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/d9ce2622-5822-4ecf-9fb9-f5f15c8ea094.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/d9ce2622-5822-4ecf-9fb9-f5f15c8ea094.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 24 10:03:02 compute-1 ovn_metadata_agent[142331]: 2025-11-24 10:03:02.373 234803 DEBUG oslo.privsep.daemon [-] privsep: reply[9b13016a-696b-49e8-b6ed-0ef0ac2fc912]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 10:03:02 compute-1 ovn_metadata_agent[142331]: 2025-11-24 10:03:02.375 142336 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 24 10:03:02 compute-1 ovn_metadata_agent[142331]: global
Nov 24 10:03:02 compute-1 ovn_metadata_agent[142331]:     log         /dev/log local0 debug
Nov 24 10:03:02 compute-1 ovn_metadata_agent[142331]:     log-tag     haproxy-metadata-proxy-d9ce2622-5822-4ecf-9fb9-f5f15c8ea094
Nov 24 10:03:02 compute-1 ovn_metadata_agent[142331]:     user        root
Nov 24 10:03:02 compute-1 ovn_metadata_agent[142331]:     group       root
Nov 24 10:03:02 compute-1 ovn_metadata_agent[142331]:     maxconn     1024
Nov 24 10:03:02 compute-1 ovn_metadata_agent[142331]:     pidfile     /var/lib/neutron/external/pids/d9ce2622-5822-4ecf-9fb9-f5f15c8ea094.pid.haproxy
Nov 24 10:03:02 compute-1 ovn_metadata_agent[142331]:     daemon
Nov 24 10:03:02 compute-1 ovn_metadata_agent[142331]: 
Nov 24 10:03:02 compute-1 ovn_metadata_agent[142331]: defaults
Nov 24 10:03:02 compute-1 ovn_metadata_agent[142331]:     log global
Nov 24 10:03:02 compute-1 ovn_metadata_agent[142331]:     mode http
Nov 24 10:03:02 compute-1 ovn_metadata_agent[142331]:     option httplog
Nov 24 10:03:02 compute-1 ovn_metadata_agent[142331]:     option dontlognull
Nov 24 10:03:02 compute-1 ovn_metadata_agent[142331]:     option http-server-close
Nov 24 10:03:02 compute-1 ovn_metadata_agent[142331]:     option forwardfor
Nov 24 10:03:02 compute-1 ovn_metadata_agent[142331]:     retries                 3
Nov 24 10:03:02 compute-1 ovn_metadata_agent[142331]:     timeout http-request    30s
Nov 24 10:03:02 compute-1 ovn_metadata_agent[142331]:     timeout connect         30s
Nov 24 10:03:02 compute-1 ovn_metadata_agent[142331]:     timeout client          32s
Nov 24 10:03:02 compute-1 ovn_metadata_agent[142331]:     timeout server          32s
Nov 24 10:03:02 compute-1 ovn_metadata_agent[142331]:     timeout http-keep-alive 30s
Nov 24 10:03:02 compute-1 ovn_metadata_agent[142331]: 
Nov 24 10:03:02 compute-1 ovn_metadata_agent[142331]: 
Nov 24 10:03:02 compute-1 ovn_metadata_agent[142331]: listen listener
Nov 24 10:03:02 compute-1 ovn_metadata_agent[142331]:     bind 169.254.169.254:80
Nov 24 10:03:02 compute-1 ovn_metadata_agent[142331]:     server metadata /var/lib/neutron/metadata_proxy
Nov 24 10:03:02 compute-1 ovn_metadata_agent[142331]:     http-request add-header X-OVN-Network-ID d9ce2622-5822-4ecf-9fb9-f5f15c8ea094
Nov 24 10:03:02 compute-1 ovn_metadata_agent[142331]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 24 10:03:02 compute-1 ovn_metadata_agent[142331]: 2025-11-24 10:03:02.376 142336 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-d9ce2622-5822-4ecf-9fb9-f5f15c8ea094', 'env', 'PROCESS_TAG=haproxy-d9ce2622-5822-4ecf-9fb9-f5f15c8ea094', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/d9ce2622-5822-4ecf-9fb9-f5f15c8ea094.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
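
The rendered haproxy configuration above binds 169.254.169.254:80 inside the per-network namespace, proxies requests to the metadata UNIX socket, and tags them with X-OVN-Network-ID. A sketch of the launch step that follows, shelling out the way the logged command line does (rootwrap and the PROCESS_TAG environment variable omitted):

    import subprocess

    def start_metadata_proxy(network_id, conf_path):
        # run haproxy inside the per-network metadata namespace
        ns = 'ovnmeta-%s' % network_id
        subprocess.run(['ip', 'netns', 'exec', ns, 'haproxy', '-f', conf_path],
                       check=True)

    start_metadata_proxy(
        'd9ce2622-5822-4ecf-9fb9-f5f15c8ea094',
        '/var/lib/neutron/ovn-metadata-proxy/'
        'd9ce2622-5822-4ecf-9fb9-f5f15c8ea094.conf')
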
Nov 24 10:03:02 compute-1 ceph-mon[80009]: pgmap v1088: 353 pgs: 353 active+clean; 134 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.9 MiB/s wr, 95 op/s
Nov 24 10:03:02 compute-1 ceph-mon[80009]: from='client.? 192.168.122.10:0/865843196' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 24 10:03:02 compute-1 ceph-mon[80009]: from='client.? 192.168.122.10:0/865843196' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 24 10:03:02 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:03:02 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:03:02 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:03:02.706 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:03:02 compute-1 podman[242822]: 2025-11-24 10:03:02.776627853 +0000 UTC m=+0.053649435 container create 2204afbd1ff852b76f946ef3662389689dd4b421d38aa911c9d6ce2ddd80afc1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d9ce2622-5822-4ecf-9fb9-f5f15c8ea094, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 24 10:03:02 compute-1 systemd[1]: Started libpod-conmon-2204afbd1ff852b76f946ef3662389689dd4b421d38aa911c9d6ce2ddd80afc1.scope.
Nov 24 10:03:02 compute-1 podman[242822]: 2025-11-24 10:03:02.748071974 +0000 UTC m=+0.025093576 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 24 10:03:02 compute-1 systemd[1]: Started libcrun container.
Nov 24 10:03:02 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2695d8c7141ff441362d396fc0649dcdddbdcd12afc2cf9c7f37256879ce4706/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 24 10:03:02 compute-1 podman[242822]: 2025-11-24 10:03:02.879651566 +0000 UTC m=+0.156673178 container init 2204afbd1ff852b76f946ef3662389689dd4b421d38aa911c9d6ce2ddd80afc1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d9ce2622-5822-4ecf-9fb9-f5f15c8ea094, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118)
Nov 24 10:03:02 compute-1 podman[242822]: 2025-11-24 10:03:02.884631537 +0000 UTC m=+0.161653109 container start 2204afbd1ff852b76f946ef3662389689dd4b421d38aa911c9d6ce2ddd80afc1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d9ce2622-5822-4ecf-9fb9-f5f15c8ea094, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2)
Nov 24 10:03:02 compute-1 neutron-haproxy-ovnmeta-d9ce2622-5822-4ecf-9fb9-f5f15c8ea094[242838]: [NOTICE]   (242842) : New worker (242844) forked
Nov 24 10:03:02 compute-1 neutron-haproxy-ovnmeta-d9ce2622-5822-4ecf-9fb9-f5f15c8ea094[242838]: [NOTICE]   (242842) : Loading success.
Nov 24 10:03:03 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:03:03 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:03:03 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:03:03.285 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:03:03 compute-1 nova_compute[230010]: 2025-11-24 10:03:03.476 230014 DEBUG nova.virt.driver [None req-7781f489-90a3-45f2-98f2-bf7ccb5c1239 - - - - - -] Emitting event <LifecycleEvent: 1763978583.4758582, 16f34aac-788f-4079-9636-0db2c8de6422 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 24 10:03:03 compute-1 nova_compute[230010]: 2025-11-24 10:03:03.477 230014 INFO nova.compute.manager [None req-7781f489-90a3-45f2-98f2-bf7ccb5c1239 - - - - - -] [instance: 16f34aac-788f-4079-9636-0db2c8de6422] VM Started (Lifecycle Event)
Nov 24 10:03:03 compute-1 nova_compute[230010]: 2025-11-24 10:03:03.479 230014 DEBUG nova.compute.manager [None req-86b4efd8-e845-47b8-839d-e817d119699f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 16f34aac-788f-4079-9636-0db2c8de6422] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 24 10:03:03 compute-1 nova_compute[230010]: 2025-11-24 10:03:03.483 230014 DEBUG nova.virt.libvirt.driver [None req-86b4efd8-e845-47b8-839d-e817d119699f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 16f34aac-788f-4079-9636-0db2c8de6422] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 24 10:03:03 compute-1 nova_compute[230010]: 2025-11-24 10:03:03.486 230014 INFO nova.virt.libvirt.driver [-] [instance: 16f34aac-788f-4079-9636-0db2c8de6422] Instance spawned successfully.
Nov 24 10:03:03 compute-1 nova_compute[230010]: 2025-11-24 10:03:03.486 230014 DEBUG nova.virt.libvirt.driver [None req-86b4efd8-e845-47b8-839d-e817d119699f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 16f34aac-788f-4079-9636-0db2c8de6422] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 24 10:03:03 compute-1 nova_compute[230010]: 2025-11-24 10:03:03.501 230014 DEBUG nova.compute.manager [None req-7781f489-90a3-45f2-98f2-bf7ccb5c1239 - - - - - -] [instance: 16f34aac-788f-4079-9636-0db2c8de6422] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 24 10:03:03 compute-1 nova_compute[230010]: 2025-11-24 10:03:03.504 230014 DEBUG nova.compute.manager [None req-7781f489-90a3-45f2-98f2-bf7ccb5c1239 - - - - - -] [instance: 16f34aac-788f-4079-9636-0db2c8de6422] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 24 10:03:03 compute-1 nova_compute[230010]: 2025-11-24 10:03:03.512 230014 DEBUG nova.virt.libvirt.driver [None req-86b4efd8-e845-47b8-839d-e817d119699f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 16f34aac-788f-4079-9636-0db2c8de6422] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 24 10:03:03 compute-1 nova_compute[230010]: 2025-11-24 10:03:03.512 230014 DEBUG nova.virt.libvirt.driver [None req-86b4efd8-e845-47b8-839d-e817d119699f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 16f34aac-788f-4079-9636-0db2c8de6422] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 24 10:03:03 compute-1 nova_compute[230010]: 2025-11-24 10:03:03.513 230014 DEBUG nova.virt.libvirt.driver [None req-86b4efd8-e845-47b8-839d-e817d119699f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 16f34aac-788f-4079-9636-0db2c8de6422] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 24 10:03:03 compute-1 nova_compute[230010]: 2025-11-24 10:03:03.513 230014 DEBUG nova.virt.libvirt.driver [None req-86b4efd8-e845-47b8-839d-e817d119699f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 16f34aac-788f-4079-9636-0db2c8de6422] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 24 10:03:03 compute-1 nova_compute[230010]: 2025-11-24 10:03:03.514 230014 DEBUG nova.virt.libvirt.driver [None req-86b4efd8-e845-47b8-839d-e817d119699f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 16f34aac-788f-4079-9636-0db2c8de6422] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 24 10:03:03 compute-1 nova_compute[230010]: 2025-11-24 10:03:03.514 230014 DEBUG nova.virt.libvirt.driver [None req-86b4efd8-e845-47b8-839d-e817d119699f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 16f34aac-788f-4079-9636-0db2c8de6422] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 24 10:03:03 compute-1 nova_compute[230010]: 2025-11-24 10:03:03.521 230014 INFO nova.compute.manager [None req-7781f489-90a3-45f2-98f2-bf7ccb5c1239 - - - - - -] [instance: 16f34aac-788f-4079-9636-0db2c8de6422] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 24 10:03:03 compute-1 nova_compute[230010]: 2025-11-24 10:03:03.521 230014 DEBUG nova.virt.driver [None req-7781f489-90a3-45f2-98f2-bf7ccb5c1239 - - - - - -] Emitting event <LifecycleEvent: 1763978583.476001, 16f34aac-788f-4079-9636-0db2c8de6422 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 24 10:03:03 compute-1 nova_compute[230010]: 2025-11-24 10:03:03.521 230014 INFO nova.compute.manager [None req-7781f489-90a3-45f2-98f2-bf7ccb5c1239 - - - - - -] [instance: 16f34aac-788f-4079-9636-0db2c8de6422] VM Paused (Lifecycle Event)
Nov 24 10:03:03 compute-1 nova_compute[230010]: 2025-11-24 10:03:03.547 230014 DEBUG nova.compute.manager [None req-7781f489-90a3-45f2-98f2-bf7ccb5c1239 - - - - - -] [instance: 16f34aac-788f-4079-9636-0db2c8de6422] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 24 10:03:03 compute-1 nova_compute[230010]: 2025-11-24 10:03:03.552 230014 DEBUG nova.virt.driver [None req-7781f489-90a3-45f2-98f2-bf7ccb5c1239 - - - - - -] Emitting event <LifecycleEvent: 1763978583.4820945, 16f34aac-788f-4079-9636-0db2c8de6422 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 24 10:03:03 compute-1 nova_compute[230010]: 2025-11-24 10:03:03.552 230014 INFO nova.compute.manager [None req-7781f489-90a3-45f2-98f2-bf7ccb5c1239 - - - - - -] [instance: 16f34aac-788f-4079-9636-0db2c8de6422] VM Resumed (Lifecycle Event)
Nov 24 10:03:03 compute-1 nova_compute[230010]: 2025-11-24 10:03:03.570 230014 INFO nova.compute.manager [None req-86b4efd8-e845-47b8-839d-e817d119699f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 16f34aac-788f-4079-9636-0db2c8de6422] Took 9.69 seconds to spawn the instance on the hypervisor.
Nov 24 10:03:03 compute-1 nova_compute[230010]: 2025-11-24 10:03:03.571 230014 DEBUG nova.compute.manager [None req-86b4efd8-e845-47b8-839d-e817d119699f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 16f34aac-788f-4079-9636-0db2c8de6422] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 24 10:03:03 compute-1 nova_compute[230010]: 2025-11-24 10:03:03.573 230014 DEBUG nova.compute.manager [None req-7781f489-90a3-45f2-98f2-bf7ccb5c1239 - - - - - -] [instance: 16f34aac-788f-4079-9636-0db2c8de6422] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 24 10:03:03 compute-1 nova_compute[230010]: 2025-11-24 10:03:03.581 230014 DEBUG nova.compute.manager [None req-7781f489-90a3-45f2-98f2-bf7ccb5c1239 - - - - - -] [instance: 16f34aac-788f-4079-9636-0db2c8de6422] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 24 10:03:03 compute-1 nova_compute[230010]: 2025-11-24 10:03:03.608 230014 INFO nova.compute.manager [None req-7781f489-90a3-45f2-98f2-bf7ccb5c1239 - - - - - -] [instance: 16f34aac-788f-4079-9636-0db2c8de6422] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 24 10:03:03 compute-1 nova_compute[230010]: 2025-11-24 10:03:03.643 230014 INFO nova.compute.manager [None req-86b4efd8-e845-47b8-839d-e817d119699f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 16f34aac-788f-4079-9636-0db2c8de6422] Took 10.70 seconds to build instance.
Nov 24 10:03:03 compute-1 nova_compute[230010]: 2025-11-24 10:03:03.655 230014 DEBUG oslo_concurrency.lockutils [None req-86b4efd8-e845-47b8-839d-e817d119699f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lock "16f34aac-788f-4079-9636-0db2c8de6422" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.778s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 10:03:04 compute-1 nova_compute[230010]: 2025-11-24 10:03:04.323 230014 DEBUG nova.compute.manager [req-bb6482ee-29f4-40e0-878d-38f243a0e601 req-824ed1b5-4814-4d36-a892-91200dc0b833 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 16f34aac-788f-4079-9636-0db2c8de6422] Received event network-vif-plugged-99ae7646-7560-4043-bead-b1665083257c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 24 10:03:04 compute-1 nova_compute[230010]: 2025-11-24 10:03:04.323 230014 DEBUG oslo_concurrency.lockutils [req-bb6482ee-29f4-40e0-878d-38f243a0e601 req-824ed1b5-4814-4d36-a892-91200dc0b833 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Acquiring lock "16f34aac-788f-4079-9636-0db2c8de6422-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 10:03:04 compute-1 nova_compute[230010]: 2025-11-24 10:03:04.323 230014 DEBUG oslo_concurrency.lockutils [req-bb6482ee-29f4-40e0-878d-38f243a0e601 req-824ed1b5-4814-4d36-a892-91200dc0b833 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Lock "16f34aac-788f-4079-9636-0db2c8de6422-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 10:03:04 compute-1 nova_compute[230010]: 2025-11-24 10:03:04.323 230014 DEBUG oslo_concurrency.lockutils [req-bb6482ee-29f4-40e0-878d-38f243a0e601 req-824ed1b5-4814-4d36-a892-91200dc0b833 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Lock "16f34aac-788f-4079-9636-0db2c8de6422-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 10:03:04 compute-1 nova_compute[230010]: 2025-11-24 10:03:04.324 230014 DEBUG nova.compute.manager [req-bb6482ee-29f4-40e0-878d-38f243a0e601 req-824ed1b5-4814-4d36-a892-91200dc0b833 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 16f34aac-788f-4079-9636-0db2c8de6422] No waiting events found dispatching network-vif-plugged-99ae7646-7560-4043-bead-b1665083257c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 24 10:03:04 compute-1 nova_compute[230010]: 2025-11-24 10:03:04.324 230014 WARNING nova.compute.manager [req-bb6482ee-29f4-40e0-878d-38f243a0e601 req-824ed1b5-4814-4d36-a892-91200dc0b833 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 16f34aac-788f-4079-9636-0db2c8de6422] Received unexpected event network-vif-plugged-99ae7646-7560-4043-bead-b1665083257c for instance with vm_state active and task_state None.
Nov 24 10:03:04 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:03:04 compute-1 ceph-mon[80009]: pgmap v1089: 353 pgs: 353 active+clean; 167 MiB data, 349 MiB used, 60 GiB / 60 GiB avail; 369 KiB/s rd, 4.2 MiB/s wr, 98 op/s
Nov 24 10:03:04 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:03:04 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:03:04 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:03:04.708 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:03:05 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:03:05 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:03:05 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:03:05.290 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:03:05 compute-1 nova_compute[230010]: 2025-11-24 10:03:05.966 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:03:06 compute-1 nova_compute[230010]: 2025-11-24 10:03:06.184 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:03:06 compute-1 ceph-mon[80009]: pgmap v1090: 353 pgs: 353 active+clean; 167 MiB data, 349 MiB used, 60 GiB / 60 GiB avail; 345 KiB/s rd, 3.9 MiB/s wr, 91 op/s
Nov 24 10:03:06 compute-1 nova_compute[230010]: 2025-11-24 10:03:06.680 230014 DEBUG nova.compute.manager [req-872716c2-50d9-4372-b648-6f0eae46be06 req-aad0e74e-3cca-41e8-8691-b5b1e111740c 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 16f34aac-788f-4079-9636-0db2c8de6422] Received event network-changed-99ae7646-7560-4043-bead-b1665083257c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 24 10:03:06 compute-1 nova_compute[230010]: 2025-11-24 10:03:06.681 230014 DEBUG nova.compute.manager [req-872716c2-50d9-4372-b648-6f0eae46be06 req-aad0e74e-3cca-41e8-8691-b5b1e111740c 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 16f34aac-788f-4079-9636-0db2c8de6422] Refreshing instance network info cache due to event network-changed-99ae7646-7560-4043-bead-b1665083257c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 24 10:03:06 compute-1 nova_compute[230010]: 2025-11-24 10:03:06.681 230014 DEBUG oslo_concurrency.lockutils [req-872716c2-50d9-4372-b648-6f0eae46be06 req-aad0e74e-3cca-41e8-8691-b5b1e111740c 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Acquiring lock "refresh_cache-16f34aac-788f-4079-9636-0db2c8de6422" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 24 10:03:06 compute-1 nova_compute[230010]: 2025-11-24 10:03:06.681 230014 DEBUG oslo_concurrency.lockutils [req-872716c2-50d9-4372-b648-6f0eae46be06 req-aad0e74e-3cca-41e8-8691-b5b1e111740c 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Acquired lock "refresh_cache-16f34aac-788f-4079-9636-0db2c8de6422" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 24 10:03:06 compute-1 nova_compute[230010]: 2025-11-24 10:03:06.681 230014 DEBUG nova.network.neutron [req-872716c2-50d9-4372-b648-6f0eae46be06 req-aad0e74e-3cca-41e8-8691-b5b1e111740c 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 16f34aac-788f-4079-9636-0db2c8de6422] Refreshing network info cache for port 99ae7646-7560-4043-bead-b1665083257c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 24 10:03:06 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:03:06 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 10:03:06 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:03:06.712 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 10:03:07 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:03:07 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 10:03:07 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:03:07.293 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 10:03:07 compute-1 nova_compute[230010]: 2025-11-24 10:03:07.727 230014 DEBUG nova.network.neutron [req-872716c2-50d9-4372-b648-6f0eae46be06 req-aad0e74e-3cca-41e8-8691-b5b1e111740c 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 16f34aac-788f-4079-9636-0db2c8de6422] Updated VIF entry in instance network info cache for port 99ae7646-7560-4043-bead-b1665083257c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 24 10:03:07 compute-1 nova_compute[230010]: 2025-11-24 10:03:07.728 230014 DEBUG nova.network.neutron [req-872716c2-50d9-4372-b648-6f0eae46be06 req-aad0e74e-3cca-41e8-8691-b5b1e111740c 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 16f34aac-788f-4079-9636-0db2c8de6422] Updating instance_info_cache with network_info: [{"id": "99ae7646-7560-4043-bead-b1665083257c", "address": "fa:16:3e:30:f8:b9", "network": {"id": "d9ce2622-5822-4ecf-9fb9-f5f15c8ea094", "bridge": "br-int", "label": "tempest-network-smoke--73093411", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.213", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94d069fc040647d5a6e54894eec915fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap99ae7646-75", "ovs_interfaceid": "99ae7646-7560-4043-bead-b1665083257c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 24 10:03:07 compute-1 nova_compute[230010]: 2025-11-24 10:03:07.752 230014 DEBUG oslo_concurrency.lockutils [req-872716c2-50d9-4372-b648-6f0eae46be06 req-aad0e74e-3cca-41e8-8691-b5b1e111740c 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Releasing lock "refresh_cache-16f34aac-788f-4079-9636-0db2c8de6422" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 24 10:03:08 compute-1 sudo[242898]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 10:03:08 compute-1 sudo[242898]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 10:03:08 compute-1 sudo[242898]: pam_unix(sudo:session): session closed for user root
Nov 24 10:03:08 compute-1 ceph-mon[80009]: pgmap v1091: 353 pgs: 353 active+clean; 167 MiB data, 349 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 3.9 MiB/s wr, 166 op/s
Nov 24 10:03:08 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:03:08 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:03:08 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:03:08.716 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:03:09 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:03:09 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:03:09 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:03:09.297 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:03:09 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:03:09 compute-1 ceph-mon[80009]: pgmap v1092: 353 pgs: 353 active+clean; 167 MiB data, 349 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.2 MiB/s wr, 138 op/s
Nov 24 10:03:10 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:03:10 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:03:10 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:03:10.719 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:03:11 compute-1 nova_compute[230010]: 2025-11-24 10:03:11.001 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:03:11 compute-1 nova_compute[230010]: 2025-11-24 10:03:11.190 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:03:11 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:03:11 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:03:11 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:03:11.300 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:03:12 compute-1 ceph-mon[80009]: pgmap v1093: 353 pgs: 353 active+clean; 167 MiB data, 349 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.2 MiB/s wr, 138 op/s
Nov 24 10:03:12 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:03:12 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:03:12 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:03:12.721 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:03:13 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:03:13 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:03:13 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:03:13.303 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:03:13 compute-1 podman[242926]: 2025-11-24 10:03:13.34649471 +0000 UTC m=+0.074519196 container health_status 16798e42088c21440960ac0ba4f86339f9edab5c078277a1dcf4fb01c220329c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3)
Nov 24 10:03:14 compute-1 ceph-mon[80009]: pgmap v1094: 353 pgs: 353 active+clean; 167 MiB data, 349 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.2 MiB/s wr, 139 op/s
Nov 24 10:03:14 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:03:14 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:03:14 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:03:14 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:03:14.723 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:03:15 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:03:15 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:03:15 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:03:15.306 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:03:15 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 10:03:15 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 10:03:16 compute-1 nova_compute[230010]: 2025-11-24 10:03:16.004 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:03:16 compute-1 ceph-mon[80009]: pgmap v1095: 353 pgs: 353 active+clean; 167 MiB data, 349 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 25 KiB/s wr, 75 op/s
Nov 24 10:03:16 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 10:03:16 compute-1 nova_compute[230010]: 2025-11-24 10:03:16.196 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:03:16 compute-1 ovn_controller[132966]: 2025-11-24T10:03:16Z|00014|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:30:f8:b9 10.100.0.6
Nov 24 10:03:16 compute-1 ovn_controller[132966]: 2025-11-24T10:03:16Z|00015|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:30:f8:b9 10.100.0.6
Nov 24 10:03:16 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:03:16 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:03:16 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:03:16.725 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:03:17 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:03:17 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:03:17 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:03:17.309 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:03:17 compute-1 nova_compute[230010]: 2025-11-24 10:03:17.775 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 10:03:18 compute-1 ceph-mon[80009]: pgmap v1096: 353 pgs: 353 active+clean; 188 MiB data, 364 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 2.1 MiB/s wr, 119 op/s
Nov 24 10:03:18 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:03:18 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:03:18 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:03:18.728 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:03:18 compute-1 nova_compute[230010]: 2025-11-24 10:03:18.766 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 10:03:19 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:03:19 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:03:19 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:03:19.312 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:03:19 compute-1 podman[242950]: 2025-11-24 10:03:19.429205396 +0000 UTC m=+0.162987843 container health_status c4a7fece2a8abbd773f184380ebf0443df5298e1c3e7a580495db5c46749acd2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 24 10:03:19 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:03:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 10:03:20.066 142336 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 10:03:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 10:03:20.067 142336 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 10:03:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 10:03:20.067 142336 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 10:03:20 compute-1 ceph-mon[80009]: pgmap v1097: 353 pgs: 353 active+clean; 188 MiB data, 364 MiB used, 60 GiB / 60 GiB avail; 237 KiB/s rd, 2.0 MiB/s wr, 44 op/s
Nov 24 10:03:20 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:03:20 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:03:20 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:03:20.732 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:03:20 compute-1 nova_compute[230010]: 2025-11-24 10:03:20.765 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 10:03:20 compute-1 nova_compute[230010]: 2025-11-24 10:03:20.766 230014 DEBUG nova.compute.manager [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 24 10:03:21 compute-1 nova_compute[230010]: 2025-11-24 10:03:21.009 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:03:21 compute-1 nova_compute[230010]: 2025-11-24 10:03:21.200 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:03:21 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:03:21 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:03:21 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:03:21.315 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:03:22 compute-1 ceph-mon[80009]: pgmap v1098: 353 pgs: 353 active+clean; 188 MiB data, 364 MiB used, 60 GiB / 60 GiB avail; 237 KiB/s rd, 2.0 MiB/s wr, 44 op/s
Nov 24 10:03:22 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:03:22 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:03:22 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:03:22.734 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:03:22 compute-1 ovn_metadata_agent[142331]: 2025-11-24 10:03:22.898 142336 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=12, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '86:13:51', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '4e:f0:a8:6f:5e:1b'}, ipsec=False) old=SB_Global(nb_cfg=11) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 24 10:03:22 compute-1 nova_compute[230010]: 2025-11-24 10:03:22.899 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:03:22 compute-1 ovn_metadata_agent[142331]: 2025-11-24 10:03:22.900 142336 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 24 10:03:23 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:03:23 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:03:23 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:03:23.317 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:03:23 compute-1 nova_compute[230010]: 2025-11-24 10:03:23.766 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 10:03:24 compute-1 ceph-mon[80009]: pgmap v1099: 353 pgs: 353 active+clean; 200 MiB data, 375 MiB used, 60 GiB / 60 GiB avail; 391 KiB/s rd, 2.1 MiB/s wr, 67 op/s
Nov 24 10:03:24 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:03:24 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:03:24 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:03:24 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:03:24.737 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:03:24 compute-1 nova_compute[230010]: 2025-11-24 10:03:24.760 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 10:03:24 compute-1 nova_compute[230010]: 2025-11-24 10:03:24.764 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 10:03:24 compute-1 nova_compute[230010]: 2025-11-24 10:03:24.790 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 10:03:24 compute-1 nova_compute[230010]: 2025-11-24 10:03:24.791 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 10:03:24 compute-1 nova_compute[230010]: 2025-11-24 10:03:24.791 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 10:03:24 compute-1 nova_compute[230010]: 2025-11-24 10:03:24.791 230014 DEBUG nova.compute.resource_tracker [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 24 10:03:24 compute-1 nova_compute[230010]: 2025-11-24 10:03:24.792 230014 DEBUG oslo_concurrency.processutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 10:03:25 compute-1 ceph-mon[80009]: from='client.? 192.168.122.102:0/2656099280' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 10:03:25 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 24 10:03:25 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1881790664' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 10:03:25 compute-1 nova_compute[230010]: 2025-11-24 10:03:25.225 230014 DEBUG oslo_concurrency.processutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.433s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 10:03:25 compute-1 nova_compute[230010]: 2025-11-24 10:03:25.289 230014 DEBUG nova.virt.libvirt.driver [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] skipping disk for instance-0000000c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 24 10:03:25 compute-1 nova_compute[230010]: 2025-11-24 10:03:25.290 230014 DEBUG nova.virt.libvirt.driver [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] skipping disk for instance-0000000c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 24 10:03:25 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:03:25 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:03:25 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:03:25.320 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:03:25 compute-1 nova_compute[230010]: 2025-11-24 10:03:25.438 230014 WARNING nova.virt.libvirt.driver [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 24 10:03:25 compute-1 nova_compute[230010]: 2025-11-24 10:03:25.439 230014 DEBUG nova.compute.resource_tracker [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4767MB free_disk=59.89735412597656GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 24 10:03:25 compute-1 nova_compute[230010]: 2025-11-24 10:03:25.439 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 10:03:25 compute-1 nova_compute[230010]: 2025-11-24 10:03:25.440 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 10:03:25 compute-1 nova_compute[230010]: 2025-11-24 10:03:25.527 230014 DEBUG nova.compute.resource_tracker [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Instance 16f34aac-788f-4079-9636-0db2c8de6422 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 24 10:03:25 compute-1 nova_compute[230010]: 2025-11-24 10:03:25.527 230014 DEBUG nova.compute.resource_tracker [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 24 10:03:25 compute-1 nova_compute[230010]: 2025-11-24 10:03:25.527 230014 DEBUG nova.compute.resource_tracker [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 24 10:03:25 compute-1 nova_compute[230010]: 2025-11-24 10:03:25.568 230014 DEBUG oslo_concurrency.processutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 10:03:25 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 24 10:03:25 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1287250146' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 10:03:26 compute-1 nova_compute[230010]: 2025-11-24 10:03:26.009 230014 DEBUG oslo_concurrency.processutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.442s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 10:03:26 compute-1 nova_compute[230010]: 2025-11-24 10:03:26.012 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:03:26 compute-1 nova_compute[230010]: 2025-11-24 10:03:26.016 230014 DEBUG nova.compute.provider_tree [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Inventory has not changed in ProviderTree for provider: 1b7b0f22-dba8-42a8-9de3-763c9152946e update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 24 10:03:26 compute-1 nova_compute[230010]: 2025-11-24 10:03:26.037 230014 DEBUG nova.scheduler.client.report [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Inventory has not changed for provider 1b7b0f22-dba8-42a8-9de3-763c9152946e based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 24 10:03:26 compute-1 nova_compute[230010]: 2025-11-24 10:03:26.058 230014 DEBUG nova.compute.resource_tracker [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 24 10:03:26 compute-1 nova_compute[230010]: 2025-11-24 10:03:26.058 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.618s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 10:03:26 compute-1 ceph-mon[80009]: pgmap v1100: 353 pgs: 353 active+clean; 200 MiB data, 375 MiB used, 60 GiB / 60 GiB avail; 391 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Nov 24 10:03:26 compute-1 ceph-mon[80009]: from='client.? 192.168.122.101:0/1881790664' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 10:03:26 compute-1 ceph-mon[80009]: from='client.? 192.168.122.102:0/1244677551' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 10:03:26 compute-1 ceph-mon[80009]: from='client.? 192.168.122.101:0/1287250146' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 10:03:26 compute-1 nova_compute[230010]: 2025-11-24 10:03:26.201 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:03:26 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:03:26 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:03:26 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:03:26.739 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:03:27 compute-1 nova_compute[230010]: 2025-11-24 10:03:27.068 230014 DEBUG oslo_concurrency.lockutils [None req-0d78fac1-1bbd-42a4-ac82-95367a3d470a 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Acquiring lock "16f34aac-788f-4079-9636-0db2c8de6422" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 10:03:27 compute-1 nova_compute[230010]: 2025-11-24 10:03:27.068 230014 DEBUG oslo_concurrency.lockutils [None req-0d78fac1-1bbd-42a4-ac82-95367a3d470a 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lock "16f34aac-788f-4079-9636-0db2c8de6422" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 10:03:27 compute-1 nova_compute[230010]: 2025-11-24 10:03:27.069 230014 DEBUG oslo_concurrency.lockutils [None req-0d78fac1-1bbd-42a4-ac82-95367a3d470a 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Acquiring lock "16f34aac-788f-4079-9636-0db2c8de6422-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 10:03:27 compute-1 nova_compute[230010]: 2025-11-24 10:03:27.069 230014 DEBUG oslo_concurrency.lockutils [None req-0d78fac1-1bbd-42a4-ac82-95367a3d470a 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lock "16f34aac-788f-4079-9636-0db2c8de6422-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 10:03:27 compute-1 nova_compute[230010]: 2025-11-24 10:03:27.069 230014 DEBUG oslo_concurrency.lockutils [None req-0d78fac1-1bbd-42a4-ac82-95367a3d470a 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lock "16f34aac-788f-4079-9636-0db2c8de6422-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 10:03:27 compute-1 nova_compute[230010]: 2025-11-24 10:03:27.070 230014 INFO nova.compute.manager [None req-0d78fac1-1bbd-42a4-ac82-95367a3d470a 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 16f34aac-788f-4079-9636-0db2c8de6422] Terminating instance
Nov 24 10:03:27 compute-1 nova_compute[230010]: 2025-11-24 10:03:27.071 230014 DEBUG nova.compute.manager [None req-0d78fac1-1bbd-42a4-ac82-95367a3d470a 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 16f34aac-788f-4079-9636-0db2c8de6422] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 24 10:03:27 compute-1 kernel: tap99ae7646-75 (unregistering): left promiscuous mode
Nov 24 10:03:27 compute-1 NetworkManager[48870]: <info>  [1763978607.1266] device (tap99ae7646-75): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 24 10:03:27 compute-1 ovn_controller[132966]: 2025-11-24T10:03:27Z|00093|binding|INFO|Releasing lport 99ae7646-7560-4043-bead-b1665083257c from this chassis (sb_readonly=0)
Nov 24 10:03:27 compute-1 ovn_controller[132966]: 2025-11-24T10:03:27Z|00094|binding|INFO|Setting lport 99ae7646-7560-4043-bead-b1665083257c down in Southbound
Nov 24 10:03:27 compute-1 ovn_controller[132966]: 2025-11-24T10:03:27Z|00095|binding|INFO|Removing iface tap99ae7646-75 ovn-installed in OVS
Nov 24 10:03:27 compute-1 nova_compute[230010]: 2025-11-24 10:03:27.135 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:03:27 compute-1 ovn_metadata_agent[142331]: 2025-11-24 10:03:27.144 142336 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:30:f8:b9 10.100.0.6'], port_security=['fa:16:3e:30:f8:b9 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '16f34aac-788f-4079-9636-0db2c8de6422', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d9ce2622-5822-4ecf-9fb9-f5f15c8ea094', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '94d069fc040647d5a6e54894eec915fe', 'neutron:revision_number': '4', 'neutron:security_group_ids': '0394b1e1-eb4e-4c88-8aad-cca296ee6f3d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f5c42cc1-2181-41fb-bb98-22dec924e208, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5c78678ac0>], logical_port=99ae7646-7560-4043-bead-b1665083257c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5c78678ac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 24 10:03:27 compute-1 ovn_metadata_agent[142331]: 2025-11-24 10:03:27.145 142336 INFO neutron.agent.ovn.metadata.agent [-] Port 99ae7646-7560-4043-bead-b1665083257c in datapath d9ce2622-5822-4ecf-9fb9-f5f15c8ea094 unbound from our chassis
Nov 24 10:03:27 compute-1 ovn_metadata_agent[142331]: 2025-11-24 10:03:27.146 142336 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network d9ce2622-5822-4ecf-9fb9-f5f15c8ea094, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 24 10:03:27 compute-1 ovn_metadata_agent[142331]: 2025-11-24 10:03:27.148 234803 DEBUG oslo.privsep.daemon [-] privsep: reply[25420972-6e1d-4d5a-a0f9-d24d141558d9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 10:03:27 compute-1 ovn_metadata_agent[142331]: 2025-11-24 10:03:27.149 142336 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-d9ce2622-5822-4ecf-9fb9-f5f15c8ea094 namespace which is not needed anymore
Nov 24 10:03:27 compute-1 nova_compute[230010]: 2025-11-24 10:03:27.154 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:03:27 compute-1 systemd[1]: machine-qemu\x2d5\x2dinstance\x2d0000000c.scope: Deactivated successfully.
Nov 24 10:03:27 compute-1 systemd[1]: machine-qemu\x2d5\x2dinstance\x2d0000000c.scope: Consumed 15.782s CPU time.
Nov 24 10:03:27 compute-1 systemd-machined[193537]: Machine qemu-5-instance-0000000c terminated.
Nov 24 10:03:27 compute-1 podman[243025]: 2025-11-24 10:03:27.20514978 +0000 UTC m=+0.053933511 container health_status 6794d9d2f16f865643977633ea2bfec0506af6c4f15262c278b71a4c0754ea0b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent)
Nov 24 10:03:27 compute-1 neutron-haproxy-ovnmeta-d9ce2622-5822-4ecf-9fb9-f5f15c8ea094[242838]: [NOTICE]   (242842) : haproxy version is 2.8.14-c23fe91
Nov 24 10:03:27 compute-1 neutron-haproxy-ovnmeta-d9ce2622-5822-4ecf-9fb9-f5f15c8ea094[242838]: [NOTICE]   (242842) : path to executable is /usr/sbin/haproxy
Nov 24 10:03:27 compute-1 neutron-haproxy-ovnmeta-d9ce2622-5822-4ecf-9fb9-f5f15c8ea094[242838]: [WARNING]  (242842) : Exiting Master process...
Nov 24 10:03:27 compute-1 neutron-haproxy-ovnmeta-d9ce2622-5822-4ecf-9fb9-f5f15c8ea094[242838]: [ALERT]    (242842) : Current worker (242844) exited with code 143 (Terminated)
Nov 24 10:03:27 compute-1 neutron-haproxy-ovnmeta-d9ce2622-5822-4ecf-9fb9-f5f15c8ea094[242838]: [WARNING]  (242842) : All workers exited. Exiting... (0)
Nov 24 10:03:27 compute-1 systemd[1]: libpod-2204afbd1ff852b76f946ef3662389689dd4b421d38aa911c9d6ce2ddd80afc1.scope: Deactivated successfully.
Nov 24 10:03:27 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:03:27 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:03:27 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:03:27.323 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:03:27 compute-1 nova_compute[230010]: 2025-11-24 10:03:27.324 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:03:27 compute-1 podman[243067]: 2025-11-24 10:03:27.329960997 +0000 UTC m=+0.086154681 container died 2204afbd1ff852b76f946ef3662389689dd4b421d38aa911c9d6ce2ddd80afc1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d9ce2622-5822-4ecf-9fb9-f5f15c8ea094, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3)
Nov 24 10:03:27 compute-1 nova_compute[230010]: 2025-11-24 10:03:27.338 230014 INFO nova.virt.libvirt.driver [-] [instance: 16f34aac-788f-4079-9636-0db2c8de6422] Instance destroyed successfully.
Nov 24 10:03:27 compute-1 nova_compute[230010]: 2025-11-24 10:03:27.339 230014 DEBUG nova.objects.instance [None req-0d78fac1-1bbd-42a4-ac82-95367a3d470a 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lazy-loading 'resources' on Instance uuid 16f34aac-788f-4079-9636-0db2c8de6422 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 24 10:03:27 compute-1 nova_compute[230010]: 2025-11-24 10:03:27.348 230014 DEBUG nova.virt.libvirt.vif [None req-0d78fac1-1bbd-42a4-ac82-95367a3d470a 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-24T10:02:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-2027370088',display_name='tempest-TestNetworkBasicOps-server-2027370088',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-2027370088',id=12,image_ref='6ef14bdf-4f04-4400-8040-4409d9d5271e',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGm7yTWaQNPdMC8QFfBuLjQRH6ApcYu+qgaY7VksG3yV1HCE4jpliKx7D8r+sNe/kvB8dUvGyFVNy/wUcpBDiRvylUupCj2Y07y6yC0JXN3khCgh2GMBQWQ7Dhz5WIb2PQ==',key_name='tempest-TestNetworkBasicOps-2017683419',keypairs=<?>,launch_index=0,launched_at=2025-11-24T10:03:03Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='94d069fc040647d5a6e54894eec915fe',ramdisk_id='',reservation_id='r-jq31848e',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6ef14bdf-4f04-4400-8040-4409d9d5271e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1844071378',owner_user_name='tempest-TestNetworkBasicOps-1844071378-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-24T10:03:03Z,user_data=None,user_id='43f79ff3105e4372a3c095e8057d4f1f',uuid=16f34aac-788f-4079-9636-0db2c8de6422,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "99ae7646-7560-4043-bead-b1665083257c", "address": "fa:16:3e:30:f8:b9", "network": {"id": "d9ce2622-5822-4ecf-9fb9-f5f15c8ea094", "bridge": "br-int", "label": "tempest-network-smoke--73093411", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.213", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94d069fc040647d5a6e54894eec915fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap99ae7646-75", "ovs_interfaceid": "99ae7646-7560-4043-bead-b1665083257c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 24 10:03:27 compute-1 nova_compute[230010]: 2025-11-24 10:03:27.349 230014 DEBUG nova.network.os_vif_util [None req-0d78fac1-1bbd-42a4-ac82-95367a3d470a 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Converting VIF {"id": "99ae7646-7560-4043-bead-b1665083257c", "address": "fa:16:3e:30:f8:b9", "network": {"id": "d9ce2622-5822-4ecf-9fb9-f5f15c8ea094", "bridge": "br-int", "label": "tempest-network-smoke--73093411", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.213", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94d069fc040647d5a6e54894eec915fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap99ae7646-75", "ovs_interfaceid": "99ae7646-7560-4043-bead-b1665083257c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 24 10:03:27 compute-1 nova_compute[230010]: 2025-11-24 10:03:27.349 230014 DEBUG nova.network.os_vif_util [None req-0d78fac1-1bbd-42a4-ac82-95367a3d470a 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:30:f8:b9,bridge_name='br-int',has_traffic_filtering=True,id=99ae7646-7560-4043-bead-b1665083257c,network=Network(d9ce2622-5822-4ecf-9fb9-f5f15c8ea094),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap99ae7646-75') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 24 10:03:27 compute-1 nova_compute[230010]: 2025-11-24 10:03:27.350 230014 DEBUG os_vif [None req-0d78fac1-1bbd-42a4-ac82-95367a3d470a 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:30:f8:b9,bridge_name='br-int',has_traffic_filtering=True,id=99ae7646-7560-4043-bead-b1665083257c,network=Network(d9ce2622-5822-4ecf-9fb9-f5f15c8ea094),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap99ae7646-75') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 24 10:03:27 compute-1 nova_compute[230010]: 2025-11-24 10:03:27.351 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:03:27 compute-1 nova_compute[230010]: 2025-11-24 10:03:27.352 230014 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap99ae7646-75, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 24 10:03:27 compute-1 nova_compute[230010]: 2025-11-24 10:03:27.353 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:03:27 compute-1 nova_compute[230010]: 2025-11-24 10:03:27.355 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 24 10:03:27 compute-1 nova_compute[230010]: 2025-11-24 10:03:27.357 230014 INFO os_vif [None req-0d78fac1-1bbd-42a4-ac82-95367a3d470a 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:30:f8:b9,bridge_name='br-int',has_traffic_filtering=True,id=99ae7646-7560-4043-bead-b1665083257c,network=Network(d9ce2622-5822-4ecf-9fb9-f5f15c8ea094),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap99ae7646-75')
Nov 24 10:03:27 compute-1 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-2204afbd1ff852b76f946ef3662389689dd4b421d38aa911c9d6ce2ddd80afc1-userdata-shm.mount: Deactivated successfully.
Nov 24 10:03:27 compute-1 systemd[1]: var-lib-containers-storage-overlay-2695d8c7141ff441362d396fc0649dcdddbdcd12afc2cf9c7f37256879ce4706-merged.mount: Deactivated successfully.
Nov 24 10:03:27 compute-1 podman[243067]: 2025-11-24 10:03:27.386543932 +0000 UTC m=+0.142737636 container cleanup 2204afbd1ff852b76f946ef3662389689dd4b421d38aa911c9d6ce2ddd80afc1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d9ce2622-5822-4ecf-9fb9-f5f15c8ea094, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 24 10:03:27 compute-1 systemd[1]: libpod-conmon-2204afbd1ff852b76f946ef3662389689dd4b421d38aa911c9d6ce2ddd80afc1.scope: Deactivated successfully.
Nov 24 10:03:27 compute-1 podman[243122]: 2025-11-24 10:03:27.480867291 +0000 UTC m=+0.074148677 container remove 2204afbd1ff852b76f946ef3662389689dd4b421d38aa911c9d6ce2ddd80afc1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d9ce2622-5822-4ecf-9fb9-f5f15c8ea094, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 24 10:03:27 compute-1 ovn_metadata_agent[142331]: 2025-11-24 10:03:27.491 234803 DEBUG oslo.privsep.daemon [-] privsep: reply[3eca8511-8d76-4a09-9eb7-ef4cee05a633]: (4, ('Mon Nov 24 10:03:27 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-d9ce2622-5822-4ecf-9fb9-f5f15c8ea094 (2204afbd1ff852b76f946ef3662389689dd4b421d38aa911c9d6ce2ddd80afc1)\n2204afbd1ff852b76f946ef3662389689dd4b421d38aa911c9d6ce2ddd80afc1\nMon Nov 24 10:03:27 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-d9ce2622-5822-4ecf-9fb9-f5f15c8ea094 (2204afbd1ff852b76f946ef3662389689dd4b421d38aa911c9d6ce2ddd80afc1)\n2204afbd1ff852b76f946ef3662389689dd4b421d38aa911c9d6ce2ddd80afc1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 10:03:27 compute-1 ovn_metadata_agent[142331]: 2025-11-24 10:03:27.493 234803 DEBUG oslo.privsep.daemon [-] privsep: reply[b0111ab3-2100-4e50-86cd-58d5c697d17d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 10:03:27 compute-1 ovn_metadata_agent[142331]: 2025-11-24 10:03:27.494 142336 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd9ce2622-50, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 24 10:03:27 compute-1 kernel: tapd9ce2622-50: left promiscuous mode
Nov 24 10:03:27 compute-1 nova_compute[230010]: 2025-11-24 10:03:27.502 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:03:27 compute-1 nova_compute[230010]: 2025-11-24 10:03:27.510 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:03:27 compute-1 ovn_metadata_agent[142331]: 2025-11-24 10:03:27.515 234803 DEBUG oslo.privsep.daemon [-] privsep: reply[b4a9828e-3856-414a-80da-2a2293330eb1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 10:03:27 compute-1 ovn_metadata_agent[142331]: 2025-11-24 10:03:27.535 234803 DEBUG oslo.privsep.daemon [-] privsep: reply[87174bf5-59e5-4fe3-bf6b-d6df3fe511c2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 10:03:27 compute-1 ovn_metadata_agent[142331]: 2025-11-24 10:03:27.537 234803 DEBUG oslo.privsep.daemon [-] privsep: reply[e7ae3db0-d7db-4fa7-b75a-0cb0a35b17a2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 10:03:27 compute-1 ovn_metadata_agent[142331]: 2025-11-24 10:03:27.558 234803 DEBUG oslo.privsep.daemon [-] privsep: reply[c574d512-a645-4194-9b43-65751a3f7371]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 452562, 'reachable_time': 44442, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 243139, 'error': None, 'target': 'ovnmeta-d9ce2622-5822-4ecf-9fb9-f5f15c8ea094', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 10:03:27 compute-1 ovn_metadata_agent[142331]: 2025-11-24 10:03:27.563 142476 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-d9ce2622-5822-4ecf-9fb9-f5f15c8ea094 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 24 10:03:27 compute-1 systemd[1]: run-netns-ovnmeta\x2dd9ce2622\x2d5822\x2d4ecf\x2d9fb9\x2df5f15c8ea094.mount: Deactivated successfully.
Nov 24 10:03:27 compute-1 ovn_metadata_agent[142331]: 2025-11-24 10:03:27.563 142476 DEBUG oslo.privsep.daemon [-] privsep: reply[94fa03aa-a089-4ae5-bfb5-62ad6dcf86e1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 10:03:27 compute-1 nova_compute[230010]: 2025-11-24 10:03:27.878 230014 DEBUG nova.compute.manager [req-26f1cc89-06e7-43ae-acdb-fc9c86da9468 req-254622b8-e566-4503-8895-cd116eac1653 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 16f34aac-788f-4079-9636-0db2c8de6422] Received event network-changed-99ae7646-7560-4043-bead-b1665083257c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 24 10:03:27 compute-1 nova_compute[230010]: 2025-11-24 10:03:27.879 230014 DEBUG nova.compute.manager [req-26f1cc89-06e7-43ae-acdb-fc9c86da9468 req-254622b8-e566-4503-8895-cd116eac1653 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 16f34aac-788f-4079-9636-0db2c8de6422] Refreshing instance network info cache due to event network-changed-99ae7646-7560-4043-bead-b1665083257c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 24 10:03:27 compute-1 nova_compute[230010]: 2025-11-24 10:03:27.879 230014 DEBUG oslo_concurrency.lockutils [req-26f1cc89-06e7-43ae-acdb-fc9c86da9468 req-254622b8-e566-4503-8895-cd116eac1653 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Acquiring lock "refresh_cache-16f34aac-788f-4079-9636-0db2c8de6422" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 24 10:03:27 compute-1 nova_compute[230010]: 2025-11-24 10:03:27.880 230014 DEBUG oslo_concurrency.lockutils [req-26f1cc89-06e7-43ae-acdb-fc9c86da9468 req-254622b8-e566-4503-8895-cd116eac1653 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Acquired lock "refresh_cache-16f34aac-788f-4079-9636-0db2c8de6422" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 24 10:03:27 compute-1 nova_compute[230010]: 2025-11-24 10:03:27.880 230014 DEBUG nova.network.neutron [req-26f1cc89-06e7-43ae-acdb-fc9c86da9468 req-254622b8-e566-4503-8895-cd116eac1653 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 16f34aac-788f-4079-9636-0db2c8de6422] Refreshing network info cache for port 99ae7646-7560-4043-bead-b1665083257c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 24 10:03:27 compute-1 ovn_metadata_agent[142331]: 2025-11-24 10:03:27.902 142336 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=803b139a-7fca-4549-8597-645cf677225d, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '12'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 24 10:03:27 compute-1 nova_compute[230010]: 2025-11-24 10:03:27.914 230014 INFO nova.virt.libvirt.driver [None req-0d78fac1-1bbd-42a4-ac82-95367a3d470a 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 16f34aac-788f-4079-9636-0db2c8de6422] Deleting instance files /var/lib/nova/instances/16f34aac-788f-4079-9636-0db2c8de6422_del
Nov 24 10:03:27 compute-1 nova_compute[230010]: 2025-11-24 10:03:27.915 230014 INFO nova.virt.libvirt.driver [None req-0d78fac1-1bbd-42a4-ac82-95367a3d470a 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 16f34aac-788f-4079-9636-0db2c8de6422] Deletion of /var/lib/nova/instances/16f34aac-788f-4079-9636-0db2c8de6422_del complete
Nov 24 10:03:27 compute-1 nova_compute[230010]: 2025-11-24 10:03:27.966 230014 INFO nova.compute.manager [None req-0d78fac1-1bbd-42a4-ac82-95367a3d470a 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 16f34aac-788f-4079-9636-0db2c8de6422] Took 0.90 seconds to destroy the instance on the hypervisor.
Nov 24 10:03:27 compute-1 nova_compute[230010]: 2025-11-24 10:03:27.967 230014 DEBUG oslo.service.loopingcall [None req-0d78fac1-1bbd-42a4-ac82-95367a3d470a 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 24 10:03:27 compute-1 nova_compute[230010]: 2025-11-24 10:03:27.967 230014 DEBUG nova.compute.manager [-] [instance: 16f34aac-788f-4079-9636-0db2c8de6422] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 24 10:03:27 compute-1 nova_compute[230010]: 2025-11-24 10:03:27.968 230014 DEBUG nova.network.neutron [-] [instance: 16f34aac-788f-4079-9636-0db2c8de6422] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 24 10:03:28 compute-1 nova_compute[230010]: 2025-11-24 10:03:28.055 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 10:03:28 compute-1 nova_compute[230010]: 2025-11-24 10:03:28.076 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 10:03:28 compute-1 nova_compute[230010]: 2025-11-24 10:03:28.076 230014 DEBUG nova.compute.manager [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 24 10:03:28 compute-1 nova_compute[230010]: 2025-11-24 10:03:28.077 230014 DEBUG nova.compute.manager [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 24 10:03:28 compute-1 nova_compute[230010]: 2025-11-24 10:03:28.089 230014 DEBUG nova.compute.manager [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] [instance: 16f34aac-788f-4079-9636-0db2c8de6422] Skipping network cache update for instance because it is being deleted. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9875
Nov 24 10:03:28 compute-1 nova_compute[230010]: 2025-11-24 10:03:28.090 230014 DEBUG nova.compute.manager [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 24 10:03:28 compute-1 nova_compute[230010]: 2025-11-24 10:03:28.090 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 10:03:28 compute-1 nova_compute[230010]: 2025-11-24 10:03:28.090 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 10:03:28 compute-1 ceph-mon[80009]: pgmap v1101: 353 pgs: 353 active+clean; 200 MiB data, 375 MiB used, 60 GiB / 60 GiB avail; 398 KiB/s rd, 2.1 MiB/s wr, 68 op/s
Nov 24 10:03:28 compute-1 ceph-mon[80009]: from='client.? 192.168.122.100:0/1192466239' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 10:03:28 compute-1 sudo[243142]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 10:03:28 compute-1 sudo[243142]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 10:03:28 compute-1 sudo[243142]: pam_unix(sudo:session): session closed for user root
Nov 24 10:03:28 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:03:28 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 10:03:28 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:03:28.741 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 10:03:28 compute-1 nova_compute[230010]: 2025-11-24 10:03:28.865 230014 DEBUG nova.network.neutron [-] [instance: 16f34aac-788f-4079-9636-0db2c8de6422] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 24 10:03:28 compute-1 nova_compute[230010]: 2025-11-24 10:03:28.881 230014 INFO nova.compute.manager [-] [instance: 16f34aac-788f-4079-9636-0db2c8de6422] Took 0.91 seconds to deallocate network for instance.
Nov 24 10:03:28 compute-1 nova_compute[230010]: 2025-11-24 10:03:28.925 230014 DEBUG oslo_concurrency.lockutils [None req-0d78fac1-1bbd-42a4-ac82-95367a3d470a 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 10:03:28 compute-1 nova_compute[230010]: 2025-11-24 10:03:28.926 230014 DEBUG oslo_concurrency.lockutils [None req-0d78fac1-1bbd-42a4-ac82-95367a3d470a 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 10:03:28 compute-1 nova_compute[230010]: 2025-11-24 10:03:28.981 230014 DEBUG oslo_concurrency.processutils [None req-0d78fac1-1bbd-42a4-ac82-95367a3d470a 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 10:03:29 compute-1 ceph-mon[80009]: from='client.? 192.168.122.100:0/4215531718' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 10:03:29 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:03:29 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:03:29 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:03:29.326 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:03:29 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 24 10:03:29 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/270683702' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 10:03:29 compute-1 nova_compute[230010]: 2025-11-24 10:03:29.462 230014 DEBUG oslo_concurrency.processutils [None req-0d78fac1-1bbd-42a4-ac82-95367a3d470a 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.481s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 10:03:29 compute-1 nova_compute[230010]: 2025-11-24 10:03:29.470 230014 DEBUG nova.compute.provider_tree [None req-0d78fac1-1bbd-42a4-ac82-95367a3d470a 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Inventory has not changed in ProviderTree for provider: 1b7b0f22-dba8-42a8-9de3-763c9152946e update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 24 10:03:29 compute-1 nova_compute[230010]: 2025-11-24 10:03:29.483 230014 DEBUG nova.scheduler.client.report [None req-0d78fac1-1bbd-42a4-ac82-95367a3d470a 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Inventory has not changed for provider 1b7b0f22-dba8-42a8-9de3-763c9152946e based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 24 10:03:29 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:03:29 compute-1 nova_compute[230010]: 2025-11-24 10:03:29.508 230014 DEBUG oslo_concurrency.lockutils [None req-0d78fac1-1bbd-42a4-ac82-95367a3d470a 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.582s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 10:03:29 compute-1 nova_compute[230010]: 2025-11-24 10:03:29.535 230014 INFO nova.scheduler.client.report [None req-0d78fac1-1bbd-42a4-ac82-95367a3d470a 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Deleted allocations for instance 16f34aac-788f-4079-9636-0db2c8de6422
Nov 24 10:03:29 compute-1 nova_compute[230010]: 2025-11-24 10:03:29.557 230014 DEBUG nova.network.neutron [req-26f1cc89-06e7-43ae-acdb-fc9c86da9468 req-254622b8-e566-4503-8895-cd116eac1653 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 16f34aac-788f-4079-9636-0db2c8de6422] Updated VIF entry in instance network info cache for port 99ae7646-7560-4043-bead-b1665083257c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 24 10:03:29 compute-1 nova_compute[230010]: 2025-11-24 10:03:29.558 230014 DEBUG nova.network.neutron [req-26f1cc89-06e7-43ae-acdb-fc9c86da9468 req-254622b8-e566-4503-8895-cd116eac1653 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 16f34aac-788f-4079-9636-0db2c8de6422] Updating instance_info_cache with network_info: [{"id": "99ae7646-7560-4043-bead-b1665083257c", "address": "fa:16:3e:30:f8:b9", "network": {"id": "d9ce2622-5822-4ecf-9fb9-f5f15c8ea094", "bridge": "br-int", "label": "tempest-network-smoke--73093411", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94d069fc040647d5a6e54894eec915fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap99ae7646-75", "ovs_interfaceid": "99ae7646-7560-4043-bead-b1665083257c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 24 10:03:29 compute-1 nova_compute[230010]: 2025-11-24 10:03:29.594 230014 DEBUG oslo_concurrency.lockutils [req-26f1cc89-06e7-43ae-acdb-fc9c86da9468 req-254622b8-e566-4503-8895-cd116eac1653 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Releasing lock "refresh_cache-16f34aac-788f-4079-9636-0db2c8de6422" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 24 10:03:29 compute-1 nova_compute[230010]: 2025-11-24 10:03:29.595 230014 DEBUG nova.compute.manager [req-26f1cc89-06e7-43ae-acdb-fc9c86da9468 req-254622b8-e566-4503-8895-cd116eac1653 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 16f34aac-788f-4079-9636-0db2c8de6422] Received event network-vif-unplugged-99ae7646-7560-4043-bead-b1665083257c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 24 10:03:29 compute-1 nova_compute[230010]: 2025-11-24 10:03:29.596 230014 DEBUG oslo_concurrency.lockutils [req-26f1cc89-06e7-43ae-acdb-fc9c86da9468 req-254622b8-e566-4503-8895-cd116eac1653 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Acquiring lock "16f34aac-788f-4079-9636-0db2c8de6422-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 10:03:29 compute-1 nova_compute[230010]: 2025-11-24 10:03:29.596 230014 DEBUG oslo_concurrency.lockutils [req-26f1cc89-06e7-43ae-acdb-fc9c86da9468 req-254622b8-e566-4503-8895-cd116eac1653 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Lock "16f34aac-788f-4079-9636-0db2c8de6422-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 10:03:29 compute-1 nova_compute[230010]: 2025-11-24 10:03:29.597 230014 DEBUG oslo_concurrency.lockutils [req-26f1cc89-06e7-43ae-acdb-fc9c86da9468 req-254622b8-e566-4503-8895-cd116eac1653 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Lock "16f34aac-788f-4079-9636-0db2c8de6422-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 10:03:29 compute-1 nova_compute[230010]: 2025-11-24 10:03:29.597 230014 DEBUG nova.compute.manager [req-26f1cc89-06e7-43ae-acdb-fc9c86da9468 req-254622b8-e566-4503-8895-cd116eac1653 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 16f34aac-788f-4079-9636-0db2c8de6422] No waiting events found dispatching network-vif-unplugged-99ae7646-7560-4043-bead-b1665083257c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 24 10:03:29 compute-1 nova_compute[230010]: 2025-11-24 10:03:29.598 230014 DEBUG nova.compute.manager [req-26f1cc89-06e7-43ae-acdb-fc9c86da9468 req-254622b8-e566-4503-8895-cd116eac1653 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 16f34aac-788f-4079-9636-0db2c8de6422] Received event network-vif-unplugged-99ae7646-7560-4043-bead-b1665083257c for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 24 10:03:29 compute-1 nova_compute[230010]: 2025-11-24 10:03:29.598 230014 DEBUG nova.compute.manager [req-26f1cc89-06e7-43ae-acdb-fc9c86da9468 req-254622b8-e566-4503-8895-cd116eac1653 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 16f34aac-788f-4079-9636-0db2c8de6422] Received event network-vif-plugged-99ae7646-7560-4043-bead-b1665083257c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 24 10:03:29 compute-1 nova_compute[230010]: 2025-11-24 10:03:29.599 230014 DEBUG oslo_concurrency.lockutils [req-26f1cc89-06e7-43ae-acdb-fc9c86da9468 req-254622b8-e566-4503-8895-cd116eac1653 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Acquiring lock "16f34aac-788f-4079-9636-0db2c8de6422-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 10:03:29 compute-1 nova_compute[230010]: 2025-11-24 10:03:29.599 230014 DEBUG oslo_concurrency.lockutils [req-26f1cc89-06e7-43ae-acdb-fc9c86da9468 req-254622b8-e566-4503-8895-cd116eac1653 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Lock "16f34aac-788f-4079-9636-0db2c8de6422-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 10:03:29 compute-1 nova_compute[230010]: 2025-11-24 10:03:29.599 230014 DEBUG oslo_concurrency.lockutils [req-26f1cc89-06e7-43ae-acdb-fc9c86da9468 req-254622b8-e566-4503-8895-cd116eac1653 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Lock "16f34aac-788f-4079-9636-0db2c8de6422-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 10:03:29 compute-1 nova_compute[230010]: 2025-11-24 10:03:29.600 230014 DEBUG nova.compute.manager [req-26f1cc89-06e7-43ae-acdb-fc9c86da9468 req-254622b8-e566-4503-8895-cd116eac1653 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 16f34aac-788f-4079-9636-0db2c8de6422] No waiting events found dispatching network-vif-plugged-99ae7646-7560-4043-bead-b1665083257c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 24 10:03:29 compute-1 nova_compute[230010]: 2025-11-24 10:03:29.600 230014 WARNING nova.compute.manager [req-26f1cc89-06e7-43ae-acdb-fc9c86da9468 req-254622b8-e566-4503-8895-cd116eac1653 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 16f34aac-788f-4079-9636-0db2c8de6422] Received unexpected event network-vif-plugged-99ae7646-7560-4043-bead-b1665083257c for instance with vm_state active and task_state deleting.
Nov 24 10:03:29 compute-1 nova_compute[230010]: 2025-11-24 10:03:29.623 230014 DEBUG oslo_concurrency.lockutils [None req-0d78fac1-1bbd-42a4-ac82-95367a3d470a 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lock "16f34aac-788f-4079-9636-0db2c8de6422" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.555s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 10:03:29 compute-1 nova_compute[230010]: 2025-11-24 10:03:29.966 230014 DEBUG nova.compute.manager [req-5e0ce72e-e6b6-4e2d-8dba-79de39ba2163 req-9b7a96dc-0c72-4e0c-a6f8-542b6513d36f 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 16f34aac-788f-4079-9636-0db2c8de6422] Received event network-vif-deleted-99ae7646-7560-4043-bead-b1665083257c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 24 10:03:29 compute-1 nova_compute[230010]: 2025-11-24 10:03:29.967 230014 INFO nova.compute.manager [req-5e0ce72e-e6b6-4e2d-8dba-79de39ba2163 req-9b7a96dc-0c72-4e0c-a6f8-542b6513d36f 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 16f34aac-788f-4079-9636-0db2c8de6422] Neutron deleted interface 99ae7646-7560-4043-bead-b1665083257c; detaching it from the instance and deleting it from the info cache
Nov 24 10:03:29 compute-1 nova_compute[230010]: 2025-11-24 10:03:29.967 230014 DEBUG nova.network.neutron [req-5e0ce72e-e6b6-4e2d-8dba-79de39ba2163 req-9b7a96dc-0c72-4e0c-a6f8-542b6513d36f 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 16f34aac-788f-4079-9636-0db2c8de6422] Instance is deleted, no further info cache update update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:106
Nov 24 10:03:29 compute-1 nova_compute[230010]: 2025-11-24 10:03:29.969 230014 DEBUG nova.compute.manager [req-5e0ce72e-e6b6-4e2d-8dba-79de39ba2163 req-9b7a96dc-0c72-4e0c-a6f8-542b6513d36f 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 16f34aac-788f-4079-9636-0db2c8de6422] Detach interface failed, port_id=99ae7646-7560-4043-bead-b1665083257c, reason: Instance 16f34aac-788f-4079-9636-0db2c8de6422 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Nov 24 10:03:30 compute-1 ceph-mon[80009]: pgmap v1102: 353 pgs: 353 active+clean; 200 MiB data, 375 MiB used, 60 GiB / 60 GiB avail; 161 KiB/s rd, 109 KiB/s wr, 24 op/s
Nov 24 10:03:30 compute-1 ceph-mon[80009]: from='client.? 192.168.122.101:0/270683702' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 10:03:30 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 10:03:30 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 10:03:30 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:03:30 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:03:30 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:03:30.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:03:31 compute-1 nova_compute[230010]: 2025-11-24 10:03:31.204 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:03:31 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 10:03:31 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:03:31 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:03:31 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:03:31.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:03:32 compute-1 ceph-mon[80009]: pgmap v1103: 353 pgs: 353 active+clean; 200 MiB data, 375 MiB used, 60 GiB / 60 GiB avail; 161 KiB/s rd, 109 KiB/s wr, 24 op/s
Nov 24 10:03:32 compute-1 nova_compute[230010]: 2025-11-24 10:03:32.355 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:03:32 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:03:32 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 10:03:32 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:03:32.748 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 10:03:33 compute-1 ceph-mon[80009]: from='client.? 192.168.122.100:0/946716586' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 10:03:33 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:03:33 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:03:33 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:03:33.333 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:03:34 compute-1 ceph-mon[80009]: pgmap v1104: 353 pgs: 353 active+clean; 121 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 180 KiB/s rd, 114 KiB/s wr, 53 op/s
Nov 24 10:03:34 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:03:34 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:03:34 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:03:34 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:03:34.752 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:03:35 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:03:35 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:03:35 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:03:35.335 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:03:36 compute-1 nova_compute[230010]: 2025-11-24 10:03:36.207 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:03:36 compute-1 ceph-mon[80009]: pgmap v1105: 353 pgs: 353 active+clean; 121 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 27 KiB/s rd, 18 KiB/s wr, 30 op/s
Nov 24 10:03:36 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:03:36 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:03:36 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:03:36.756 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:03:37 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:03:37 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:03:37 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:03:37.338 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:03:37 compute-1 nova_compute[230010]: 2025-11-24 10:03:37.358 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:03:37 compute-1 nova_compute[230010]: 2025-11-24 10:03:37.914 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:03:38 compute-1 nova_compute[230010]: 2025-11-24 10:03:38.030 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:03:38 compute-1 ceph-mon[80009]: pgmap v1106: 353 pgs: 353 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 46 KiB/s rd, 19 KiB/s wr, 58 op/s
Nov 24 10:03:38 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:03:38 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:03:38 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:03:38.759 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:03:39 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:03:39 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:03:39 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:03:39.341 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:03:39 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:03:40 compute-1 ceph-mon[80009]: pgmap v1107: 353 pgs: 353 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 39 KiB/s rd, 6.7 KiB/s wr, 57 op/s
Nov 24 10:03:40 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:03:40 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:03:40 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:03:40.762 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:03:41 compute-1 nova_compute[230010]: 2025-11-24 10:03:41.209 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:03:41 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:03:41 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:03:41 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:03:41.344 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:03:42 compute-1 nova_compute[230010]: 2025-11-24 10:03:42.338 230014 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763978607.3363385, 16f34aac-788f-4079-9636-0db2c8de6422 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 24 10:03:42 compute-1 nova_compute[230010]: 2025-11-24 10:03:42.338 230014 INFO nova.compute.manager [-] [instance: 16f34aac-788f-4079-9636-0db2c8de6422] VM Stopped (Lifecycle Event)
Nov 24 10:03:42 compute-1 nova_compute[230010]: 2025-11-24 10:03:42.360 230014 DEBUG nova.compute.manager [None req-84138f20-4e7d-4a51-9173-78eb5dc57c28 - - - - - -] [instance: 16f34aac-788f-4079-9636-0db2c8de6422] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 24 10:03:42 compute-1 nova_compute[230010]: 2025-11-24 10:03:42.361 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:03:42 compute-1 ceph-mon[80009]: pgmap v1108: 353 pgs: 353 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 39 KiB/s rd, 6.7 KiB/s wr, 57 op/s
Nov 24 10:03:42 compute-1 ceph-mon[80009]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #61. Immutable memtables: 0.
Nov 24 10:03:42 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-10:03:42.405230) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 24 10:03:42 compute-1 ceph-mon[80009]: rocksdb: [db/flush_job.cc:856] [default] [JOB 35] Flushing memtable with next log file: 61
Nov 24 10:03:42 compute-1 ceph-mon[80009]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763978622405297, "job": 35, "event": "flush_started", "num_memtables": 1, "num_entries": 2367, "num_deletes": 251, "total_data_size": 6277458, "memory_usage": 6371104, "flush_reason": "Manual Compaction"}
Nov 24 10:03:42 compute-1 ceph-mon[80009]: rocksdb: [db/flush_job.cc:885] [default] [JOB 35] Level-0 flush table #62: started
Nov 24 10:03:42 compute-1 ceph-mon[80009]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763978622424620, "cf_name": "default", "job": 35, "event": "table_file_creation", "file_number": 62, "file_size": 4047955, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 31290, "largest_seqno": 33652, "table_properties": {"data_size": 4038373, "index_size": 6012, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2501, "raw_key_size": 20157, "raw_average_key_size": 20, "raw_value_size": 4019152, "raw_average_value_size": 4088, "num_data_blocks": 258, "num_entries": 983, "num_filter_entries": 983, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763978418, "oldest_key_time": 1763978418, "file_creation_time": 1763978622, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "299c38d0-06ca-4074-b462-97cee3c14bc3", "db_session_id": "IKBI0BILOO7CZC90TSBP", "orig_file_number": 62, "seqno_to_time_mapping": "N/A"}}
Nov 24 10:03:42 compute-1 ceph-mon[80009]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 35] Flush lasted 19418 microseconds, and 7829 cpu microseconds.
Nov 24 10:03:42 compute-1 ceph-mon[80009]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 24 10:03:42 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-10:03:42.424660) [db/flush_job.cc:967] [default] [JOB 35] Level-0 flush table #62: 4047955 bytes OK
Nov 24 10:03:42 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-10:03:42.424682) [db/memtable_list.cc:519] [default] Level-0 commit table #62 started
Nov 24 10:03:42 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-10:03:42.429072) [db/memtable_list.cc:722] [default] Level-0 commit table #62: memtable #1 done
Nov 24 10:03:42 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-10:03:42.429094) EVENT_LOG_v1 {"time_micros": 1763978622429087, "job": 35, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 24 10:03:42 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-10:03:42.429116) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 24 10:03:42 compute-1 ceph-mon[80009]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 35] Try to delete WAL files size 6266902, prev total WAL file size 6266902, number of live WAL files 2.
Nov 24 10:03:42 compute-1 ceph-mon[80009]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000058.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 10:03:42 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-10:03:42.431125) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032353130' seq:72057594037927935, type:22 .. '7061786F730032373632' seq:0, type:0; will stop at (end)
Nov 24 10:03:42 compute-1 ceph-mon[80009]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 36] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 24 10:03:42 compute-1 ceph-mon[80009]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 35 Base level 0, inputs: [62(3953KB)], [60(11MB)]
Nov 24 10:03:42 compute-1 ceph-mon[80009]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763978622431162, "job": 36, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [62], "files_L6": [60], "score": -1, "input_data_size": 16211463, "oldest_snapshot_seqno": -1}
Nov 24 10:03:42 compute-1 sshd-session[243197]: Invalid user nsrecover from 164.92.213.168 port 37690
Nov 24 10:03:42 compute-1 ceph-mon[80009]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 36] Generated table #63: 6259 keys, 14109730 bytes, temperature: kUnknown
Nov 24 10:03:42 compute-1 ceph-mon[80009]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763978622534010, "cf_name": "default", "job": 36, "event": "table_file_creation", "file_number": 63, "file_size": 14109730, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 14068685, "index_size": 24295, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 15685, "raw_key_size": 160415, "raw_average_key_size": 25, "raw_value_size": 13956841, "raw_average_value_size": 2229, "num_data_blocks": 974, "num_entries": 6259, "num_filter_entries": 6259, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763976422, "oldest_key_time": 0, "file_creation_time": 1763978622, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "299c38d0-06ca-4074-b462-97cee3c14bc3", "db_session_id": "IKBI0BILOO7CZC90TSBP", "orig_file_number": 63, "seqno_to_time_mapping": "N/A"}}
Nov 24 10:03:42 compute-1 ceph-mon[80009]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 24 10:03:42 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-10:03:42.534907) [db/compaction/compaction_job.cc:1663] [default] [JOB 36] Compacted 1@0 + 1@6 files to L6 => 14109730 bytes
Nov 24 10:03:42 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-10:03:42.538908) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 157.4 rd, 137.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.9, 11.6 +0.0 blob) out(13.5 +0.0 blob), read-write-amplify(7.5) write-amplify(3.5) OK, records in: 6779, records dropped: 520 output_compression: NoCompression
Nov 24 10:03:42 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-10:03:42.538931) EVENT_LOG_v1 {"time_micros": 1763978622538921, "job": 36, "event": "compaction_finished", "compaction_time_micros": 102977, "compaction_time_cpu_micros": 29717, "output_level": 6, "num_output_files": 1, "total_output_size": 14109730, "num_input_records": 6779, "num_output_records": 6259, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 24 10:03:42 compute-1 ceph-mon[80009]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000062.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 10:03:42 compute-1 ceph-mon[80009]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763978622540250, "job": 36, "event": "table_file_deletion", "file_number": 62}
Nov 24 10:03:42 compute-1 ceph-mon[80009]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000060.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 10:03:42 compute-1 ceph-mon[80009]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763978622543851, "job": 36, "event": "table_file_deletion", "file_number": 60}
Nov 24 10:03:42 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-10:03:42.431040) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 10:03:42 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-10:03:42.543956) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 10:03:42 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-10:03:42.543964) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 10:03:42 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-10:03:42.543965) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 10:03:42 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-10:03:42.543967) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 10:03:42 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-10:03:42.543968) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 10:03:42 compute-1 sshd-session[243197]: Connection closed by invalid user nsrecover 164.92.213.168 port 37690 [preauth]
Nov 24 10:03:42 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:03:42 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:03:42 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:03:42.765 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:03:43 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:03:43 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 10:03:43 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:03:43.348 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 10:03:44 compute-1 podman[243200]: 2025-11-24 10:03:44.360641077 +0000 UTC m=+0.087875483 container health_status 16798e42088c21440960ac0ba4f86339f9edab5c078277a1dcf4fb01c220329c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=multipathd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251118)
Nov 24 10:03:44 compute-1 ceph-mon[80009]: pgmap v1109: 353 pgs: 353 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 39 KiB/s rd, 6.7 KiB/s wr, 57 op/s
Nov 24 10:03:44 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:03:44 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:03:44 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:03:44 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:03:44.768 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:03:45 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:03:45 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:03:45 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:03:45.351 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:03:45 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 10:03:45 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 10:03:46 compute-1 nova_compute[230010]: 2025-11-24 10:03:46.212 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:03:46 compute-1 ceph-mon[80009]: pgmap v1110: 353 pgs: 353 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Nov 24 10:03:46 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 10:03:46 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:03:46 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:03:46 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:03:46.771 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:03:47 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:03:47 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:03:47 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:03:47.355 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:03:47 compute-1 nova_compute[230010]: 2025-11-24 10:03:47.362 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:03:48 compute-1 ceph-mon[80009]: pgmap v1111: 353 pgs: 353 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Nov 24 10:03:48 compute-1 sudo[243222]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 10:03:48 compute-1 sudo[243222]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 10:03:48 compute-1 sudo[243222]: pam_unix(sudo:session): session closed for user root
Nov 24 10:03:48 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:03:48 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:03:48 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:03:48.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:03:49 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:03:49 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:03:49 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:03:49.358 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:03:49 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:03:50 compute-1 podman[243248]: 2025-11-24 10:03:50.371703155 +0000 UTC m=+0.110506947 container health_status c4a7fece2a8abbd773f184380ebf0443df5298e1c3e7a580495db5c46749acd2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.license=GPLv2)
Nov 24 10:03:50 compute-1 ceph-mon[80009]: pgmap v1112: 353 pgs: 353 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:03:50 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:03:50 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:03:50 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:03:50.777 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:03:51 compute-1 nova_compute[230010]: 2025-11-24 10:03:51.213 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:03:51 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:03:51 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:03:51 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:03:51.361 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:03:51 compute-1 nova_compute[230010]: 2025-11-24 10:03:51.556 230014 DEBUG oslo_concurrency.lockutils [None req-5d5b5cfb-a9b5-427c-91c6-3b86a939f388 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Acquiring lock "89909dc1-a7db-4cca-b837-5340532de97b" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 10:03:51 compute-1 nova_compute[230010]: 2025-11-24 10:03:51.556 230014 DEBUG oslo_concurrency.lockutils [None req-5d5b5cfb-a9b5-427c-91c6-3b86a939f388 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lock "89909dc1-a7db-4cca-b837-5340532de97b" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 10:03:51 compute-1 nova_compute[230010]: 2025-11-24 10:03:51.570 230014 DEBUG nova.compute.manager [None req-5d5b5cfb-a9b5-427c-91c6-3b86a939f388 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 89909dc1-a7db-4cca-b837-5340532de97b] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 24 10:03:51 compute-1 nova_compute[230010]: 2025-11-24 10:03:51.654 230014 DEBUG oslo_concurrency.lockutils [None req-5d5b5cfb-a9b5-427c-91c6-3b86a939f388 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 10:03:51 compute-1 nova_compute[230010]: 2025-11-24 10:03:51.655 230014 DEBUG oslo_concurrency.lockutils [None req-5d5b5cfb-a9b5-427c-91c6-3b86a939f388 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 10:03:51 compute-1 nova_compute[230010]: 2025-11-24 10:03:51.661 230014 DEBUG nova.virt.hardware [None req-5d5b5cfb-a9b5-427c-91c6-3b86a939f388 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 24 10:03:51 compute-1 nova_compute[230010]: 2025-11-24 10:03:51.662 230014 INFO nova.compute.claims [None req-5d5b5cfb-a9b5-427c-91c6-3b86a939f388 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 89909dc1-a7db-4cca-b837-5340532de97b] Claim successful on node compute-1.ctlplane.example.com
Nov 24 10:03:51 compute-1 nova_compute[230010]: 2025-11-24 10:03:51.778 230014 DEBUG oslo_concurrency.processutils [None req-5d5b5cfb-a9b5-427c-91c6-3b86a939f388 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 10:03:52 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 24 10:03:52 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3936690340' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 10:03:52 compute-1 nova_compute[230010]: 2025-11-24 10:03:52.185 230014 DEBUG oslo_concurrency.processutils [None req-5d5b5cfb-a9b5-427c-91c6-3b86a939f388 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.407s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 10:03:52 compute-1 nova_compute[230010]: 2025-11-24 10:03:52.191 230014 DEBUG nova.compute.provider_tree [None req-5d5b5cfb-a9b5-427c-91c6-3b86a939f388 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Inventory has not changed in ProviderTree for provider: 1b7b0f22-dba8-42a8-9de3-763c9152946e update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 24 10:03:52 compute-1 nova_compute[230010]: 2025-11-24 10:03:52.203 230014 DEBUG nova.scheduler.client.report [None req-5d5b5cfb-a9b5-427c-91c6-3b86a939f388 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Inventory has not changed for provider 1b7b0f22-dba8-42a8-9de3-763c9152946e based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 24 10:03:52 compute-1 nova_compute[230010]: 2025-11-24 10:03:52.222 230014 DEBUG oslo_concurrency.lockutils [None req-5d5b5cfb-a9b5-427c-91c6-3b86a939f388 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.567s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 10:03:52 compute-1 nova_compute[230010]: 2025-11-24 10:03:52.224 230014 DEBUG nova.compute.manager [None req-5d5b5cfb-a9b5-427c-91c6-3b86a939f388 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 89909dc1-a7db-4cca-b837-5340532de97b] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 24 10:03:52 compute-1 nova_compute[230010]: 2025-11-24 10:03:52.273 230014 DEBUG nova.compute.manager [None req-5d5b5cfb-a9b5-427c-91c6-3b86a939f388 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 89909dc1-a7db-4cca-b837-5340532de97b] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 24 10:03:52 compute-1 nova_compute[230010]: 2025-11-24 10:03:52.274 230014 DEBUG nova.network.neutron [None req-5d5b5cfb-a9b5-427c-91c6-3b86a939f388 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 89909dc1-a7db-4cca-b837-5340532de97b] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 24 10:03:52 compute-1 nova_compute[230010]: 2025-11-24 10:03:52.303 230014 INFO nova.virt.libvirt.driver [None req-5d5b5cfb-a9b5-427c-91c6-3b86a939f388 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 89909dc1-a7db-4cca-b837-5340532de97b] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 24 10:03:52 compute-1 nova_compute[230010]: 2025-11-24 10:03:52.321 230014 DEBUG nova.compute.manager [None req-5d5b5cfb-a9b5-427c-91c6-3b86a939f388 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 89909dc1-a7db-4cca-b837-5340532de97b] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 24 10:03:52 compute-1 nova_compute[230010]: 2025-11-24 10:03:52.364 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:03:52 compute-1 nova_compute[230010]: 2025-11-24 10:03:52.406 230014 DEBUG nova.compute.manager [None req-5d5b5cfb-a9b5-427c-91c6-3b86a939f388 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 89909dc1-a7db-4cca-b837-5340532de97b] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 24 10:03:52 compute-1 nova_compute[230010]: 2025-11-24 10:03:52.407 230014 DEBUG nova.virt.libvirt.driver [None req-5d5b5cfb-a9b5-427c-91c6-3b86a939f388 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 89909dc1-a7db-4cca-b837-5340532de97b] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 24 10:03:52 compute-1 nova_compute[230010]: 2025-11-24 10:03:52.408 230014 INFO nova.virt.libvirt.driver [None req-5d5b5cfb-a9b5-427c-91c6-3b86a939f388 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 89909dc1-a7db-4cca-b837-5340532de97b] Creating image(s)
Nov 24 10:03:52 compute-1 nova_compute[230010]: 2025-11-24 10:03:52.431 230014 DEBUG nova.storage.rbd_utils [None req-5d5b5cfb-a9b5-427c-91c6-3b86a939f388 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] rbd image 89909dc1-a7db-4cca-b837-5340532de97b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 24 10:03:52 compute-1 nova_compute[230010]: 2025-11-24 10:03:52.462 230014 DEBUG nova.storage.rbd_utils [None req-5d5b5cfb-a9b5-427c-91c6-3b86a939f388 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] rbd image 89909dc1-a7db-4cca-b837-5340532de97b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 24 10:03:52 compute-1 nova_compute[230010]: 2025-11-24 10:03:52.491 230014 DEBUG nova.storage.rbd_utils [None req-5d5b5cfb-a9b5-427c-91c6-3b86a939f388 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] rbd image 89909dc1-a7db-4cca-b837-5340532de97b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 24 10:03:52 compute-1 nova_compute[230010]: 2025-11-24 10:03:52.494 230014 DEBUG oslo_concurrency.processutils [None req-5d5b5cfb-a9b5-427c-91c6-3b86a939f388 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/2ed5c667523487159c4c4503c82babbc95dbae40 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 10:03:52 compute-1 nova_compute[230010]: 2025-11-24 10:03:52.534 230014 DEBUG nova.policy [None req-5d5b5cfb-a9b5-427c-91c6-3b86a939f388 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '43f79ff3105e4372a3c095e8057d4f1f', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '94d069fc040647d5a6e54894eec915fe', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 24 10:03:52 compute-1 nova_compute[230010]: 2025-11-24 10:03:52.555 230014 DEBUG oslo_concurrency.processutils [None req-5d5b5cfb-a9b5-427c-91c6-3b86a939f388 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/2ed5c667523487159c4c4503c82babbc95dbae40 --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 10:03:52 compute-1 nova_compute[230010]: 2025-11-24 10:03:52.556 230014 DEBUG oslo_concurrency.lockutils [None req-5d5b5cfb-a9b5-427c-91c6-3b86a939f388 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Acquiring lock "2ed5c667523487159c4c4503c82babbc95dbae40" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 10:03:52 compute-1 nova_compute[230010]: 2025-11-24 10:03:52.556 230014 DEBUG oslo_concurrency.lockutils [None req-5d5b5cfb-a9b5-427c-91c6-3b86a939f388 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lock "2ed5c667523487159c4c4503c82babbc95dbae40" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 10:03:52 compute-1 nova_compute[230010]: 2025-11-24 10:03:52.557 230014 DEBUG oslo_concurrency.lockutils [None req-5d5b5cfb-a9b5-427c-91c6-3b86a939f388 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lock "2ed5c667523487159c4c4503c82babbc95dbae40" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 10:03:52 compute-1 nova_compute[230010]: 2025-11-24 10:03:52.625 230014 DEBUG nova.storage.rbd_utils [None req-5d5b5cfb-a9b5-427c-91c6-3b86a939f388 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] rbd image 89909dc1-a7db-4cca-b837-5340532de97b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 24 10:03:52 compute-1 nova_compute[230010]: 2025-11-24 10:03:52.629 230014 DEBUG oslo_concurrency.processutils [None req-5d5b5cfb-a9b5-427c-91c6-3b86a939f388 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/2ed5c667523487159c4c4503c82babbc95dbae40 89909dc1-a7db-4cca-b837-5340532de97b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 10:03:52 compute-1 ceph-mon[80009]: pgmap v1113: 353 pgs: 353 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:03:52 compute-1 ceph-mon[80009]: from='client.? 192.168.122.101:0/3936690340' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 10:03:52 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:03:52 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:03:52 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:03:52.781 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:03:52 compute-1 nova_compute[230010]: 2025-11-24 10:03:52.966 230014 DEBUG oslo_concurrency.processutils [None req-5d5b5cfb-a9b5-427c-91c6-3b86a939f388 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/2ed5c667523487159c4c4503c82babbc95dbae40 89909dc1-a7db-4cca-b837-5340532de97b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.337s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 10:03:53 compute-1 nova_compute[230010]: 2025-11-24 10:03:53.021 230014 DEBUG nova.storage.rbd_utils [None req-5d5b5cfb-a9b5-427c-91c6-3b86a939f388 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] resizing rbd image 89909dc1-a7db-4cca-b837-5340532de97b_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 24 10:03:53 compute-1 nova_compute[230010]: 2025-11-24 10:03:53.133 230014 DEBUG nova.objects.instance [None req-5d5b5cfb-a9b5-427c-91c6-3b86a939f388 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lazy-loading 'migration_context' on Instance uuid 89909dc1-a7db-4cca-b837-5340532de97b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 24 10:03:53 compute-1 nova_compute[230010]: 2025-11-24 10:03:53.146 230014 DEBUG nova.virt.libvirt.driver [None req-5d5b5cfb-a9b5-427c-91c6-3b86a939f388 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 89909dc1-a7db-4cca-b837-5340532de97b] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 24 10:03:53 compute-1 nova_compute[230010]: 2025-11-24 10:03:53.147 230014 DEBUG nova.virt.libvirt.driver [None req-5d5b5cfb-a9b5-427c-91c6-3b86a939f388 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 89909dc1-a7db-4cca-b837-5340532de97b] Ensure instance console log exists: /var/lib/nova/instances/89909dc1-a7db-4cca-b837-5340532de97b/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 24 10:03:53 compute-1 nova_compute[230010]: 2025-11-24 10:03:53.147 230014 DEBUG oslo_concurrency.lockutils [None req-5d5b5cfb-a9b5-427c-91c6-3b86a939f388 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 10:03:53 compute-1 nova_compute[230010]: 2025-11-24 10:03:53.148 230014 DEBUG oslo_concurrency.lockutils [None req-5d5b5cfb-a9b5-427c-91c6-3b86a939f388 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 10:03:53 compute-1 nova_compute[230010]: 2025-11-24 10:03:53.148 230014 DEBUG oslo_concurrency.lockutils [None req-5d5b5cfb-a9b5-427c-91c6-3b86a939f388 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 10:03:53 compute-1 nova_compute[230010]: 2025-11-24 10:03:53.348 230014 DEBUG nova.network.neutron [None req-5d5b5cfb-a9b5-427c-91c6-3b86a939f388 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 89909dc1-a7db-4cca-b837-5340532de97b] Successfully created port: 891e7944-832b-408f-b645-6f51de733021 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 24 10:03:53 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:03:53 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:03:53 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:03:53.364 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:03:53 compute-1 ceph-mon[80009]: pgmap v1114: 353 pgs: 353 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:03:54 compute-1 nova_compute[230010]: 2025-11-24 10:03:54.329 230014 DEBUG nova.network.neutron [None req-5d5b5cfb-a9b5-427c-91c6-3b86a939f388 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 89909dc1-a7db-4cca-b837-5340532de97b] Successfully updated port: 891e7944-832b-408f-b645-6f51de733021 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 24 10:03:54 compute-1 nova_compute[230010]: 2025-11-24 10:03:54.342 230014 DEBUG oslo_concurrency.lockutils [None req-5d5b5cfb-a9b5-427c-91c6-3b86a939f388 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Acquiring lock "refresh_cache-89909dc1-a7db-4cca-b837-5340532de97b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 24 10:03:54 compute-1 nova_compute[230010]: 2025-11-24 10:03:54.342 230014 DEBUG oslo_concurrency.lockutils [None req-5d5b5cfb-a9b5-427c-91c6-3b86a939f388 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Acquired lock "refresh_cache-89909dc1-a7db-4cca-b837-5340532de97b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 24 10:03:54 compute-1 nova_compute[230010]: 2025-11-24 10:03:54.342 230014 DEBUG nova.network.neutron [None req-5d5b5cfb-a9b5-427c-91c6-3b86a939f388 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 89909dc1-a7db-4cca-b837-5340532de97b] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 24 10:03:54 compute-1 nova_compute[230010]: 2025-11-24 10:03:54.425 230014 DEBUG nova.compute.manager [req-e7981b58-a0dc-4ae8-aba3-16540414f4a0 req-1b964b4f-105a-4507-a7d8-17a2858bca47 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 89909dc1-a7db-4cca-b837-5340532de97b] Received event network-changed-891e7944-832b-408f-b645-6f51de733021 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 24 10:03:54 compute-1 nova_compute[230010]: 2025-11-24 10:03:54.426 230014 DEBUG nova.compute.manager [req-e7981b58-a0dc-4ae8-aba3-16540414f4a0 req-1b964b4f-105a-4507-a7d8-17a2858bca47 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 89909dc1-a7db-4cca-b837-5340532de97b] Refreshing instance network info cache due to event network-changed-891e7944-832b-408f-b645-6f51de733021. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 24 10:03:54 compute-1 nova_compute[230010]: 2025-11-24 10:03:54.426 230014 DEBUG oslo_concurrency.lockutils [req-e7981b58-a0dc-4ae8-aba3-16540414f4a0 req-1b964b4f-105a-4507-a7d8-17a2858bca47 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Acquiring lock "refresh_cache-89909dc1-a7db-4cca-b837-5340532de97b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 24 10:03:54 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:03:54 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:03:54 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:03:54 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:03:54.783 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:03:55 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:03:55 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:03:55 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:03:55.367 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:03:55 compute-1 nova_compute[230010]: 2025-11-24 10:03:55.509 230014 DEBUG nova.network.neutron [None req-5d5b5cfb-a9b5-427c-91c6-3b86a939f388 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 89909dc1-a7db-4cca-b837-5340532de97b] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 24 10:03:56 compute-1 ceph-mon[80009]: pgmap v1115: 353 pgs: 353 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:03:56 compute-1 nova_compute[230010]: 2025-11-24 10:03:56.214 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:03:56 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:03:56 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 10:03:56 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:03:56.786 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 10:03:57 compute-1 podman[243465]: 2025-11-24 10:03:57.305670102 +0000 UTC m=+0.049769060 container health_status 6794d9d2f16f865643977633ea2bfec0506af6c4f15262c278b71a4c0754ea0b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 24 10:03:57 compute-1 nova_compute[230010]: 2025-11-24 10:03:57.367 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:03:57 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:03:57 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:03:57 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:03:57.370 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:03:58 compute-1 ceph-mon[80009]: pgmap v1116: 353 pgs: 353 active+clean; 88 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Nov 24 10:03:58 compute-1 nova_compute[230010]: 2025-11-24 10:03:58.696 230014 DEBUG nova.network.neutron [None req-5d5b5cfb-a9b5-427c-91c6-3b86a939f388 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 89909dc1-a7db-4cca-b837-5340532de97b] Updating instance_info_cache with network_info: [{"id": "891e7944-832b-408f-b645-6f51de733021", "address": "fa:16:3e:a8:16:2d", "network": {"id": "22748050-40a9-4373-8c95-5da36c909edc", "bridge": "br-int", "label": "tempest-network-smoke--696376811", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94d069fc040647d5a6e54894eec915fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap891e7944-83", "ovs_interfaceid": "891e7944-832b-408f-b645-6f51de733021", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 24 10:03:58 compute-1 nova_compute[230010]: 2025-11-24 10:03:58.711 230014 DEBUG oslo_concurrency.lockutils [None req-5d5b5cfb-a9b5-427c-91c6-3b86a939f388 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Releasing lock "refresh_cache-89909dc1-a7db-4cca-b837-5340532de97b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 24 10:03:58 compute-1 nova_compute[230010]: 2025-11-24 10:03:58.711 230014 DEBUG nova.compute.manager [None req-5d5b5cfb-a9b5-427c-91c6-3b86a939f388 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 89909dc1-a7db-4cca-b837-5340532de97b] Instance network_info: |[{"id": "891e7944-832b-408f-b645-6f51de733021", "address": "fa:16:3e:a8:16:2d", "network": {"id": "22748050-40a9-4373-8c95-5da36c909edc", "bridge": "br-int", "label": "tempest-network-smoke--696376811", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94d069fc040647d5a6e54894eec915fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap891e7944-83", "ovs_interfaceid": "891e7944-832b-408f-b645-6f51de733021", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 24 10:03:58 compute-1 nova_compute[230010]: 2025-11-24 10:03:58.712 230014 DEBUG oslo_concurrency.lockutils [req-e7981b58-a0dc-4ae8-aba3-16540414f4a0 req-1b964b4f-105a-4507-a7d8-17a2858bca47 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Acquired lock "refresh_cache-89909dc1-a7db-4cca-b837-5340532de97b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 24 10:03:58 compute-1 nova_compute[230010]: 2025-11-24 10:03:58.712 230014 DEBUG nova.network.neutron [req-e7981b58-a0dc-4ae8-aba3-16540414f4a0 req-1b964b4f-105a-4507-a7d8-17a2858bca47 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 89909dc1-a7db-4cca-b837-5340532de97b] Refreshing network info cache for port 891e7944-832b-408f-b645-6f51de733021 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 24 10:03:58 compute-1 nova_compute[230010]: 2025-11-24 10:03:58.715 230014 DEBUG nova.virt.libvirt.driver [None req-5d5b5cfb-a9b5-427c-91c6-3b86a939f388 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 89909dc1-a7db-4cca-b837-5340532de97b] Start _get_guest_xml network_info=[{"id": "891e7944-832b-408f-b645-6f51de733021", "address": "fa:16:3e:a8:16:2d", "network": {"id": "22748050-40a9-4373-8c95-5da36c909edc", "bridge": "br-int", "label": "tempest-network-smoke--696376811", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94d069fc040647d5a6e54894eec915fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap891e7944-83", "ovs_interfaceid": "891e7944-832b-408f-b645-6f51de733021", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-24T09:52:37Z,direct_url=<?>,disk_format='qcow2',id=6ef14bdf-4f04-4400-8040-4409d9d5271e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='cf636babb68a4ebe9bf137d3fe0e4c0c',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-24T09:52:41Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'encryption_options': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'boot_index': 0, 'size': 0, 'disk_bus': 'virtio', 'encrypted': False, 'encryption_secret_uuid': None, 'image_id': '6ef14bdf-4f04-4400-8040-4409d9d5271e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 24 10:03:58 compute-1 nova_compute[230010]: 2025-11-24 10:03:58.719 230014 WARNING nova.virt.libvirt.driver [None req-5d5b5cfb-a9b5-427c-91c6-3b86a939f388 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 24 10:03:58 compute-1 nova_compute[230010]: 2025-11-24 10:03:58.724 230014 DEBUG nova.virt.libvirt.host [None req-5d5b5cfb-a9b5-427c-91c6-3b86a939f388 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 24 10:03:58 compute-1 nova_compute[230010]: 2025-11-24 10:03:58.724 230014 DEBUG nova.virt.libvirt.host [None req-5d5b5cfb-a9b5-427c-91c6-3b86a939f388 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 24 10:03:58 compute-1 nova_compute[230010]: 2025-11-24 10:03:58.728 230014 DEBUG nova.virt.libvirt.host [None req-5d5b5cfb-a9b5-427c-91c6-3b86a939f388 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 24 10:03:58 compute-1 nova_compute[230010]: 2025-11-24 10:03:58.728 230014 DEBUG nova.virt.libvirt.host [None req-5d5b5cfb-a9b5-427c-91c6-3b86a939f388 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
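Note: the two probes above (host.py:1653 and host.py:1672) establish that the CPU controller is available via cgroups v2 rather than v1 on this host. A minimal sketch of the v2 side of that check, assuming only the standard unified-hierarchy layout under /sys/fs/cgroup; this is an illustration, not nova's actual code:

    from pathlib import Path

    def has_cgroupsv2_cpu_controller(root="/sys/fs/cgroup"):
        # cgroup v2 lists its enabled controllers in one space-separated
        # file at the root of the unified hierarchy.
        controllers = Path(root, "cgroup.controllers")
        if not controllers.exists():
            return False  # host is not booted with the unified hierarchy
        return "cpu" in controllers.read_text().split()

    print(has_cgroupsv2_cpu_controller())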
Nov 24 10:03:58 compute-1 nova_compute[230010]: 2025-11-24 10:03:58.729 230014 DEBUG nova.virt.libvirt.driver [None req-5d5b5cfb-a9b5-427c-91c6-3b86a939f388 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 24 10:03:58 compute-1 nova_compute[230010]: 2025-11-24 10:03:58.729 230014 DEBUG nova.virt.hardware [None req-5d5b5cfb-a9b5-427c-91c6-3b86a939f388 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-24T09:52:36Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='4a5d03ad-925b-45f1-89bd-f1325f9f3292',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-24T09:52:37Z,direct_url=<?>,disk_format='qcow2',id=6ef14bdf-4f04-4400-8040-4409d9d5271e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='cf636babb68a4ebe9bf137d3fe0e4c0c',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-24T09:52:41Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 24 10:03:58 compute-1 nova_compute[230010]: 2025-11-24 10:03:58.729 230014 DEBUG nova.virt.hardware [None req-5d5b5cfb-a9b5-427c-91c6-3b86a939f388 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 24 10:03:58 compute-1 nova_compute[230010]: 2025-11-24 10:03:58.730 230014 DEBUG nova.virt.hardware [None req-5d5b5cfb-a9b5-427c-91c6-3b86a939f388 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 24 10:03:58 compute-1 nova_compute[230010]: 2025-11-24 10:03:58.730 230014 DEBUG nova.virt.hardware [None req-5d5b5cfb-a9b5-427c-91c6-3b86a939f388 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 24 10:03:58 compute-1 nova_compute[230010]: 2025-11-24 10:03:58.730 230014 DEBUG nova.virt.hardware [None req-5d5b5cfb-a9b5-427c-91c6-3b86a939f388 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 24 10:03:58 compute-1 nova_compute[230010]: 2025-11-24 10:03:58.730 230014 DEBUG nova.virt.hardware [None req-5d5b5cfb-a9b5-427c-91c6-3b86a939f388 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 24 10:03:58 compute-1 nova_compute[230010]: 2025-11-24 10:03:58.731 230014 DEBUG nova.virt.hardware [None req-5d5b5cfb-a9b5-427c-91c6-3b86a939f388 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 24 10:03:58 compute-1 nova_compute[230010]: 2025-11-24 10:03:58.731 230014 DEBUG nova.virt.hardware [None req-5d5b5cfb-a9b5-427c-91c6-3b86a939f388 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 24 10:03:58 compute-1 nova_compute[230010]: 2025-11-24 10:03:58.731 230014 DEBUG nova.virt.hardware [None req-5d5b5cfb-a9b5-427c-91c6-3b86a939f388 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 24 10:03:58 compute-1 nova_compute[230010]: 2025-11-24 10:03:58.731 230014 DEBUG nova.virt.hardware [None req-5d5b5cfb-a9b5-427c-91c6-3b86a939f388 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 24 10:03:58 compute-1 nova_compute[230010]: 2025-11-24 10:03:58.732 230014 DEBUG nova.virt.hardware [None req-5d5b5cfb-a9b5-427c-91c6-3b86a939f388 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
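Note: the hardware.py lines above enumerate candidate (sockets, cores, threads) triples for the flavor's single vCPU under the default 65536 per-dimension limits, ending with the lone 1:1:1 topology. A rough sketch of that enumeration step, hedged: nova's real code in nova/virt/hardware.py also folds in flavor/image preferences and NUMA constraints.

    import itertools
    from collections import namedtuple

    Topo = namedtuple("Topo", "sockets cores threads")

    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536,
                            max_threads=65536):
        # Keep every triple whose product is exactly the vCPU count and
        # which fits inside the per-dimension limits.
        return [Topo(s, c, t)
                for s, c, t in itertools.product(
                    range(1, min(vcpus, max_sockets) + 1),
                    range(1, min(vcpus, max_cores) + 1),
                    range(1, min(vcpus, max_threads) + 1))
                if s * c * t == vcpus]

    print(possible_topologies(1))  # [Topo(sockets=1, cores=1, threads=1)]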
Nov 24 10:03:58 compute-1 nova_compute[230010]: 2025-11-24 10:03:58.734 230014 DEBUG oslo_concurrency.processutils [None req-5d5b5cfb-a9b5-427c-91c6-3b86a939f388 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 10:03:58 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:03:58 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:03:58 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:03:58.789 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:03:58 compute-1 sudo[243505]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 10:03:58 compute-1 sudo[243505]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 10:03:58 compute-1 sudo[243505]: pam_unix(sudo:session): session closed for user root
Nov 24 10:03:58 compute-1 sudo[243530]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Nov 24 10:03:58 compute-1 sudo[243530]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 10:03:59 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Nov 24 10:03:59 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3514165054' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 24 10:03:59 compute-1 nova_compute[230010]: 2025-11-24 10:03:59.184 230014 DEBUG oslo_concurrency.processutils [None req-5d5b5cfb-a9b5-427c-91c6-3b86a939f388 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.449s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
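Note: nova discovers the Ceph monitor addresses for the RBD backend by shelling out to `ceph mon dump --format=json`, exactly as the processutils lines above show. A sketch of that discovery, assuming the "mons[*].addr" layout recent Ceph releases emit ("192.168.122.100:6789/0", with a trailing nonce to strip); the addresses recovered here reappear as the <host> elements in the domain XML further down:

    import json
    import subprocess

    def get_mon_addrs(conf="/etc/ceph/ceph.conf", rados_id="openstack"):
        out = subprocess.check_output(
            ["ceph", "mon", "dump", "--format=json",
             "--id", rados_id, "--conf", conf])
        # "addr" is "ip:port/nonce"; keep only "ip:port".
        return [m["addr"].rsplit("/", 1)[0] for m in json.loads(out)["mons"]]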
Nov 24 10:03:59 compute-1 nova_compute[230010]: 2025-11-24 10:03:59.212 230014 DEBUG nova.storage.rbd_utils [None req-5d5b5cfb-a9b5-427c-91c6-3b86a939f388 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] rbd image 89909dc1-a7db-4cca-b837-5340532de97b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 24 10:03:59 compute-1 nova_compute[230010]: 2025-11-24 10:03:59.217 230014 DEBUG oslo_concurrency.processutils [None req-5d5b5cfb-a9b5-427c-91c6-3b86a939f388 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 10:03:59 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:03:59 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:03:59 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:03:59.373 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:03:59 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:03:59 compute-1 sudo[243530]: pam_unix(sudo:session): session closed for user root
Nov 24 10:03:59 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Nov 24 10:03:59 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3484212954' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 24 10:03:59 compute-1 nova_compute[230010]: 2025-11-24 10:03:59.700 230014 DEBUG oslo_concurrency.processutils [None req-5d5b5cfb-a9b5-427c-91c6-3b86a939f388 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.483s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 10:03:59 compute-1 nova_compute[230010]: 2025-11-24 10:03:59.703 230014 DEBUG nova.virt.libvirt.vif [None req-5d5b5cfb-a9b5-427c-91c6-3b86a939f388 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-24T10:03:50Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1741107609',display_name='tempest-TestNetworkBasicOps-server-1741107609',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1741107609',id=13,image_ref='6ef14bdf-4f04-4400-8040-4409d9d5271e',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGYO1n2WM+59O3PRTf5fCo1d78/BH3Mc8BBXRdPASueO+JvuIAgEpEuVwsO0rsx8rIXsxHGWMhGFwwjbkrft3uNRj4gBBGDnbQiVDk9hyHkutBhfgKKfMw5qeDHykomezA==',key_name='tempest-TestNetworkBasicOps-1685206173',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='94d069fc040647d5a6e54894eec915fe',ramdisk_id='',reservation_id='r-pxhddr0b',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6ef14bdf-4f04-4400-8040-4409d9d5271e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1844071378',owner_user_name='tempest-TestNetworkBasicOps-1844071378-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-24T10:03:52Z,user_data=None,user_id='43f79ff3105e4372a3c095e8057d4f1f',uuid=89909dc1-a7db-4cca-b837-5340532de97b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "891e7944-832b-408f-b645-6f51de733021", "address": "fa:16:3e:a8:16:2d", "network": {"id": "22748050-40a9-4373-8c95-5da36c909edc", "bridge": "br-int", "label": "tempest-network-smoke--696376811", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94d069fc040647d5a6e54894eec915fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap891e7944-83", "ovs_interfaceid": "891e7944-832b-408f-b645-6f51de733021", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 24 10:03:59 compute-1 nova_compute[230010]: 2025-11-24 10:03:59.703 230014 DEBUG nova.network.os_vif_util [None req-5d5b5cfb-a9b5-427c-91c6-3b86a939f388 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Converting VIF {"id": "891e7944-832b-408f-b645-6f51de733021", "address": "fa:16:3e:a8:16:2d", "network": {"id": "22748050-40a9-4373-8c95-5da36c909edc", "bridge": "br-int", "label": "tempest-network-smoke--696376811", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94d069fc040647d5a6e54894eec915fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap891e7944-83", "ovs_interfaceid": "891e7944-832b-408f-b645-6f51de733021", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 24 10:03:59 compute-1 nova_compute[230010]: 2025-11-24 10:03:59.705 230014 DEBUG nova.network.os_vif_util [None req-5d5b5cfb-a9b5-427c-91c6-3b86a939f388 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a8:16:2d,bridge_name='br-int',has_traffic_filtering=True,id=891e7944-832b-408f-b645-6f51de733021,network=Network(22748050-40a9-4373-8c95-5da36c909edc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap891e7944-83') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 24 10:03:59 compute-1 nova_compute[230010]: 2025-11-24 10:03:59.706 230014 DEBUG nova.objects.instance [None req-5d5b5cfb-a9b5-427c-91c6-3b86a939f388 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lazy-loading 'pci_devices' on Instance uuid 89909dc1-a7db-4cca-b837-5340532de97b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 24 10:03:59 compute-1 nova_compute[230010]: 2025-11-24 10:03:59.719 230014 DEBUG nova.virt.libvirt.driver [None req-5d5b5cfb-a9b5-427c-91c6-3b86a939f388 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 89909dc1-a7db-4cca-b837-5340532de97b] End _get_guest_xml xml=<domain type="kvm">
Nov 24 10:03:59 compute-1 nova_compute[230010]:   <uuid>89909dc1-a7db-4cca-b837-5340532de97b</uuid>
Nov 24 10:03:59 compute-1 nova_compute[230010]:   <name>instance-0000000d</name>
Nov 24 10:03:59 compute-1 nova_compute[230010]:   <memory>131072</memory>
Nov 24 10:03:59 compute-1 nova_compute[230010]:   <vcpu>1</vcpu>
Nov 24 10:03:59 compute-1 nova_compute[230010]:   <metadata>
Nov 24 10:03:59 compute-1 nova_compute[230010]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 24 10:03:59 compute-1 nova_compute[230010]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 24 10:03:59 compute-1 nova_compute[230010]:       <nova:name>tempest-TestNetworkBasicOps-server-1741107609</nova:name>
Nov 24 10:03:59 compute-1 nova_compute[230010]:       <nova:creationTime>2025-11-24 10:03:58</nova:creationTime>
Nov 24 10:03:59 compute-1 nova_compute[230010]:       <nova:flavor name="m1.nano">
Nov 24 10:03:59 compute-1 nova_compute[230010]:         <nova:memory>128</nova:memory>
Nov 24 10:03:59 compute-1 nova_compute[230010]:         <nova:disk>1</nova:disk>
Nov 24 10:03:59 compute-1 nova_compute[230010]:         <nova:swap>0</nova:swap>
Nov 24 10:03:59 compute-1 nova_compute[230010]:         <nova:ephemeral>0</nova:ephemeral>
Nov 24 10:03:59 compute-1 nova_compute[230010]:         <nova:vcpus>1</nova:vcpus>
Nov 24 10:03:59 compute-1 nova_compute[230010]:       </nova:flavor>
Nov 24 10:03:59 compute-1 nova_compute[230010]:       <nova:owner>
Nov 24 10:03:59 compute-1 nova_compute[230010]:         <nova:user uuid="43f79ff3105e4372a3c095e8057d4f1f">tempest-TestNetworkBasicOps-1844071378-project-member</nova:user>
Nov 24 10:03:59 compute-1 nova_compute[230010]:         <nova:project uuid="94d069fc040647d5a6e54894eec915fe">tempest-TestNetworkBasicOps-1844071378</nova:project>
Nov 24 10:03:59 compute-1 nova_compute[230010]:       </nova:owner>
Nov 24 10:03:59 compute-1 nova_compute[230010]:       <nova:root type="image" uuid="6ef14bdf-4f04-4400-8040-4409d9d5271e"/>
Nov 24 10:03:59 compute-1 nova_compute[230010]:       <nova:ports>
Nov 24 10:03:59 compute-1 nova_compute[230010]:         <nova:port uuid="891e7944-832b-408f-b645-6f51de733021">
Nov 24 10:03:59 compute-1 nova_compute[230010]:           <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Nov 24 10:03:59 compute-1 nova_compute[230010]:         </nova:port>
Nov 24 10:03:59 compute-1 nova_compute[230010]:       </nova:ports>
Nov 24 10:03:59 compute-1 nova_compute[230010]:     </nova:instance>
Nov 24 10:03:59 compute-1 nova_compute[230010]:   </metadata>
Nov 24 10:03:59 compute-1 nova_compute[230010]:   <sysinfo type="smbios">
Nov 24 10:03:59 compute-1 nova_compute[230010]:     <system>
Nov 24 10:03:59 compute-1 nova_compute[230010]:       <entry name="manufacturer">RDO</entry>
Nov 24 10:03:59 compute-1 nova_compute[230010]:       <entry name="product">OpenStack Compute</entry>
Nov 24 10:03:59 compute-1 nova_compute[230010]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 24 10:03:59 compute-1 nova_compute[230010]:       <entry name="serial">89909dc1-a7db-4cca-b837-5340532de97b</entry>
Nov 24 10:03:59 compute-1 nova_compute[230010]:       <entry name="uuid">89909dc1-a7db-4cca-b837-5340532de97b</entry>
Nov 24 10:03:59 compute-1 nova_compute[230010]:       <entry name="family">Virtual Machine</entry>
Nov 24 10:03:59 compute-1 nova_compute[230010]:     </system>
Nov 24 10:03:59 compute-1 nova_compute[230010]:   </sysinfo>
Nov 24 10:03:59 compute-1 nova_compute[230010]:   <os>
Nov 24 10:03:59 compute-1 nova_compute[230010]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 24 10:03:59 compute-1 nova_compute[230010]:     <boot dev="hd"/>
Nov 24 10:03:59 compute-1 nova_compute[230010]:     <smbios mode="sysinfo"/>
Nov 24 10:03:59 compute-1 nova_compute[230010]:   </os>
Nov 24 10:03:59 compute-1 nova_compute[230010]:   <features>
Nov 24 10:03:59 compute-1 nova_compute[230010]:     <acpi/>
Nov 24 10:03:59 compute-1 nova_compute[230010]:     <apic/>
Nov 24 10:03:59 compute-1 nova_compute[230010]:     <vmcoreinfo/>
Nov 24 10:03:59 compute-1 nova_compute[230010]:   </features>
Nov 24 10:03:59 compute-1 nova_compute[230010]:   <clock offset="utc">
Nov 24 10:03:59 compute-1 nova_compute[230010]:     <timer name="pit" tickpolicy="delay"/>
Nov 24 10:03:59 compute-1 nova_compute[230010]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 24 10:03:59 compute-1 nova_compute[230010]:     <timer name="hpet" present="no"/>
Nov 24 10:03:59 compute-1 nova_compute[230010]:   </clock>
Nov 24 10:03:59 compute-1 nova_compute[230010]:   <cpu mode="host-model" match="exact">
Nov 24 10:03:59 compute-1 nova_compute[230010]:     <topology sockets="1" cores="1" threads="1"/>
Nov 24 10:03:59 compute-1 nova_compute[230010]:   </cpu>
Nov 24 10:03:59 compute-1 nova_compute[230010]:   <devices>
Nov 24 10:03:59 compute-1 nova_compute[230010]:     <disk type="network" device="disk">
Nov 24 10:03:59 compute-1 nova_compute[230010]:       <driver type="raw" cache="none"/>
Nov 24 10:03:59 compute-1 nova_compute[230010]:       <source protocol="rbd" name="vms/89909dc1-a7db-4cca-b837-5340532de97b_disk">
Nov 24 10:03:59 compute-1 nova_compute[230010]:         <host name="192.168.122.100" port="6789"/>
Nov 24 10:03:59 compute-1 nova_compute[230010]:         <host name="192.168.122.102" port="6789"/>
Nov 24 10:03:59 compute-1 nova_compute[230010]:         <host name="192.168.122.101" port="6789"/>
Nov 24 10:03:59 compute-1 nova_compute[230010]:       </source>
Nov 24 10:03:59 compute-1 nova_compute[230010]:       <auth username="openstack">
Nov 24 10:03:59 compute-1 nova_compute[230010]:         <secret type="ceph" uuid="84a084c3-61a7-5de7-8207-1f88efa59a64"/>
Nov 24 10:03:59 compute-1 nova_compute[230010]:       </auth>
Nov 24 10:03:59 compute-1 nova_compute[230010]:       <target dev="vda" bus="virtio"/>
Nov 24 10:03:59 compute-1 nova_compute[230010]:     </disk>
Nov 24 10:03:59 compute-1 nova_compute[230010]:     <disk type="network" device="cdrom">
Nov 24 10:03:59 compute-1 nova_compute[230010]:       <driver type="raw" cache="none"/>
Nov 24 10:03:59 compute-1 nova_compute[230010]:       <source protocol="rbd" name="vms/89909dc1-a7db-4cca-b837-5340532de97b_disk.config">
Nov 24 10:03:59 compute-1 nova_compute[230010]:         <host name="192.168.122.100" port="6789"/>
Nov 24 10:03:59 compute-1 nova_compute[230010]:         <host name="192.168.122.102" port="6789"/>
Nov 24 10:03:59 compute-1 nova_compute[230010]:         <host name="192.168.122.101" port="6789"/>
Nov 24 10:03:59 compute-1 nova_compute[230010]:       </source>
Nov 24 10:03:59 compute-1 nova_compute[230010]:       <auth username="openstack">
Nov 24 10:03:59 compute-1 nova_compute[230010]:         <secret type="ceph" uuid="84a084c3-61a7-5de7-8207-1f88efa59a64"/>
Nov 24 10:03:59 compute-1 nova_compute[230010]:       </auth>
Nov 24 10:03:59 compute-1 nova_compute[230010]:       <target dev="sda" bus="sata"/>
Nov 24 10:03:59 compute-1 nova_compute[230010]:     </disk>
Nov 24 10:03:59 compute-1 nova_compute[230010]:     <interface type="ethernet">
Nov 24 10:03:59 compute-1 nova_compute[230010]:       <mac address="fa:16:3e:a8:16:2d"/>
Nov 24 10:03:59 compute-1 nova_compute[230010]:       <model type="virtio"/>
Nov 24 10:03:59 compute-1 nova_compute[230010]:       <driver name="vhost" rx_queue_size="512"/>
Nov 24 10:03:59 compute-1 nova_compute[230010]:       <mtu size="1442"/>
Nov 24 10:03:59 compute-1 nova_compute[230010]:       <target dev="tap891e7944-83"/>
Nov 24 10:03:59 compute-1 nova_compute[230010]:     </interface>
Nov 24 10:03:59 compute-1 nova_compute[230010]:     <serial type="pty">
Nov 24 10:03:59 compute-1 nova_compute[230010]:       <log file="/var/lib/nova/instances/89909dc1-a7db-4cca-b837-5340532de97b/console.log" append="off"/>
Nov 24 10:03:59 compute-1 nova_compute[230010]:     </serial>
Nov 24 10:03:59 compute-1 nova_compute[230010]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 24 10:03:59 compute-1 nova_compute[230010]:     <video>
Nov 24 10:03:59 compute-1 nova_compute[230010]:       <model type="virtio"/>
Nov 24 10:03:59 compute-1 nova_compute[230010]:     </video>
Nov 24 10:03:59 compute-1 nova_compute[230010]:     <input type="tablet" bus="usb"/>
Nov 24 10:03:59 compute-1 nova_compute[230010]:     <rng model="virtio">
Nov 24 10:03:59 compute-1 nova_compute[230010]:       <backend model="random">/dev/urandom</backend>
Nov 24 10:03:59 compute-1 nova_compute[230010]:     </rng>
Nov 24 10:03:59 compute-1 nova_compute[230010]:     <controller type="pci" model="pcie-root"/>
Nov 24 10:03:59 compute-1 nova_compute[230010]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 10:03:59 compute-1 nova_compute[230010]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 10:03:59 compute-1 nova_compute[230010]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 10:03:59 compute-1 nova_compute[230010]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 10:03:59 compute-1 nova_compute[230010]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 10:03:59 compute-1 nova_compute[230010]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 10:03:59 compute-1 nova_compute[230010]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 10:03:59 compute-1 nova_compute[230010]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 10:03:59 compute-1 nova_compute[230010]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 10:03:59 compute-1 nova_compute[230010]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 10:03:59 compute-1 nova_compute[230010]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 10:03:59 compute-1 nova_compute[230010]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 10:03:59 compute-1 nova_compute[230010]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 10:03:59 compute-1 nova_compute[230010]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 10:03:59 compute-1 nova_compute[230010]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 10:03:59 compute-1 nova_compute[230010]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 10:03:59 compute-1 nova_compute[230010]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 10:03:59 compute-1 nova_compute[230010]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 10:03:59 compute-1 nova_compute[230010]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 10:03:59 compute-1 nova_compute[230010]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 10:03:59 compute-1 nova_compute[230010]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 10:03:59 compute-1 nova_compute[230010]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 10:03:59 compute-1 nova_compute[230010]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 10:03:59 compute-1 nova_compute[230010]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 10:03:59 compute-1 nova_compute[230010]:     <controller type="usb" index="0"/>
Nov 24 10:03:59 compute-1 nova_compute[230010]:     <memballoon model="virtio">
Nov 24 10:03:59 compute-1 nova_compute[230010]:       <stats period="10"/>
Nov 24 10:03:59 compute-1 nova_compute[230010]:     </memballoon>
Nov 24 10:03:59 compute-1 nova_compute[230010]:   </devices>
Nov 24 10:03:59 compute-1 nova_compute[230010]: </domain>
Nov 24 10:03:59 compute-1 nova_compute[230010]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
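Note: in the XML just dumped, libvirt's <memory> element defaults to KiB, so <memory>131072</memory> is the flavor's memory_mb=128, and <vcpu>1</vcpu> matches vcpus=1; both disks point at RBD using the monitor addresses gathered earlier. A small cross-check one could run against a saved copy of that XML (the filename "domain.xml" is hypothetical):

    import xml.etree.ElementTree as ET

    dom = ET.parse("domain.xml").getroot()
    mem_kib = int(dom.findtext("memory"))   # libvirt counts in KiB here
    assert mem_kib == 128 * 1024            # 131072 KiB == 128 MiB (m1.nano)
    assert int(dom.findtext("vcpu")) == 1
    print(dom.find("os/type").get("machine"))  # "q35", from image_hw_machine_type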
Nov 24 10:03:59 compute-1 nova_compute[230010]: 2025-11-24 10:03:59.721 230014 DEBUG nova.compute.manager [None req-5d5b5cfb-a9b5-427c-91c6-3b86a939f388 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 89909dc1-a7db-4cca-b837-5340532de97b] Preparing to wait for external event network-vif-plugged-891e7944-832b-408f-b645-6f51de733021 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 24 10:03:59 compute-1 nova_compute[230010]: 2025-11-24 10:03:59.721 230014 DEBUG oslo_concurrency.lockutils [None req-5d5b5cfb-a9b5-427c-91c6-3b86a939f388 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Acquiring lock "89909dc1-a7db-4cca-b837-5340532de97b-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 10:03:59 compute-1 nova_compute[230010]: 2025-11-24 10:03:59.722 230014 DEBUG oslo_concurrency.lockutils [None req-5d5b5cfb-a9b5-427c-91c6-3b86a939f388 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lock "89909dc1-a7db-4cca-b837-5340532de97b-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 10:03:59 compute-1 nova_compute[230010]: 2025-11-24 10:03:59.722 230014 DEBUG oslo_concurrency.lockutils [None req-5d5b5cfb-a9b5-427c-91c6-3b86a939f388 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lock "89909dc1-a7db-4cca-b837-5340532de97b-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 10:03:59 compute-1 nova_compute[230010]: 2025-11-24 10:03:59.722 230014 DEBUG nova.virt.libvirt.vif [None req-5d5b5cfb-a9b5-427c-91c6-3b86a939f388 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-24T10:03:50Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1741107609',display_name='tempest-TestNetworkBasicOps-server-1741107609',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1741107609',id=13,image_ref='6ef14bdf-4f04-4400-8040-4409d9d5271e',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGYO1n2WM+59O3PRTf5fCo1d78/BH3Mc8BBXRdPASueO+JvuIAgEpEuVwsO0rsx8rIXsxHGWMhGFwwjbkrft3uNRj4gBBGDnbQiVDk9hyHkutBhfgKKfMw5qeDHykomezA==',key_name='tempest-TestNetworkBasicOps-1685206173',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='94d069fc040647d5a6e54894eec915fe',ramdisk_id='',reservation_id='r-pxhddr0b',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6ef14bdf-4f04-4400-8040-4409d9d5271e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1844071378',owner_user_name='tempest-TestNetworkBasicOps-1844071378-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-24T10:03:52Z,user_data=None,user_id='43f79ff3105e4372a3c095e8057d4f1f',uuid=89909dc1-a7db-4cca-b837-5340532de97b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "891e7944-832b-408f-b645-6f51de733021", "address": "fa:16:3e:a8:16:2d", "network": {"id": "22748050-40a9-4373-8c95-5da36c909edc", "bridge": "br-int", "label": "tempest-network-smoke--696376811", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94d069fc040647d5a6e54894eec915fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap891e7944-83", "ovs_interfaceid": "891e7944-832b-408f-b645-6f51de733021", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 24 10:03:59 compute-1 nova_compute[230010]: 2025-11-24 10:03:59.723 230014 DEBUG nova.network.os_vif_util [None req-5d5b5cfb-a9b5-427c-91c6-3b86a939f388 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Converting VIF {"id": "891e7944-832b-408f-b645-6f51de733021", "address": "fa:16:3e:a8:16:2d", "network": {"id": "22748050-40a9-4373-8c95-5da36c909edc", "bridge": "br-int", "label": "tempest-network-smoke--696376811", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94d069fc040647d5a6e54894eec915fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap891e7944-83", "ovs_interfaceid": "891e7944-832b-408f-b645-6f51de733021", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 24 10:03:59 compute-1 nova_compute[230010]: 2025-11-24 10:03:59.723 230014 DEBUG nova.network.os_vif_util [None req-5d5b5cfb-a9b5-427c-91c6-3b86a939f388 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a8:16:2d,bridge_name='br-int',has_traffic_filtering=True,id=891e7944-832b-408f-b645-6f51de733021,network=Network(22748050-40a9-4373-8c95-5da36c909edc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap891e7944-83') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 24 10:03:59 compute-1 nova_compute[230010]: 2025-11-24 10:03:59.724 230014 DEBUG os_vif [None req-5d5b5cfb-a9b5-427c-91c6-3b86a939f388 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a8:16:2d,bridge_name='br-int',has_traffic_filtering=True,id=891e7944-832b-408f-b645-6f51de733021,network=Network(22748050-40a9-4373-8c95-5da36c909edc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap891e7944-83') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 24 10:03:59 compute-1 nova_compute[230010]: 2025-11-24 10:03:59.724 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:03:59 compute-1 nova_compute[230010]: 2025-11-24 10:03:59.725 230014 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 24 10:03:59 compute-1 nova_compute[230010]: 2025-11-24 10:03:59.725 230014 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 24 10:03:59 compute-1 nova_compute[230010]: 2025-11-24 10:03:59.728 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:03:59 compute-1 nova_compute[230010]: 2025-11-24 10:03:59.728 230014 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap891e7944-83, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 24 10:03:59 compute-1 nova_compute[230010]: 2025-11-24 10:03:59.728 230014 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap891e7944-83, col_values=(('external_ids', {'iface-id': '891e7944-832b-408f-b645-6f51de733021', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:a8:16:2d', 'vm-uuid': '89909dc1-a7db-4cca-b837-5340532de97b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
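Note: the two ovsdbapp commands above are what attach the tap device to br-int and tag its Interface row with the Neutron port ID so ovn-controller can claim it (see the "Claiming lport" lines further down). The same transaction expressed through ovs-vsctl, as a hedged illustration; nova itself talks to OVSDB via the native IDL, as logged:

    import subprocess

    subprocess.check_call([
        "ovs-vsctl", "--may-exist", "add-port", "br-int", "tap891e7944-83",
        "--", "set", "Interface", "tap891e7944-83",
        "external_ids:iface-id=891e7944-832b-408f-b645-6f51de733021",
        "external_ids:iface-status=active",
        "external_ids:attached-mac=fa:16:3e:a8:16:2d",
        "external_ids:vm-uuid=89909dc1-a7db-4cca-b837-5340532de97b",
    ])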
Nov 24 10:03:59 compute-1 nova_compute[230010]: 2025-11-24 10:03:59.730 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:03:59 compute-1 NetworkManager[48870]: <info>  [1763978639.7308] manager: (tap891e7944-83): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/63)
Nov 24 10:03:59 compute-1 nova_compute[230010]: 2025-11-24 10:03:59.732 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 24 10:03:59 compute-1 nova_compute[230010]: 2025-11-24 10:03:59.743 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:03:59 compute-1 nova_compute[230010]: 2025-11-24 10:03:59.744 230014 INFO os_vif [None req-5d5b5cfb-a9b5-427c-91c6-3b86a939f388 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a8:16:2d,bridge_name='br-int',has_traffic_filtering=True,id=891e7944-832b-408f-b645-6f51de733021,network=Network(22748050-40a9-4373-8c95-5da36c909edc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap891e7944-83')
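Note: the plug sequence from "Plugging vif" to "Successfully plugged vif" goes through os-vif's public API. A skeletal sketch of that call path using values from this log; treat the field list as illustrative, since the real VIFOpenVSwitch shown in the converted-object lines above also carries a full Network object:

    import os_vif
    from os_vif import objects

    os_vif.initialize()  # loads the 'ovs' plugin among others

    instance = objects.instance_info.InstanceInfo(
        uuid="89909dc1-a7db-4cca-b837-5340532de97b",
        name="instance-0000000d")
    vif = objects.vif.VIFOpenVSwitch(
        id="891e7944-832b-408f-b645-6f51de733021",
        address="fa:16:3e:a8:16:2d",
        bridge_name="br-int",
        vif_name="tap891e7944-83",
        plugin="ovs")
    os_vif.plug(vif, instance)  # dispatches to the plugin named in the VIF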
Nov 24 10:03:59 compute-1 nova_compute[230010]: 2025-11-24 10:03:59.792 230014 DEBUG nova.virt.libvirt.driver [None req-5d5b5cfb-a9b5-427c-91c6-3b86a939f388 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 24 10:03:59 compute-1 nova_compute[230010]: 2025-11-24 10:03:59.793 230014 DEBUG nova.virt.libvirt.driver [None req-5d5b5cfb-a9b5-427c-91c6-3b86a939f388 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 24 10:03:59 compute-1 nova_compute[230010]: 2025-11-24 10:03:59.793 230014 DEBUG nova.virt.libvirt.driver [None req-5d5b5cfb-a9b5-427c-91c6-3b86a939f388 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] No VIF found with MAC fa:16:3e:a8:16:2d, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 24 10:03:59 compute-1 nova_compute[230010]: 2025-11-24 10:03:59.794 230014 INFO nova.virt.libvirt.driver [None req-5d5b5cfb-a9b5-427c-91c6-3b86a939f388 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 89909dc1-a7db-4cca-b837-5340532de97b] Using config drive
Nov 24 10:03:59 compute-1 nova_compute[230010]: 2025-11-24 10:03:59.821 230014 DEBUG nova.storage.rbd_utils [None req-5d5b5cfb-a9b5-427c-91c6-3b86a939f388 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] rbd image 89909dc1-a7db-4cca-b837-5340532de97b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 24 10:04:00 compute-1 ceph-mon[80009]: pgmap v1117: 353 pgs: 353 active+clean; 88 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Nov 24 10:04:00 compute-1 ceph-mon[80009]: from='client.? 192.168.122.101:0/3514165054' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 24 10:04:00 compute-1 ceph-mon[80009]: from='client.? 192.168.122.101:0/3484212954' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 24 10:04:00 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 10:04:00 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 10:04:00 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:04:00 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:04:00 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:04:00.792 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:04:01 compute-1 nova_compute[230010]: 2025-11-24 10:04:01.209 230014 INFO nova.virt.libvirt.driver [None req-5d5b5cfb-a9b5-427c-91c6-3b86a939f388 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 89909dc1-a7db-4cca-b837-5340532de97b] Creating config drive at /var/lib/nova/instances/89909dc1-a7db-4cca-b837-5340532de97b/disk.config
Nov 24 10:04:01 compute-1 nova_compute[230010]: 2025-11-24 10:04:01.215 230014 DEBUG oslo_concurrency.processutils [None req-5d5b5cfb-a9b5-427c-91c6-3b86a939f388 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/89909dc1-a7db-4cca-b837-5340532de97b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpgzomxnnk execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 10:04:01 compute-1 nova_compute[230010]: 2025-11-24 10:04:01.233 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:04:01 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 10:04:01 compute-1 nova_compute[230010]: 2025-11-24 10:04:01.342 230014 DEBUG oslo_concurrency.processutils [None req-5d5b5cfb-a9b5-427c-91c6-3b86a939f388 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/89909dc1-a7db-4cca-b837-5340532de97b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpgzomxnnk" returned: 0 in 0.127s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
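Note: the mkisofs call above packs the metadata staged in /tmp/tmpgzomxnnk into an ISO9660 config drive whose volume label is "config-2" (the -V flag); that label is how the guest locates the drive at boot. A guest-side sketch, assuming blkid is present in the image:

    import subprocess

    # Find the device carrying the config drive, then mount it and read
    # openstack/latest/meta_data.json from it.
    dev = subprocess.check_output(
        ["blkid", "-t", "LABEL=config-2", "-o", "device"], text=True).strip()
    print(dev)  # e.g. /dev/sr0 for the sata cdrom defined in the XML above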
Nov 24 10:04:01 compute-1 nova_compute[230010]: 2025-11-24 10:04:01.373 230014 DEBUG nova.storage.rbd_utils [None req-5d5b5cfb-a9b5-427c-91c6-3b86a939f388 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] rbd image 89909dc1-a7db-4cca-b837-5340532de97b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 24 10:04:01 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:04:01 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:04:01 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:04:01.376 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:04:01 compute-1 nova_compute[230010]: 2025-11-24 10:04:01.380 230014 DEBUG oslo_concurrency.processutils [None req-5d5b5cfb-a9b5-427c-91c6-3b86a939f388 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/89909dc1-a7db-4cca-b837-5340532de97b/disk.config 89909dc1-a7db-4cca-b837-5340532de97b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 10:04:01 compute-1 nova_compute[230010]: 2025-11-24 10:04:01.550 230014 DEBUG oslo_concurrency.processutils [None req-5d5b5cfb-a9b5-427c-91c6-3b86a939f388 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/89909dc1-a7db-4cca-b837-5340532de97b/disk.config 89909dc1-a7db-4cca-b837-5340532de97b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.170s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
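Note: because this deployment backs instances with RBD, the freshly built ISO is pushed into the "vms" pool via `rbd import` and the local copy deleted (next line). The same import expressed with the python-rbd bindings, as a hedged sketch; nova shells out instead, as logged:

    import os
    import rados
    import rbd

    SRC = "/var/lib/nova/instances/89909dc1-a7db-4cca-b837-5340532de97b/disk.config"
    NAME = "89909dc1-a7db-4cca-b837-5340532de97b_disk.config"

    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf", rados_id="openstack")
    cluster.connect()
    try:
        ioctx = cluster.open_ioctx("vms")
        try:
            # old_format=False creates a format-2 image (--image-format=2 above).
            rbd.RBD().create(ioctx, NAME, os.path.getsize(SRC), old_format=False)
            with rbd.Image(ioctx, NAME) as image, open(SRC, "rb") as f:
                offset = 0
                while chunk := f.read(4 * 1024 * 1024):
                    image.write(chunk, offset)
                    offset += len(chunk)
        finally:
            ioctx.close()
    finally:
        cluster.shutdown()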
Nov 24 10:04:01 compute-1 nova_compute[230010]: 2025-11-24 10:04:01.551 230014 INFO nova.virt.libvirt.driver [None req-5d5b5cfb-a9b5-427c-91c6-3b86a939f388 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 89909dc1-a7db-4cca-b837-5340532de97b] Deleting local config drive /var/lib/nova/instances/89909dc1-a7db-4cca-b837-5340532de97b/disk.config because it was imported into RBD.
Nov 24 10:04:01 compute-1 kernel: tap891e7944-83: entered promiscuous mode
Nov 24 10:04:01 compute-1 NetworkManager[48870]: <info>  [1763978641.6151] manager: (tap891e7944-83): new Tun device (/org/freedesktop/NetworkManager/Devices/64)
Nov 24 10:04:01 compute-1 ovn_controller[132966]: 2025-11-24T10:04:01Z|00096|binding|INFO|Claiming lport 891e7944-832b-408f-b645-6f51de733021 for this chassis.
Nov 24 10:04:01 compute-1 ovn_controller[132966]: 2025-11-24T10:04:01Z|00097|binding|INFO|891e7944-832b-408f-b645-6f51de733021: Claiming fa:16:3e:a8:16:2d 10.100.0.4
Nov 24 10:04:01 compute-1 nova_compute[230010]: 2025-11-24 10:04:01.616 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:04:01 compute-1 nova_compute[230010]: 2025-11-24 10:04:01.624 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:04:01 compute-1 ovn_metadata_agent[142331]: 2025-11-24 10:04:01.641 142336 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a8:16:2d 10.100.0.4'], port_security=['fa:16:3e:a8:16:2d 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '89909dc1-a7db-4cca-b837-5340532de97b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-22748050-40a9-4373-8c95-5da36c909edc', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '94d069fc040647d5a6e54894eec915fe', 'neutron:revision_number': '2', 'neutron:security_group_ids': '6dece4c3-fa7a-42ae-8b29-e0f3dfabd71c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=72482cca-2f03-4eb7-ab95-968e79999420, chassis=[<ovs.db.idl.Row object at 0x7f5c78678ac0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5c78678ac0>], logical_port=891e7944-832b-408f-b645-6f51de733021) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 24 10:04:01 compute-1 ovn_metadata_agent[142331]: 2025-11-24 10:04:01.642 142336 INFO neutron.agent.ovn.metadata.agent [-] Port 891e7944-832b-408f-b645-6f51de733021 in datapath 22748050-40a9-4373-8c95-5da36c909edc bound to our chassis
Nov 24 10:04:01 compute-1 ovn_metadata_agent[142331]: 2025-11-24 10:04:01.643 142336 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 22748050-40a9-4373-8c95-5da36c909edc
Nov 24 10:04:01 compute-1 systemd-machined[193537]: New machine qemu-6-instance-0000000d.
Nov 24 10:04:01 compute-1 ovn_metadata_agent[142331]: 2025-11-24 10:04:01.657 234803 DEBUG oslo.privsep.daemon [-] privsep: reply[e0072ee3-6398-4504-8b5c-093afdb230a7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 10:04:01 compute-1 ovn_metadata_agent[142331]: 2025-11-24 10:04:01.659 142336 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap22748050-41 in ovnmeta-22748050-40a9-4373-8c95-5da36c909edc namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 24 10:04:01 compute-1 ovn_metadata_agent[142331]: 2025-11-24 10:04:01.665 234803 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap22748050-40 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 24 10:04:01 compute-1 ovn_metadata_agent[142331]: 2025-11-24 10:04:01.665 234803 DEBUG oslo.privsep.daemon [-] privsep: reply[6c146692-fdcd-4565-86ca-aaeb1e8bf414]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 10:04:01 compute-1 ovn_metadata_agent[142331]: 2025-11-24 10:04:01.667 234803 DEBUG oslo.privsep.daemon [-] privsep: reply[6e09b760-01fd-46cc-9b00-3ef041e8081d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 10:04:01 compute-1 nova_compute[230010]: 2025-11-24 10:04:01.684 230014 DEBUG nova.network.neutron [req-e7981b58-a0dc-4ae8-aba3-16540414f4a0 req-1b964b4f-105a-4507-a7d8-17a2858bca47 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 89909dc1-a7db-4cca-b837-5340532de97b] Updated VIF entry in instance network info cache for port 891e7944-832b-408f-b645-6f51de733021. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 24 10:04:01 compute-1 nova_compute[230010]: 2025-11-24 10:04:01.685 230014 DEBUG nova.network.neutron [req-e7981b58-a0dc-4ae8-aba3-16540414f4a0 req-1b964b4f-105a-4507-a7d8-17a2858bca47 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 89909dc1-a7db-4cca-b837-5340532de97b] Updating instance_info_cache with network_info: [{"id": "891e7944-832b-408f-b645-6f51de733021", "address": "fa:16:3e:a8:16:2d", "network": {"id": "22748050-40a9-4373-8c95-5da36c909edc", "bridge": "br-int", "label": "tempest-network-smoke--696376811", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94d069fc040647d5a6e54894eec915fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap891e7944-83", "ovs_interfaceid": "891e7944-832b-408f-b645-6f51de733021", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 24 10:04:01 compute-1 ovn_metadata_agent[142331]: 2025-11-24 10:04:01.685 142476 DEBUG oslo.privsep.daemon [-] privsep: reply[28c997e2-f2ca-4255-8832-200b93d06bd7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 10:04:01 compute-1 ovn_controller[132966]: 2025-11-24T10:04:01Z|00098|binding|INFO|Setting lport 891e7944-832b-408f-b645-6f51de733021 ovn-installed in OVS
Nov 24 10:04:01 compute-1 ovn_controller[132966]: 2025-11-24T10:04:01Z|00099|binding|INFO|Setting lport 891e7944-832b-408f-b645-6f51de733021 up in Southbound
Nov 24 10:04:01 compute-1 nova_compute[230010]: 2025-11-24 10:04:01.690 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:04:01 compute-1 systemd[1]: Started Virtual Machine qemu-6-instance-0000000d.
Nov 24 10:04:01 compute-1 nova_compute[230010]: 2025-11-24 10:04:01.703 230014 DEBUG oslo_concurrency.lockutils [req-e7981b58-a0dc-4ae8-aba3-16540414f4a0 req-1b964b4f-105a-4507-a7d8-17a2858bca47 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Releasing lock "refresh_cache-89909dc1-a7db-4cca-b837-5340532de97b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 24 10:04:01 compute-1 systemd-udevd[243705]: Network interface NamePolicy= disabled on kernel command line.
Nov 24 10:04:01 compute-1 ovn_metadata_agent[142331]: 2025-11-24 10:04:01.718 234803 DEBUG oslo.privsep.daemon [-] privsep: reply[6608d886-dd11-4223-b074-bbe938401c56]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 10:04:01 compute-1 NetworkManager[48870]: <info>  [1763978641.7349] device (tap891e7944-83): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 24 10:04:01 compute-1 NetworkManager[48870]: <info>  [1763978641.7366] device (tap891e7944-83): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 24 10:04:01 compute-1 ovn_metadata_agent[142331]: 2025-11-24 10:04:01.756 234819 DEBUG oslo.privsep.daemon [-] privsep: reply[53757821-f549-4f53-95c7-647119c3e904]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 10:04:01 compute-1 ovn_metadata_agent[142331]: 2025-11-24 10:04:01.763 234803 DEBUG oslo.privsep.daemon [-] privsep: reply[ce205dea-ea10-4a5d-b678-a1153625aecf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 10:04:01 compute-1 NetworkManager[48870]: <info>  [1763978641.7647] manager: (tap22748050-40): new Veth device (/org/freedesktop/NetworkManager/Devices/65)
Nov 24 10:04:01 compute-1 ovn_metadata_agent[142331]: 2025-11-24 10:04:01.797 234819 DEBUG oslo.privsep.daemon [-] privsep: reply[64589377-f914-4bdb-b541-e31ef8a0e557]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 10:04:01 compute-1 ovn_metadata_agent[142331]: 2025-11-24 10:04:01.799 234819 DEBUG oslo.privsep.daemon [-] privsep: reply[2c60f8f8-9a1d-4313-8d57-e4edb9f25952]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 10:04:01 compute-1 NetworkManager[48870]: <info>  [1763978641.8250] device (tap22748050-40): carrier: link connected
Nov 24 10:04:01 compute-1 ovn_metadata_agent[142331]: 2025-11-24 10:04:01.830 234819 DEBUG oslo.privsep.daemon [-] privsep: reply[6a7149ea-abfe-4939-80fc-e00b05ee943d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 10:04:01 compute-1 ovn_metadata_agent[142331]: 2025-11-24 10:04:01.849 234803 DEBUG oslo.privsep.daemon [-] privsep: reply[21e2b08c-77e8-4031-94f0-6c99a4c6dc01]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap22748050-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:da:0f:06'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 32], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 458537, 'reachable_time': 18634, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 243735, 'error': None, 'target': 'ovnmeta-22748050-40a9-4373-8c95-5da36c909edc', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 10:04:01 compute-1 ovn_metadata_agent[142331]: 2025-11-24 10:04:01.864 234803 DEBUG oslo.privsep.daemon [-] privsep: reply[b28fa3a8-fc9a-4463-b59f-a4dcd516532b]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feda:f06'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 458537, 'tstamp': 458537}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 243736, 'error': None, 'target': 'ovnmeta-22748050-40a9-4373-8c95-5da36c909edc', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 10:04:01 compute-1 ovn_metadata_agent[142331]: 2025-11-24 10:04:01.886 234803 DEBUG oslo.privsep.daemon [-] privsep: reply[1a58ff42-afd6-48f7-b678-519cb12bbcf9]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap22748050-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:da:0f:06'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 32], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 458537, 'reachable_time': 18634, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 243737, 'error': None, 'target': 'ovnmeta-22748050-40a9-4373-8c95-5da36c909edc', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
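These two privsep replies are netlink RTM_NEWLINK dumps (pyroute2 message reprs) for the veth end tap22748050-41 inside the ovnmeta namespace, confirming carrier, MTU and the link-local address before the metadata proxy starts. A manual equivalent, assuming the namespace still exists on the host:

    $ ip netns exec ovnmeta-22748050-40a9-4373-8c95-5da36c909edc ip -details link show tap22748050-41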
Nov 24 10:04:01 compute-1 nova_compute[230010]: 2025-11-24 10:04:01.914 230014 DEBUG nova.compute.manager [req-4133837f-be31-4099-9855-37471ed8067f req-988c9255-8a34-427b-9f40-373dfdb0b360 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 89909dc1-a7db-4cca-b837-5340532de97b] Received event network-vif-plugged-891e7944-832b-408f-b645-6f51de733021 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 24 10:04:01 compute-1 nova_compute[230010]: 2025-11-24 10:04:01.914 230014 DEBUG oslo_concurrency.lockutils [req-4133837f-be31-4099-9855-37471ed8067f req-988c9255-8a34-427b-9f40-373dfdb0b360 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Acquiring lock "89909dc1-a7db-4cca-b837-5340532de97b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 10:04:01 compute-1 nova_compute[230010]: 2025-11-24 10:04:01.915 230014 DEBUG oslo_concurrency.lockutils [req-4133837f-be31-4099-9855-37471ed8067f req-988c9255-8a34-427b-9f40-373dfdb0b360 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Lock "89909dc1-a7db-4cca-b837-5340532de97b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 10:04:01 compute-1 nova_compute[230010]: 2025-11-24 10:04:01.915 230014 DEBUG oslo_concurrency.lockutils [req-4133837f-be31-4099-9855-37471ed8067f req-988c9255-8a34-427b-9f40-373dfdb0b360 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Lock "89909dc1-a7db-4cca-b837-5340532de97b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 10:04:01 compute-1 nova_compute[230010]: 2025-11-24 10:04:01.916 230014 DEBUG nova.compute.manager [req-4133837f-be31-4099-9855-37471ed8067f req-988c9255-8a34-427b-9f40-373dfdb0b360 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 89909dc1-a7db-4cca-b837-5340532de97b] Processing event network-vif-plugged-891e7944-832b-408f-b645-6f51de733021 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 24 10:04:01 compute-1 ovn_metadata_agent[142331]: 2025-11-24 10:04:01.916 234803 DEBUG oslo.privsep.daemon [-] privsep: reply[8b1b53f9-1ba0-4799-a319-8f27b9a9bd16]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 10:04:01 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Nov 24 10:04:01 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/271928424' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 24 10:04:01 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Nov 24 10:04:01 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/271928424' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
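The mon_command/dispatch pairs above show entity='client.openstack' polling cluster capacity and the volumes pool quota, a pattern consistent with the Cinder RBD driver's periodic capacity reporting. The same data can be pulled by hand under that identity (ceph.conf and keyring locations are deployment-specific):

    $ ceph --id openstack df -f json
    $ ceph --id openstack osd pool get-quota volumes -f json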
Nov 24 10:04:01 compute-1 ovn_metadata_agent[142331]: 2025-11-24 10:04:01.987 234803 DEBUG oslo.privsep.daemon [-] privsep: reply[f2061f2f-8c44-43d9-8f2d-06634ab35870]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 10:04:01 compute-1 ovn_metadata_agent[142331]: 2025-11-24 10:04:01.989 142336 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap22748050-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 24 10:04:01 compute-1 ovn_metadata_agent[142331]: 2025-11-24 10:04:01.989 142336 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 24 10:04:01 compute-1 ovn_metadata_agent[142331]: 2025-11-24 10:04:01.990 142336 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap22748050-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 24 10:04:02 compute-1 kernel: tap22748050-40: entered promiscuous mode
Nov 24 10:04:02 compute-1 NetworkManager[48870]: <info>  [1763978642.0280] manager: (tap22748050-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/66)
Nov 24 10:04:02 compute-1 nova_compute[230010]: 2025-11-24 10:04:02.024 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:04:02 compute-1 ovn_metadata_agent[142331]: 2025-11-24 10:04:02.035 142336 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap22748050-40, col_values=(('external_ids', {'iface-id': 'c9e1a544-3313-45b9-9f1e-5b8ba7d7cc61'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
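The three ovsdbapp transactions above (DelPortCommand, AddPortCommand, DbSetCommand) map one-to-one onto plain ovs-vsctl calls, with may_exist/if_exists becoming --may-exist/--if-exists; setting external_ids:iface-id is what prompts ovn-controller to (re)bind the interface to the matching Port_Binding, as the Releasing lport line that follows shows:

    $ ovs-vsctl --if-exists del-port br-ex tap22748050-40
    $ ovs-vsctl --may-exist add-port br-int tap22748050-40
    $ ovs-vsctl set Interface tap22748050-40 external_ids:iface-id=c9e1a544-3313-45b9-9f1e-5b8ba7d7cc61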
Nov 24 10:04:02 compute-1 ovn_controller[132966]: 2025-11-24T10:04:02Z|00100|binding|INFO|Releasing lport c9e1a544-3313-45b9-9f1e-5b8ba7d7cc61 from this chassis (sb_readonly=0)
Nov 24 10:04:02 compute-1 nova_compute[230010]: 2025-11-24 10:04:02.038 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:04:02 compute-1 ovn_metadata_agent[142331]: 2025-11-24 10:04:02.039 142336 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/22748050-40a9-4373-8c95-5da36c909edc.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/22748050-40a9-4373-8c95-5da36c909edc.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 24 10:04:02 compute-1 ovn_metadata_agent[142331]: 2025-11-24 10:04:02.039 234803 DEBUG oslo.privsep.daemon [-] privsep: reply[80743c0f-e055-4295-a48a-e345230442bb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 10:04:02 compute-1 ovn_metadata_agent[142331]: 2025-11-24 10:04:02.041 142336 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 24 10:04:02 compute-1 ovn_metadata_agent[142331]: global
Nov 24 10:04:02 compute-1 ovn_metadata_agent[142331]:     log         /dev/log local0 debug
Nov 24 10:04:02 compute-1 ovn_metadata_agent[142331]:     log-tag     haproxy-metadata-proxy-22748050-40a9-4373-8c95-5da36c909edc
Nov 24 10:04:02 compute-1 ovn_metadata_agent[142331]:     user        root
Nov 24 10:04:02 compute-1 ovn_metadata_agent[142331]:     group       root
Nov 24 10:04:02 compute-1 ovn_metadata_agent[142331]:     maxconn     1024
Nov 24 10:04:02 compute-1 ovn_metadata_agent[142331]:     pidfile     /var/lib/neutron/external/pids/22748050-40a9-4373-8c95-5da36c909edc.pid.haproxy
Nov 24 10:04:02 compute-1 ovn_metadata_agent[142331]:     daemon
Nov 24 10:04:02 compute-1 ovn_metadata_agent[142331]: 
Nov 24 10:04:02 compute-1 ovn_metadata_agent[142331]: defaults
Nov 24 10:04:02 compute-1 ovn_metadata_agent[142331]:     log global
Nov 24 10:04:02 compute-1 ovn_metadata_agent[142331]:     mode http
Nov 24 10:04:02 compute-1 ovn_metadata_agent[142331]:     option httplog
Nov 24 10:04:02 compute-1 ovn_metadata_agent[142331]:     option dontlognull
Nov 24 10:04:02 compute-1 ovn_metadata_agent[142331]:     option http-server-close
Nov 24 10:04:02 compute-1 ovn_metadata_agent[142331]:     option forwardfor
Nov 24 10:04:02 compute-1 ovn_metadata_agent[142331]:     retries                 3
Nov 24 10:04:02 compute-1 ovn_metadata_agent[142331]:     timeout http-request    30s
Nov 24 10:04:02 compute-1 ovn_metadata_agent[142331]:     timeout connect         30s
Nov 24 10:04:02 compute-1 ovn_metadata_agent[142331]:     timeout client          32s
Nov 24 10:04:02 compute-1 ovn_metadata_agent[142331]:     timeout server          32s
Nov 24 10:04:02 compute-1 ovn_metadata_agent[142331]:     timeout http-keep-alive 30s
Nov 24 10:04:02 compute-1 ovn_metadata_agent[142331]: 
Nov 24 10:04:02 compute-1 ovn_metadata_agent[142331]: 
Nov 24 10:04:02 compute-1 ovn_metadata_agent[142331]: listen listener
Nov 24 10:04:02 compute-1 ovn_metadata_agent[142331]:     bind 169.254.169.254:80
Nov 24 10:04:02 compute-1 ovn_metadata_agent[142331]:     server metadata /var/lib/neutron/metadata_proxy
Nov 24 10:04:02 compute-1 ovn_metadata_agent[142331]:     http-request add-header X-OVN-Network-ID 22748050-40a9-4373-8c95-5da36c909edc
Nov 24 10:04:02 compute-1 ovn_metadata_agent[142331]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
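The agent renders the haproxy_cfg above to the .conf path used in the rootwrap invocation on the next line, then launches haproxy inside the namespace. If the proxy ever fails to come up, the rendered file can be syntax-checked in place with haproxy's check mode:

    $ haproxy -c -f /var/lib/neutron/ovn-metadata-proxy/22748050-40a9-4373-8c95-5da36c909edc.conf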
Nov 24 10:04:02 compute-1 ovn_metadata_agent[142331]: 2025-11-24 10:04:02.042 142336 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-22748050-40a9-4373-8c95-5da36c909edc', 'env', 'PROCESS_TAG=haproxy-22748050-40a9-4373-8c95-5da36c909edc', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/22748050-40a9-4373-8c95-5da36c909edc.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 24 10:04:02 compute-1 nova_compute[230010]: 2025-11-24 10:04:02.052 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:04:02 compute-1 nova_compute[230010]: 2025-11-24 10:04:02.248 230014 DEBUG nova.virt.driver [None req-7781f489-90a3-45f2-98f2-bf7ccb5c1239 - - - - - -] Emitting event <LifecycleEvent: 1763978642.24744, 89909dc1-a7db-4cca-b837-5340532de97b => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 24 10:04:02 compute-1 nova_compute[230010]: 2025-11-24 10:04:02.248 230014 INFO nova.compute.manager [None req-7781f489-90a3-45f2-98f2-bf7ccb5c1239 - - - - - -] [instance: 89909dc1-a7db-4cca-b837-5340532de97b] VM Started (Lifecycle Event)
Nov 24 10:04:02 compute-1 nova_compute[230010]: 2025-11-24 10:04:02.250 230014 DEBUG nova.compute.manager [None req-5d5b5cfb-a9b5-427c-91c6-3b86a939f388 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 89909dc1-a7db-4cca-b837-5340532de97b] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 24 10:04:02 compute-1 nova_compute[230010]: 2025-11-24 10:04:02.254 230014 DEBUG nova.virt.libvirt.driver [None req-5d5b5cfb-a9b5-427c-91c6-3b86a939f388 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 89909dc1-a7db-4cca-b837-5340532de97b] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 24 10:04:02 compute-1 nova_compute[230010]: 2025-11-24 10:04:02.258 230014 INFO nova.virt.libvirt.driver [-] [instance: 89909dc1-a7db-4cca-b837-5340532de97b] Instance spawned successfully.
Nov 24 10:04:02 compute-1 nova_compute[230010]: 2025-11-24 10:04:02.258 230014 DEBUG nova.virt.libvirt.driver [None req-5d5b5cfb-a9b5-427c-91c6-3b86a939f388 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 89909dc1-a7db-4cca-b837-5340532de97b] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 24 10:04:02 compute-1 ceph-mon[80009]: pgmap v1118: 353 pgs: 353 active+clean; 88 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Nov 24 10:04:02 compute-1 ceph-mon[80009]: from='client.? 192.168.122.10:0/271928424' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 24 10:04:02 compute-1 ceph-mon[80009]: from='client.? 192.168.122.10:0/271928424' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 24 10:04:02 compute-1 nova_compute[230010]: 2025-11-24 10:04:02.266 230014 DEBUG nova.compute.manager [None req-7781f489-90a3-45f2-98f2-bf7ccb5c1239 - - - - - -] [instance: 89909dc1-a7db-4cca-b837-5340532de97b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 24 10:04:02 compute-1 nova_compute[230010]: 2025-11-24 10:04:02.272 230014 DEBUG nova.compute.manager [None req-7781f489-90a3-45f2-98f2-bf7ccb5c1239 - - - - - -] [instance: 89909dc1-a7db-4cca-b837-5340532de97b] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 24 10:04:02 compute-1 nova_compute[230010]: 2025-11-24 10:04:02.275 230014 DEBUG nova.virt.libvirt.driver [None req-5d5b5cfb-a9b5-427c-91c6-3b86a939f388 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 89909dc1-a7db-4cca-b837-5340532de97b] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 24 10:04:02 compute-1 nova_compute[230010]: 2025-11-24 10:04:02.275 230014 DEBUG nova.virt.libvirt.driver [None req-5d5b5cfb-a9b5-427c-91c6-3b86a939f388 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 89909dc1-a7db-4cca-b837-5340532de97b] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 24 10:04:02 compute-1 nova_compute[230010]: 2025-11-24 10:04:02.275 230014 DEBUG nova.virt.libvirt.driver [None req-5d5b5cfb-a9b5-427c-91c6-3b86a939f388 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 89909dc1-a7db-4cca-b837-5340532de97b] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 24 10:04:02 compute-1 nova_compute[230010]: 2025-11-24 10:04:02.276 230014 DEBUG nova.virt.libvirt.driver [None req-5d5b5cfb-a9b5-427c-91c6-3b86a939f388 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 89909dc1-a7db-4cca-b837-5340532de97b] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 24 10:04:02 compute-1 nova_compute[230010]: 2025-11-24 10:04:02.276 230014 DEBUG nova.virt.libvirt.driver [None req-5d5b5cfb-a9b5-427c-91c6-3b86a939f388 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 89909dc1-a7db-4cca-b837-5340532de97b] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 24 10:04:02 compute-1 nova_compute[230010]: 2025-11-24 10:04:02.276 230014 DEBUG nova.virt.libvirt.driver [None req-5d5b5cfb-a9b5-427c-91c6-3b86a939f388 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 89909dc1-a7db-4cca-b837-5340532de97b] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
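The "Found default for ..." lines record nova pinning the chosen bus and device models (sata cdrom, virtio disk/video/vif, usb input) into the instance metadata so they stay stable across reboots and migrations. Operators who want these fixed up front can set the same hw_* properties on the image instead; a sketch, with a hypothetical image name:

    $ openstack image set --property hw_disk_bus=virtio --property hw_video_model=virtio cirros-test-image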
Nov 24 10:04:02 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 24 10:04:02 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 24 10:04:02 compute-1 nova_compute[230010]: 2025-11-24 10:04:02.308 230014 INFO nova.compute.manager [None req-7781f489-90a3-45f2-98f2-bf7ccb5c1239 - - - - - -] [instance: 89909dc1-a7db-4cca-b837-5340532de97b] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 24 10:04:02 compute-1 nova_compute[230010]: 2025-11-24 10:04:02.309 230014 DEBUG nova.virt.driver [None req-7781f489-90a3-45f2-98f2-bf7ccb5c1239 - - - - - -] Emitting event <LifecycleEvent: 1763978642.247672, 89909dc1-a7db-4cca-b837-5340532de97b => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 24 10:04:02 compute-1 nova_compute[230010]: 2025-11-24 10:04:02.309 230014 INFO nova.compute.manager [None req-7781f489-90a3-45f2-98f2-bf7ccb5c1239 - - - - - -] [instance: 89909dc1-a7db-4cca-b837-5340532de97b] VM Paused (Lifecycle Event)
Nov 24 10:04:02 compute-1 nova_compute[230010]: 2025-11-24 10:04:02.335 230014 DEBUG nova.compute.manager [None req-7781f489-90a3-45f2-98f2-bf7ccb5c1239 - - - - - -] [instance: 89909dc1-a7db-4cca-b837-5340532de97b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 24 10:04:02 compute-1 nova_compute[230010]: 2025-11-24 10:04:02.338 230014 DEBUG nova.virt.driver [None req-7781f489-90a3-45f2-98f2-bf7ccb5c1239 - - - - - -] Emitting event <LifecycleEvent: 1763978642.253599, 89909dc1-a7db-4cca-b837-5340532de97b => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 24 10:04:02 compute-1 nova_compute[230010]: 2025-11-24 10:04:02.339 230014 INFO nova.compute.manager [None req-7781f489-90a3-45f2-98f2-bf7ccb5c1239 - - - - - -] [instance: 89909dc1-a7db-4cca-b837-5340532de97b] VM Resumed (Lifecycle Event)
Nov 24 10:04:02 compute-1 nova_compute[230010]: 2025-11-24 10:04:02.347 230014 INFO nova.compute.manager [None req-5d5b5cfb-a9b5-427c-91c6-3b86a939f388 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 89909dc1-a7db-4cca-b837-5340532de97b] Took 9.94 seconds to spawn the instance on the hypervisor.
Nov 24 10:04:02 compute-1 nova_compute[230010]: 2025-11-24 10:04:02.347 230014 DEBUG nova.compute.manager [None req-5d5b5cfb-a9b5-427c-91c6-3b86a939f388 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 89909dc1-a7db-4cca-b837-5340532de97b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 24 10:04:02 compute-1 nova_compute[230010]: 2025-11-24 10:04:02.355 230014 DEBUG nova.compute.manager [None req-7781f489-90a3-45f2-98f2-bf7ccb5c1239 - - - - - -] [instance: 89909dc1-a7db-4cca-b837-5340532de97b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 24 10:04:02 compute-1 nova_compute[230010]: 2025-11-24 10:04:02.358 230014 DEBUG nova.compute.manager [None req-7781f489-90a3-45f2-98f2-bf7ccb5c1239 - - - - - -] [instance: 89909dc1-a7db-4cca-b837-5340532de97b] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 24 10:04:02 compute-1 nova_compute[230010]: 2025-11-24 10:04:02.383 230014 INFO nova.compute.manager [None req-7781f489-90a3-45f2-98f2-bf7ccb5c1239 - - - - - -] [instance: 89909dc1-a7db-4cca-b837-5340532de97b] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 24 10:04:02 compute-1 nova_compute[230010]: 2025-11-24 10:04:02.411 230014 INFO nova.compute.manager [None req-5d5b5cfb-a9b5-427c-91c6-3b86a939f388 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 89909dc1-a7db-4cca-b837-5340532de97b] Took 10.80 seconds to build instance.
Nov 24 10:04:02 compute-1 podman[243812]: 2025-11-24 10:04:02.416073536 +0000 UTC m=+0.048642772 container create 886a55b088e85bc6370f29fe93e76d5fbf84307c17973cfc947291698efb33b2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-22748050-40a9-4373-8c95-5da36c909edc, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 10:04:02 compute-1 nova_compute[230010]: 2025-11-24 10:04:02.426 230014 DEBUG oslo_concurrency.lockutils [None req-5d5b5cfb-a9b5-427c-91c6-3b86a939f388 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lock "89909dc1-a7db-4cca-b837-5340532de97b" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.869s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 10:04:02 compute-1 systemd[1]: Started libpod-conmon-886a55b088e85bc6370f29fe93e76d5fbf84307c17973cfc947291698efb33b2.scope.
Nov 24 10:04:02 compute-1 systemd[1]: Started libcrun container.
Nov 24 10:04:02 compute-1 podman[243812]: 2025-11-24 10:04:02.389364391 +0000 UTC m=+0.021933667 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 24 10:04:02 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/19ce70e599b070ad1e348a0dc736d83aebd11dabe2621d209a81daa70e66a1ce/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 24 10:04:02 compute-1 podman[243812]: 2025-11-24 10:04:02.50036473 +0000 UTC m=+0.132934006 container init 886a55b088e85bc6370f29fe93e76d5fbf84307c17973cfc947291698efb33b2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-22748050-40a9-4373-8c95-5da36c909edc, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 24 10:04:02 compute-1 podman[243812]: 2025-11-24 10:04:02.506308805 +0000 UTC m=+0.138878051 container start 886a55b088e85bc6370f29fe93e76d5fbf84307c17973cfc947291698efb33b2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-22748050-40a9-4373-8c95-5da36c909edc, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team)
Nov 24 10:04:02 compute-1 neutron-haproxy-ovnmeta-22748050-40a9-4373-8c95-5da36c909edc[243827]: [NOTICE]   (243831) : New worker (243833) forked
Nov 24 10:04:02 compute-1 neutron-haproxy-ovnmeta-22748050-40a9-4373-8c95-5da36c909edc[243827]: [NOTICE]   (243831) : Loading success.
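With the haproxy worker forked, the metadata path for this network is complete: guest -> 169.254.169.254 inside the ovnmeta namespace -> haproxy (adding the X-OVN-Network-ID header) -> the neutron metadata socket. A sketch of a smoke test from the hypervisor, assuming the backend socket is serving and curl is available on the host:

    $ podman ps --filter name=neutron-haproxy-ovnmeta-22748050 --format '{{.Names}} {{.Status}}'
    $ ip netns exec ovnmeta-22748050-40a9-4373-8c95-5da36c909edc curl -s http://169.254.169.254/openstack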
Nov 24 10:04:02 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:04:02 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 10:04:02 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:04:02.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
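The anonymous "HEAD / HTTP/1.0" pairs from 192.168.122.100 and .102 recur roughly every two seconds through the rest of this section; they are load-balancer-style health probes against radosgw, not tenant traffic. They can be reproduced with a plain HEAD request (the beast frontend port is not visible in these lines, so 8080 is an assumption to check against the rgw_frontends setting):

    $ curl -sI http://compute-1:8080/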
Nov 24 10:04:02 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 24 10:04:02 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 10:04:02 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Nov 24 10:04:02 compute-1 ceph-mon[80009]: log_channel(audit) log [INF] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 10:04:02 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Nov 24 10:04:02 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Nov 24 10:04:02 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Nov 24 10:04:02 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 10:04:02 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Nov 24 10:04:02 compute-1 ceph-mon[80009]: log_channel(audit) log [INF] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 10:04:02 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 24 10:04:02 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 10:04:03 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 10:04:03 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 10:04:03 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 10:04:03 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 10:04:03 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 10:04:03 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 10:04:03 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 10:04:03 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 10:04:03 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 10:04:03 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:04:03 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:04:03 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:04:03.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:04:03 compute-1 nova_compute[230010]: 2025-11-24 10:04:03.997 230014 DEBUG nova.compute.manager [req-44d328e2-9de2-4aa2-a12f-56f771c4c26d req-c64d57e2-3c0b-454e-b85d-6bf7e2392d50 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 89909dc1-a7db-4cca-b837-5340532de97b] Received event network-vif-plugged-891e7944-832b-408f-b645-6f51de733021 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 24 10:04:03 compute-1 nova_compute[230010]: 2025-11-24 10:04:03.997 230014 DEBUG oslo_concurrency.lockutils [req-44d328e2-9de2-4aa2-a12f-56f771c4c26d req-c64d57e2-3c0b-454e-b85d-6bf7e2392d50 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Acquiring lock "89909dc1-a7db-4cca-b837-5340532de97b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 10:04:03 compute-1 nova_compute[230010]: 2025-11-24 10:04:03.997 230014 DEBUG oslo_concurrency.lockutils [req-44d328e2-9de2-4aa2-a12f-56f771c4c26d req-c64d57e2-3c0b-454e-b85d-6bf7e2392d50 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Lock "89909dc1-a7db-4cca-b837-5340532de97b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 10:04:03 compute-1 nova_compute[230010]: 2025-11-24 10:04:03.998 230014 DEBUG oslo_concurrency.lockutils [req-44d328e2-9de2-4aa2-a12f-56f771c4c26d req-c64d57e2-3c0b-454e-b85d-6bf7e2392d50 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Lock "89909dc1-a7db-4cca-b837-5340532de97b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 10:04:03 compute-1 nova_compute[230010]: 2025-11-24 10:04:03.998 230014 DEBUG nova.compute.manager [req-44d328e2-9de2-4aa2-a12f-56f771c4c26d req-c64d57e2-3c0b-454e-b85d-6bf7e2392d50 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 89909dc1-a7db-4cca-b837-5340532de97b] No waiting events found dispatching network-vif-plugged-891e7944-832b-408f-b645-6f51de733021 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 24 10:04:03 compute-1 nova_compute[230010]: 2025-11-24 10:04:03.998 230014 WARNING nova.compute.manager [req-44d328e2-9de2-4aa2-a12f-56f771c4c26d req-c64d57e2-3c0b-454e-b85d-6bf7e2392d50 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 89909dc1-a7db-4cca-b837-5340532de97b] Received unexpected event network-vif-plugged-891e7944-832b-408f-b645-6f51de733021 for instance with vm_state active and task_state None.
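This WARNING is a benign race rather than a failure: the first network-vif-plugged event was already consumed during spawn at 10:04:01, so when the duplicate arrives at 10:04:03 the instance is active with no waiting events and nova simply logs and drops it. If the port itself needed checking, a direct query (assuming admin credentials) would be:

    $ openstack port show 891e7944-832b-408f-b645-6f51de733021 -c status -c binding_host_id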
Nov 24 10:04:04 compute-1 ceph-mon[80009]: pgmap v1119: 353 pgs: 353 active+clean; 88 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Nov 24 10:04:04 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:04:04 compute-1 nova_compute[230010]: 2025-11-24 10:04:04.731 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:04:04 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:04:04 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:04:04 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:04:04.797 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:04:05 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:04:05 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:04:05 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:04:05.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:04:05 compute-1 ovn_controller[132966]: 2025-11-24T10:04:05Z|00101|binding|INFO|Releasing lport c9e1a544-3313-45b9-9f1e-5b8ba7d7cc61 from this chassis (sb_readonly=0)
Nov 24 10:04:05 compute-1 NetworkManager[48870]: <info>  [1763978645.9818] manager: (patch-provnet-aec09a4d-39ae-42d2-80ba-0cd5b53fed5d-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/67)
Nov 24 10:04:05 compute-1 nova_compute[230010]: 2025-11-24 10:04:05.982 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:04:05 compute-1 NetworkManager[48870]: <info>  [1763978645.9831] manager: (patch-br-int-to-provnet-aec09a4d-39ae-42d2-80ba-0cd5b53fed5d): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/68)
Nov 24 10:04:06 compute-1 ovn_controller[132966]: 2025-11-24T10:04:06Z|00102|binding|INFO|Releasing lport c9e1a544-3313-45b9-9f1e-5b8ba7d7cc61 from this chassis (sb_readonly=0)
Nov 24 10:04:06 compute-1 nova_compute[230010]: 2025-11-24 10:04:06.031 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:04:06 compute-1 nova_compute[230010]: 2025-11-24 10:04:06.035 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:04:06 compute-1 nova_compute[230010]: 2025-11-24 10:04:06.219 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:04:06 compute-1 ceph-mon[80009]: pgmap v1120: 353 pgs: 353 active+clean; 88 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Nov 24 10:04:06 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:04:06 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:04:06 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:04:06.800 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:04:07 compute-1 nova_compute[230010]: 2025-11-24 10:04:07.015 230014 DEBUG nova.compute.manager [req-cdd32b15-607b-42c2-bbf6-2ba39a51950a req-741c86e8-cb6b-4ee2-af48-064f18e82056 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 89909dc1-a7db-4cca-b837-5340532de97b] Received event network-changed-891e7944-832b-408f-b645-6f51de733021 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 24 10:04:07 compute-1 nova_compute[230010]: 2025-11-24 10:04:07.015 230014 DEBUG nova.compute.manager [req-cdd32b15-607b-42c2-bbf6-2ba39a51950a req-741c86e8-cb6b-4ee2-af48-064f18e82056 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 89909dc1-a7db-4cca-b837-5340532de97b] Refreshing instance network info cache due to event network-changed-891e7944-832b-408f-b645-6f51de733021. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 24 10:04:07 compute-1 nova_compute[230010]: 2025-11-24 10:04:07.016 230014 DEBUG oslo_concurrency.lockutils [req-cdd32b15-607b-42c2-bbf6-2ba39a51950a req-741c86e8-cb6b-4ee2-af48-064f18e82056 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Acquiring lock "refresh_cache-89909dc1-a7db-4cca-b837-5340532de97b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 24 10:04:07 compute-1 nova_compute[230010]: 2025-11-24 10:04:07.016 230014 DEBUG oslo_concurrency.lockutils [req-cdd32b15-607b-42c2-bbf6-2ba39a51950a req-741c86e8-cb6b-4ee2-af48-064f18e82056 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Acquired lock "refresh_cache-89909dc1-a7db-4cca-b837-5340532de97b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 24 10:04:07 compute-1 nova_compute[230010]: 2025-11-24 10:04:07.016 230014 DEBUG nova.network.neutron [req-cdd32b15-607b-42c2-bbf6-2ba39a51950a req-741c86e8-cb6b-4ee2-af48-064f18e82056 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 89909dc1-a7db-4cca-b837-5340532de97b] Refreshing network info cache for port 891e7944-832b-408f-b645-6f51de733021 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 24 10:04:07 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:04:07 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:04:07 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:04:07.384 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:04:07 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 24 10:04:07 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 24 10:04:07 compute-1 sudo[243845]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 24 10:04:07 compute-1 sudo[243845]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 10:04:07 compute-1 sudo[243845]: pam_unix(sudo:session): session closed for user root
Nov 24 10:04:08 compute-1 ceph-mon[80009]: pgmap v1121: 353 pgs: 353 active+clean; 88 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 102 op/s
Nov 24 10:04:08 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 10:04:08 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 10:04:08 compute-1 nova_compute[230010]: 2025-11-24 10:04:08.723 230014 DEBUG nova.network.neutron [req-cdd32b15-607b-42c2-bbf6-2ba39a51950a req-741c86e8-cb6b-4ee2-af48-064f18e82056 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 89909dc1-a7db-4cca-b837-5340532de97b] Updated VIF entry in instance network info cache for port 891e7944-832b-408f-b645-6f51de733021. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 24 10:04:08 compute-1 nova_compute[230010]: 2025-11-24 10:04:08.724 230014 DEBUG nova.network.neutron [req-cdd32b15-607b-42c2-bbf6-2ba39a51950a req-741c86e8-cb6b-4ee2-af48-064f18e82056 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 89909dc1-a7db-4cca-b837-5340532de97b] Updating instance_info_cache with network_info: [{"id": "891e7944-832b-408f-b645-6f51de733021", "address": "fa:16:3e:a8:16:2d", "network": {"id": "22748050-40a9-4373-8c95-5da36c909edc", "bridge": "br-int", "label": "tempest-network-smoke--696376811", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.214", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94d069fc040647d5a6e54894eec915fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap891e7944-83", "ovs_interfaceid": "891e7944-832b-408f-b645-6f51de733021", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 24 10:04:08 compute-1 nova_compute[230010]: 2025-11-24 10:04:08.738 230014 DEBUG oslo_concurrency.lockutils [req-cdd32b15-607b-42c2-bbf6-2ba39a51950a req-741c86e8-cb6b-4ee2-af48-064f18e82056 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Releasing lock "refresh_cache-89909dc1-a7db-4cca-b837-5340532de97b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
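Compared with the cache written at 10:04:01.685 earlier in this section, the refreshed entry now carries "active": true and the floating IP 192.168.122.214 attached to fixed IP 10.100.0.4; that association is what the network-changed event announced. A cross-check against Neutron:

    $ openstack floating ip list --port 891e7944-832b-408f-b645-6f51de733021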
Nov 24 10:04:08 compute-1 sudo[243871]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 10:04:08 compute-1 sudo[243871]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 10:04:08 compute-1 sudo[243871]: pam_unix(sudo:session): session closed for user root
Nov 24 10:04:08 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:04:08 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:04:08 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:04:08.802 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:04:09 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:04:09 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:04:09 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:04:09.387 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:04:09 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:04:09 compute-1 nova_compute[230010]: 2025-11-24 10:04:09.763 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:04:10 compute-1 ceph-mon[80009]: pgmap v1122: 353 pgs: 353 active+clean; 88 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 75 op/s
Nov 24 10:04:10 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:04:10 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:04:10 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:04:10.805 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:04:11 compute-1 nova_compute[230010]: 2025-11-24 10:04:11.221 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:04:11 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:04:11 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:04:11 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:04:11.390 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:04:11 compute-1 ceph-mon[80009]: pgmap v1123: 353 pgs: 353 active+clean; 88 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 75 op/s
Nov 24 10:04:12 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:04:12 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:04:12 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:04:12.807 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:04:13 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:04:13 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:04:13 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:04:13.393 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:04:14 compute-1 ceph-mon[80009]: pgmap v1124: 353 pgs: 353 active+clean; 88 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 75 op/s
Nov 24 10:04:14 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:04:14 compute-1 nova_compute[230010]: 2025-11-24 10:04:14.766 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:04:14 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:04:14 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:04:14 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:04:14.809 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:04:15 compute-1 ovn_controller[132966]: 2025-11-24T10:04:15Z|00016|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:a8:16:2d 10.100.0.4
Nov 24 10:04:15 compute-1 ovn_controller[132966]: 2025-11-24T10:04:15Z|00017|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:a8:16:2d 10.100.0.4
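The DHCPOFFER/DHCPACK pair comes from ovn-controller's built-in pinctrl DHCP responder, so the guest received 10.100.0.4 with no dnsmasq process involved. The options served are held in the Northbound DHCP_Options table for the 10.100.0.0/28 subnet and can be inspected with:

    $ ovn-nbctl list DHCP_Options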
Nov 24 10:04:15 compute-1 podman[243900]: 2025-11-24 10:04:15.31607729 +0000 UTC m=+0.051906312 container health_status 16798e42088c21440960ac0ba4f86339f9edab5c078277a1dcf4fb01c220329c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=multipathd, container_name=multipathd)
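This is podman's periodic execution of the multipathd container's own health check (the configured test is the mounted /openstack/healthcheck script), here reporting healthy with a zero failing streak. The same check can be triggered on demand; it exits 0 when healthy:

    $ podman healthcheck run multipathd && echo healthy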
Nov 24 10:04:15 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:04:15 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:04:15 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:04:15.396 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:04:15 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 10:04:15 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 10:04:16 compute-1 ceph-mon[80009]: pgmap v1125: 353 pgs: 353 active+clean; 88 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Nov 24 10:04:16 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 10:04:16 compute-1 nova_compute[230010]: 2025-11-24 10:04:16.223 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:04:16 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:04:16 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:04:16 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:04:16.811 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:04:17 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:04:17 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:04:17 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:04:17.399 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:04:18 compute-1 ceph-mon[80009]: pgmap v1126: 353 pgs: 353 active+clean; 121 MiB data, 350 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 137 op/s
Nov 24 10:04:18 compute-1 nova_compute[230010]: 2025-11-24 10:04:18.765 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 10:04:18 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:04:18 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:04:18 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:04:18.814 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:04:19 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:04:19 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:04:19 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:04:19.402 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:04:19 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:04:19 compute-1 nova_compute[230010]: 2025-11-24 10:04:19.768 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:04:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 10:04:20.068 142336 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 10:04:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 10:04:20.068 142336 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 10:04:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 10:04:20.069 142336 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 10:04:20 compute-1 ceph-mon[80009]: pgmap v1127: 353 pgs: 353 active+clean; 121 MiB data, 350 MiB used, 60 GiB / 60 GiB avail; 322 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Nov 24 10:04:20 compute-1 nova_compute[230010]: 2025-11-24 10:04:20.765 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 10:04:20 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:04:20 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.003000071s ======
Nov 24 10:04:20 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:04:20.816 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.003000071s
Nov 24 10:04:21 compute-1 nova_compute[230010]: 2025-11-24 10:04:21.225 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:04:21 compute-1 podman[243923]: 2025-11-24 10:04:21.337205927 +0000 UTC m=+0.080074352 container health_status c4a7fece2a8abbd773f184380ebf0443df5298e1c3e7a580495db5c46749acd2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, config_id=ovn_controller)
Nov 24 10:04:21 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:04:21 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:04:21 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:04:21.405 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:04:21 compute-1 nova_compute[230010]: 2025-11-24 10:04:21.971 230014 INFO nova.compute.manager [None req-7a0e4815-6f13-4663-834a-22e332dc32d2 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 89909dc1-a7db-4cca-b837-5340532de97b] Get console output
Nov 24 10:04:21 compute-1 nova_compute[230010]: 2025-11-24 10:04:21.976 236028 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Nov 24 10:04:22 compute-1 ceph-mon[80009]: pgmap v1128: 353 pgs: 353 active+clean; 121 MiB data, 350 MiB used, 60 GiB / 60 GiB avail; 322 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Nov 24 10:04:22 compute-1 nova_compute[230010]: 2025-11-24 10:04:22.764 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 10:04:22 compute-1 nova_compute[230010]: 2025-11-24 10:04:22.765 230014 DEBUG nova.compute.manager [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 24 10:04:22 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:04:22 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:04:22 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:04:22.822 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:04:23 compute-1 ovn_controller[132966]: 2025-11-24T10:04:23Z|00103|binding|INFO|Releasing lport c9e1a544-3313-45b9-9f1e-5b8ba7d7cc61 from this chassis (sb_readonly=0)
Nov 24 10:04:23 compute-1 nova_compute[230010]: 2025-11-24 10:04:23.227 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:04:23 compute-1 ovn_controller[132966]: 2025-11-24T10:04:23Z|00104|binding|INFO|Releasing lport c9e1a544-3313-45b9-9f1e-5b8ba7d7cc61 from this chassis (sb_readonly=0)
Nov 24 10:04:23 compute-1 nova_compute[230010]: 2025-11-24 10:04:23.284 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:04:23 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:04:23 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:04:23 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:04:23.408 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:04:24 compute-1 ceph-mon[80009]: pgmap v1129: 353 pgs: 353 active+clean; 121 MiB data, 350 MiB used, 60 GiB / 60 GiB avail; 322 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Nov 24 10:04:24 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:04:24 compute-1 nova_compute[230010]: 2025-11-24 10:04:24.761 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 10:04:24 compute-1 nova_compute[230010]: 2025-11-24 10:04:24.764 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 10:04:24 compute-1 nova_compute[230010]: 2025-11-24 10:04:24.770 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:04:24 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:04:24 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:04:24 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:04:24.825 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:04:25 compute-1 ceph-mon[80009]: from='client.? 192.168.122.100:0/3512625883' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 10:04:25 compute-1 ceph-mon[80009]: from='client.? 192.168.122.100:0/4214854330' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 10:04:25 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:04:25 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:04:25 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:04:25.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:04:25 compute-1 nova_compute[230010]: 2025-11-24 10:04:25.457 230014 INFO nova.compute.manager [None req-e959c7ef-2599-45c9-a479-cadbbe82c683 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 89909dc1-a7db-4cca-b837-5340532de97b] Get console output
Nov 24 10:04:25 compute-1 nova_compute[230010]: 2025-11-24 10:04:25.461 236028 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Nov 24 10:04:25 compute-1 nova_compute[230010]: 2025-11-24 10:04:25.764 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 10:04:25 compute-1 nova_compute[230010]: 2025-11-24 10:04:25.793 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 10:04:25 compute-1 nova_compute[230010]: 2025-11-24 10:04:25.793 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 10:04:25 compute-1 nova_compute[230010]: 2025-11-24 10:04:25.794 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 10:04:25 compute-1 nova_compute[230010]: 2025-11-24 10:04:25.794 230014 DEBUG nova.compute.resource_tracker [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 24 10:04:25 compute-1 nova_compute[230010]: 2025-11-24 10:04:25.794 230014 DEBUG oslo_concurrency.processutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 10:04:26 compute-1 ceph-mon[80009]: pgmap v1130: 353 pgs: 353 active+clean; 121 MiB data, 350 MiB used, 60 GiB / 60 GiB avail; 322 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Nov 24 10:04:26 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 24 10:04:26 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3436723982' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 10:04:26 compute-1 nova_compute[230010]: 2025-11-24 10:04:26.200 230014 DEBUG oslo_concurrency.processutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.406s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 10:04:26 compute-1 systemd[1]: Starting dnf makecache...
Nov 24 10:04:26 compute-1 nova_compute[230010]: 2025-11-24 10:04:26.229 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:04:26 compute-1 nova_compute[230010]: 2025-11-24 10:04:26.268 230014 DEBUG nova.virt.libvirt.driver [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] skipping disk for instance-0000000d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 24 10:04:26 compute-1 nova_compute[230010]: 2025-11-24 10:04:26.269 230014 DEBUG nova.virt.libvirt.driver [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] skipping disk for instance-0000000d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 24 10:04:26 compute-1 nova_compute[230010]: 2025-11-24 10:04:26.401 230014 WARNING nova.virt.libvirt.driver [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 24 10:04:26 compute-1 nova_compute[230010]: 2025-11-24 10:04:26.402 230014 DEBUG nova.compute.resource_tracker [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4732MB free_disk=59.942752838134766GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 24 10:04:26 compute-1 nova_compute[230010]: 2025-11-24 10:04:26.403 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 10:04:26 compute-1 nova_compute[230010]: 2025-11-24 10:04:26.403 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 10:04:26 compute-1 dnf[243977]: Metadata cache refreshed recently.
Nov 24 10:04:26 compute-1 nova_compute[230010]: 2025-11-24 10:04:26.480 230014 DEBUG nova.compute.resource_tracker [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Instance 89909dc1-a7db-4cca-b837-5340532de97b actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 24 10:04:26 compute-1 nova_compute[230010]: 2025-11-24 10:04:26.481 230014 DEBUG nova.compute.resource_tracker [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 24 10:04:26 compute-1 nova_compute[230010]: 2025-11-24 10:04:26.481 230014 DEBUG nova.compute.resource_tracker [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 24 10:04:26 compute-1 nova_compute[230010]: 2025-11-24 10:04:26.504 230014 DEBUG nova.scheduler.client.report [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Refreshing inventories for resource provider 1b7b0f22-dba8-42a8-9de3-763c9152946e _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Nov 24 10:04:26 compute-1 systemd[1]: dnf-makecache.service: Deactivated successfully.
Nov 24 10:04:26 compute-1 systemd[1]: Finished dnf makecache.
Nov 24 10:04:26 compute-1 nova_compute[230010]: 2025-11-24 10:04:26.520 230014 DEBUG nova.scheduler.client.report [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Updating ProviderTree inventory for provider 1b7b0f22-dba8-42a8-9de3-763c9152946e from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Nov 24 10:04:26 compute-1 nova_compute[230010]: 2025-11-24 10:04:26.520 230014 DEBUG nova.compute.provider_tree [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Updating inventory in ProviderTree for provider 1b7b0f22-dba8-42a8-9de3-763c9152946e with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Nov 24 10:04:26 compute-1 nova_compute[230010]: 2025-11-24 10:04:26.537 230014 DEBUG nova.scheduler.client.report [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Refreshing aggregate associations for resource provider 1b7b0f22-dba8-42a8-9de3-763c9152946e, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Nov 24 10:04:26 compute-1 nova_compute[230010]: 2025-11-24 10:04:26.558 230014 DEBUG nova.scheduler.client.report [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Refreshing trait associations for resource provider 1b7b0f22-dba8-42a8-9de3-763c9152946e, traits: COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_F16C,HW_CPU_X86_AVX,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_SHA,COMPUTE_STORAGE_BUS_SATA,COMPUTE_RESCUE_BFV,HW_CPU_X86_ABM,HW_CPU_X86_BMI,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_AVX2,COMPUTE_STORAGE_BUS_USB,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_SSE41,COMPUTE_NODE,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_SSE4A,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_VOLUME_EXTEND,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_MMX,HW_CPU_X86_SSE,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_CLMUL,HW_CPU_X86_SSE2,HW_CPU_X86_AMD_SVM,HW_CPU_X86_SSE42,COMPUTE_ACCELERATORS,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_STORAGE_BUS_IDE,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_SVM,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_FMA3,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_BMI2,HW_CPU_X86_AESNI,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_TRUSTED_CERTS,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_STORAGE_BUS_FDC _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Nov 24 10:04:26 compute-1 nova_compute[230010]: 2025-11-24 10:04:26.591 230014 DEBUG oslo_concurrency.processutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 10:04:26 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:04:26 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:04:26 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:04:26.827 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:04:27 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 24 10:04:27 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3125762373' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 10:04:27 compute-1 nova_compute[230010]: 2025-11-24 10:04:27.041 230014 DEBUG oslo_concurrency.processutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 10:04:27 compute-1 nova_compute[230010]: 2025-11-24 10:04:27.048 230014 DEBUG nova.compute.provider_tree [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Inventory has not changed in ProviderTree for provider: 1b7b0f22-dba8-42a8-9de3-763c9152946e update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 24 10:04:27 compute-1 nova_compute[230010]: 2025-11-24 10:04:27.061 230014 DEBUG nova.scheduler.client.report [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Inventory has not changed for provider 1b7b0f22-dba8-42a8-9de3-763c9152946e based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 24 10:04:27 compute-1 nova_compute[230010]: 2025-11-24 10:04:27.087 230014 DEBUG nova.compute.resource_tracker [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 24 10:04:27 compute-1 nova_compute[230010]: 2025-11-24 10:04:27.088 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.685s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 10:04:27 compute-1 ceph-mon[80009]: from='client.? 192.168.122.101:0/3436723982' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 10:04:27 compute-1 ceph-mon[80009]: from='client.? 192.168.122.101:0/3125762373' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 10:04:27 compute-1 ceph-mon[80009]: from='client.? 192.168.122.102:0/112966388' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 10:04:27 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:04:27 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:04:27 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:04:27.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:04:28 compute-1 ceph-mon[80009]: pgmap v1131: 353 pgs: 353 active+clean; 121 MiB data, 350 MiB used, 60 GiB / 60 GiB avail; 322 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Nov 24 10:04:28 compute-1 ceph-mon[80009]: from='client.? 192.168.122.102:0/3276436408' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 10:04:28 compute-1 podman[244001]: 2025-11-24 10:04:28.316543196 +0000 UTC m=+0.051676076 container health_status 6794d9d2f16f865643977633ea2bfec0506af6c4f15262c278b71a4c0754ea0b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 10:04:28 compute-1 ovn_controller[132966]: 2025-11-24T10:04:28Z|00105|binding|INFO|Releasing lport c9e1a544-3313-45b9-9f1e-5b8ba7d7cc61 from this chassis (sb_readonly=0)
Nov 24 10:04:28 compute-1 NetworkManager[48870]: <info>  [1763978668.6329] manager: (patch-br-int-to-provnet-aec09a4d-39ae-42d2-80ba-0cd5b53fed5d): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/69)
Nov 24 10:04:28 compute-1 NetworkManager[48870]: <info>  [1763978668.6335] manager: (patch-provnet-aec09a4d-39ae-42d2-80ba-0cd5b53fed5d-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/70)
Nov 24 10:04:28 compute-1 nova_compute[230010]: 2025-11-24 10:04:28.632 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:04:28 compute-1 ovn_controller[132966]: 2025-11-24T10:04:28Z|00106|binding|INFO|Releasing lport c9e1a544-3313-45b9-9f1e-5b8ba7d7cc61 from this chassis (sb_readonly=0)
Nov 24 10:04:28 compute-1 nova_compute[230010]: 2025-11-24 10:04:28.637 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:04:28 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:04:28 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:04:28 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:04:28.830 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:04:28 compute-1 sudo[244021]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 10:04:28 compute-1 sudo[244021]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 10:04:28 compute-1 sudo[244021]: pam_unix(sudo:session): session closed for user root
Nov 24 10:04:28 compute-1 ovn_metadata_agent[142331]: 2025-11-24 10:04:28.897 142336 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=13, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '86:13:51', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '4e:f0:a8:6f:5e:1b'}, ipsec=False) old=SB_Global(nb_cfg=12) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 24 10:04:28 compute-1 nova_compute[230010]: 2025-11-24 10:04:28.898 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:04:28 compute-1 ovn_metadata_agent[142331]: 2025-11-24 10:04:28.898 142336 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 24 10:04:28 compute-1 nova_compute[230010]: 2025-11-24 10:04:28.963 230014 INFO nova.compute.manager [None req-3ecc1688-f715-4769-810a-8eb0cb0ced3f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 89909dc1-a7db-4cca-b837-5340532de97b] Get console output
Nov 24 10:04:28 compute-1 nova_compute[230010]: 2025-11-24 10:04:28.968 236028 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Nov 24 10:04:29 compute-1 nova_compute[230010]: 2025-11-24 10:04:29.090 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 10:04:29 compute-1 nova_compute[230010]: 2025-11-24 10:04:29.091 230014 DEBUG nova.compute.manager [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 24 10:04:29 compute-1 nova_compute[230010]: 2025-11-24 10:04:29.091 230014 DEBUG nova.compute.manager [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 24 10:04:29 compute-1 nova_compute[230010]: 2025-11-24 10:04:29.333 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Acquiring lock "refresh_cache-89909dc1-a7db-4cca-b837-5340532de97b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 24 10:04:29 compute-1 nova_compute[230010]: 2025-11-24 10:04:29.333 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Acquired lock "refresh_cache-89909dc1-a7db-4cca-b837-5340532de97b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 24 10:04:29 compute-1 nova_compute[230010]: 2025-11-24 10:04:29.333 230014 DEBUG nova.network.neutron [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] [instance: 89909dc1-a7db-4cca-b837-5340532de97b] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 24 10:04:29 compute-1 nova_compute[230010]: 2025-11-24 10:04:29.334 230014 DEBUG nova.objects.instance [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 89909dc1-a7db-4cca-b837-5340532de97b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 24 10:04:29 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:04:29 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:04:29 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:04:29.418 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:04:29 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:04:29 compute-1 nova_compute[230010]: 2025-11-24 10:04:29.796 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:04:30 compute-1 ceph-mon[80009]: pgmap v1132: 353 pgs: 353 active+clean; 121 MiB data, 350 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 15 KiB/s wr, 1 op/s
Nov 24 10:04:30 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 10:04:30 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 10:04:30 compute-1 nova_compute[230010]: 2025-11-24 10:04:30.798 230014 DEBUG nova.compute.manager [req-620a44d4-6dd9-4cdd-9131-afcba1bf7835 req-d5b13bbf-bb33-4f29-8cb8-18755ebf8350 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 89909dc1-a7db-4cca-b837-5340532de97b] Received event network-changed-891e7944-832b-408f-b645-6f51de733021 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 24 10:04:30 compute-1 nova_compute[230010]: 2025-11-24 10:04:30.798 230014 DEBUG nova.compute.manager [req-620a44d4-6dd9-4cdd-9131-afcba1bf7835 req-d5b13bbf-bb33-4f29-8cb8-18755ebf8350 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 89909dc1-a7db-4cca-b837-5340532de97b] Refreshing instance network info cache due to event network-changed-891e7944-832b-408f-b645-6f51de733021. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 24 10:04:30 compute-1 nova_compute[230010]: 2025-11-24 10:04:30.799 230014 DEBUG oslo_concurrency.lockutils [req-620a44d4-6dd9-4cdd-9131-afcba1bf7835 req-d5b13bbf-bb33-4f29-8cb8-18755ebf8350 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Acquiring lock "refresh_cache-89909dc1-a7db-4cca-b837-5340532de97b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 24 10:04:30 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:04:30 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:04:30 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:04:30.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:04:30 compute-1 nova_compute[230010]: 2025-11-24 10:04:30.868 230014 DEBUG oslo_concurrency.lockutils [None req-789a9ffd-5d3c-425a-89c9-4ccb0f324ce9 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Acquiring lock "89909dc1-a7db-4cca-b837-5340532de97b" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 10:04:30 compute-1 nova_compute[230010]: 2025-11-24 10:04:30.869 230014 DEBUG oslo_concurrency.lockutils [None req-789a9ffd-5d3c-425a-89c9-4ccb0f324ce9 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lock "89909dc1-a7db-4cca-b837-5340532de97b" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 10:04:30 compute-1 nova_compute[230010]: 2025-11-24 10:04:30.869 230014 DEBUG oslo_concurrency.lockutils [None req-789a9ffd-5d3c-425a-89c9-4ccb0f324ce9 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Acquiring lock "89909dc1-a7db-4cca-b837-5340532de97b-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 10:04:30 compute-1 nova_compute[230010]: 2025-11-24 10:04:30.869 230014 DEBUG oslo_concurrency.lockutils [None req-789a9ffd-5d3c-425a-89c9-4ccb0f324ce9 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lock "89909dc1-a7db-4cca-b837-5340532de97b-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 10:04:30 compute-1 nova_compute[230010]: 2025-11-24 10:04:30.869 230014 DEBUG oslo_concurrency.lockutils [None req-789a9ffd-5d3c-425a-89c9-4ccb0f324ce9 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lock "89909dc1-a7db-4cca-b837-5340532de97b-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 10:04:30 compute-1 nova_compute[230010]: 2025-11-24 10:04:30.870 230014 INFO nova.compute.manager [None req-789a9ffd-5d3c-425a-89c9-4ccb0f324ce9 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 89909dc1-a7db-4cca-b837-5340532de97b] Terminating instance
Nov 24 10:04:30 compute-1 nova_compute[230010]: 2025-11-24 10:04:30.871 230014 DEBUG nova.compute.manager [None req-789a9ffd-5d3c-425a-89c9-4ccb0f324ce9 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 89909dc1-a7db-4cca-b837-5340532de97b] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 24 10:04:30 compute-1 kernel: tap891e7944-83 (unregistering): left promiscuous mode
Nov 24 10:04:30 compute-1 NetworkManager[48870]: <info>  [1763978670.9219] device (tap891e7944-83): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 24 10:04:30 compute-1 ovn_controller[132966]: 2025-11-24T10:04:30Z|00107|binding|INFO|Releasing lport 891e7944-832b-408f-b645-6f51de733021 from this chassis (sb_readonly=0)
Nov 24 10:04:30 compute-1 ovn_controller[132966]: 2025-11-24T10:04:30Z|00108|binding|INFO|Setting lport 891e7944-832b-408f-b645-6f51de733021 down in Southbound
Nov 24 10:04:30 compute-1 ovn_controller[132966]: 2025-11-24T10:04:30Z|00109|binding|INFO|Removing iface tap891e7944-83 ovn-installed in OVS
Nov 24 10:04:30 compute-1 nova_compute[230010]: 2025-11-24 10:04:30.967 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:04:30 compute-1 ovn_metadata_agent[142331]: 2025-11-24 10:04:30.978 142336 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a8:16:2d 10.100.0.4'], port_security=['fa:16:3e:a8:16:2d 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '89909dc1-a7db-4cca-b837-5340532de97b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-22748050-40a9-4373-8c95-5da36c909edc', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '94d069fc040647d5a6e54894eec915fe', 'neutron:revision_number': '4', 'neutron:security_group_ids': '6dece4c3-fa7a-42ae-8b29-e0f3dfabd71c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=72482cca-2f03-4eb7-ab95-968e79999420, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5c78678ac0>], logical_port=891e7944-832b-408f-b645-6f51de733021) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5c78678ac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 24 10:04:30 compute-1 ovn_metadata_agent[142331]: 2025-11-24 10:04:30.979 142336 INFO neutron.agent.ovn.metadata.agent [-] Port 891e7944-832b-408f-b645-6f51de733021 in datapath 22748050-40a9-4373-8c95-5da36c909edc unbound from our chassis
Nov 24 10:04:30 compute-1 ovn_metadata_agent[142331]: 2025-11-24 10:04:30.980 142336 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 22748050-40a9-4373-8c95-5da36c909edc, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 24 10:04:30 compute-1 ovn_metadata_agent[142331]: 2025-11-24 10:04:30.982 234803 DEBUG oslo.privsep.daemon [-] privsep: reply[9da7dc2f-a60d-4de0-a9af-c316969ef6bf]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 10:04:30 compute-1 ovn_metadata_agent[142331]: 2025-11-24 10:04:30.983 142336 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-22748050-40a9-4373-8c95-5da36c909edc namespace which is not needed anymore
Nov 24 10:04:30 compute-1 nova_compute[230010]: 2025-11-24 10:04:30.995 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:04:31 compute-1 systemd[1]: machine-qemu\x2d6\x2dinstance\x2d0000000d.scope: Deactivated successfully.
Nov 24 10:04:31 compute-1 systemd[1]: machine-qemu\x2d6\x2dinstance\x2d0000000d.scope: Consumed 13.772s CPU time.
Nov 24 10:04:31 compute-1 systemd-machined[193537]: Machine qemu-6-instance-0000000d terminated.
Nov 24 10:04:31 compute-1 nova_compute[230010]: 2025-11-24 10:04:31.103 230014 INFO nova.virt.libvirt.driver [-] [instance: 89909dc1-a7db-4cca-b837-5340532de97b] Instance destroyed successfully.
Nov 24 10:04:31 compute-1 nova_compute[230010]: 2025-11-24 10:04:31.103 230014 DEBUG nova.objects.instance [None req-789a9ffd-5d3c-425a-89c9-4ccb0f324ce9 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lazy-loading 'resources' on Instance uuid 89909dc1-a7db-4cca-b837-5340532de97b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 24 10:04:31 compute-1 neutron-haproxy-ovnmeta-22748050-40a9-4373-8c95-5da36c909edc[243827]: [NOTICE]   (243831) : haproxy version is 2.8.14-c23fe91
Nov 24 10:04:31 compute-1 neutron-haproxy-ovnmeta-22748050-40a9-4373-8c95-5da36c909edc[243827]: [NOTICE]   (243831) : path to executable is /usr/sbin/haproxy
Nov 24 10:04:31 compute-1 neutron-haproxy-ovnmeta-22748050-40a9-4373-8c95-5da36c909edc[243827]: [WARNING]  (243831) : Exiting Master process...
Nov 24 10:04:31 compute-1 neutron-haproxy-ovnmeta-22748050-40a9-4373-8c95-5da36c909edc[243827]: [WARNING]  (243831) : Exiting Master process...
Nov 24 10:04:31 compute-1 nova_compute[230010]: 2025-11-24 10:04:31.116 230014 DEBUG nova.virt.libvirt.vif [None req-789a9ffd-5d3c-425a-89c9-4ccb0f324ce9 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-24T10:03:50Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1741107609',display_name='tempest-TestNetworkBasicOps-server-1741107609',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1741107609',id=13,image_ref='6ef14bdf-4f04-4400-8040-4409d9d5271e',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGYO1n2WM+59O3PRTf5fCo1d78/BH3Mc8BBXRdPASueO+JvuIAgEpEuVwsO0rsx8rIXsxHGWMhGFwwjbkrft3uNRj4gBBGDnbQiVDk9hyHkutBhfgKKfMw5qeDHykomezA==',key_name='tempest-TestNetworkBasicOps-1685206173',keypairs=<?>,launch_index=0,launched_at=2025-11-24T10:04:02Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='94d069fc040647d5a6e54894eec915fe',ramdisk_id='',reservation_id='r-pxhddr0b',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6ef14bdf-4f04-4400-8040-4409d9d5271e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1844071378',owner_user_name='tempest-TestNetworkBasicOps-1844071378-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-24T10:04:02Z,user_data=None,user_id='43f79ff3105e4372a3c095e8057d4f1f',uuid=89909dc1-a7db-4cca-b837-5340532de97b,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "891e7944-832b-408f-b645-6f51de733021", "address": "fa:16:3e:a8:16:2d", "network": {"id": "22748050-40a9-4373-8c95-5da36c909edc", "bridge": "br-int", "label": "tempest-network-smoke--696376811", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.214", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94d069fc040647d5a6e54894eec915fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap891e7944-83", "ovs_interfaceid": "891e7944-832b-408f-b645-6f51de733021", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 24 10:04:31 compute-1 neutron-haproxy-ovnmeta-22748050-40a9-4373-8c95-5da36c909edc[243827]: [ALERT]    (243831) : Current worker (243833) exited with code 143 (Terminated)
Nov 24 10:04:31 compute-1 neutron-haproxy-ovnmeta-22748050-40a9-4373-8c95-5da36c909edc[243827]: [WARNING]  (243831) : All workers exited. Exiting... (0)
Nov 24 10:04:31 compute-1 nova_compute[230010]: 2025-11-24 10:04:31.117 230014 DEBUG nova.network.os_vif_util [None req-789a9ffd-5d3c-425a-89c9-4ccb0f324ce9 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Converting VIF {"id": "891e7944-832b-408f-b645-6f51de733021", "address": "fa:16:3e:a8:16:2d", "network": {"id": "22748050-40a9-4373-8c95-5da36c909edc", "bridge": "br-int", "label": "tempest-network-smoke--696376811", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.214", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94d069fc040647d5a6e54894eec915fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap891e7944-83", "ovs_interfaceid": "891e7944-832b-408f-b645-6f51de733021", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 24 10:04:31 compute-1 nova_compute[230010]: 2025-11-24 10:04:31.118 230014 DEBUG nova.network.os_vif_util [None req-789a9ffd-5d3c-425a-89c9-4ccb0f324ce9 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:a8:16:2d,bridge_name='br-int',has_traffic_filtering=True,id=891e7944-832b-408f-b645-6f51de733021,network=Network(22748050-40a9-4373-8c95-5da36c909edc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap891e7944-83') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 24 10:04:31 compute-1 nova_compute[230010]: 2025-11-24 10:04:31.119 230014 DEBUG os_vif [None req-789a9ffd-5d3c-425a-89c9-4ccb0f324ce9 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:a8:16:2d,bridge_name='br-int',has_traffic_filtering=True,id=891e7944-832b-408f-b645-6f51de733021,network=Network(22748050-40a9-4373-8c95-5da36c909edc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap891e7944-83') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 24 10:04:31 compute-1 systemd[1]: libpod-886a55b088e85bc6370f29fe93e76d5fbf84307c17973cfc947291698efb33b2.scope: Deactivated successfully.
Nov 24 10:04:31 compute-1 nova_compute[230010]: 2025-11-24 10:04:31.121 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:04:31 compute-1 nova_compute[230010]: 2025-11-24 10:04:31.121 230014 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap891e7944-83, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
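[annotation] DelPortCommand above is ovsdbapp's OVSDB transaction removing tap891e7944-83 from br-int. A rough equivalent through ovsdbapp's public API; the socket path and timeout are assumptions for illustration:

    # Sketch: delete an OVS port as the DelPortCommand txn above does.
    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    OVSDB = 'unix:/run/openvswitch/db.sock'   # assumed local ovsdb socket
    idl = connection.OvsdbIdl.from_server(OVSDB, 'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

    # if_exists=True makes this a no-op if the port is already gone.
    api.del_port('tap891e7944-83', bridge='br-int',
                 if_exists=True).execute(check_error=True)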
Nov 24 10:04:31 compute-1 nova_compute[230010]: 2025-11-24 10:04:31.123 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:04:31 compute-1 podman[244070]: 2025-11-24 10:04:31.12675427 +0000 UTC m=+0.046806687 container died 886a55b088e85bc6370f29fe93e76d5fbf84307c17973cfc947291698efb33b2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-22748050-40a9-4373-8c95-5da36c909edc, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 24 10:04:31 compute-1 nova_compute[230010]: 2025-11-24 10:04:31.127 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:04:31 compute-1 nova_compute[230010]: 2025-11-24 10:04:31.129 230014 INFO os_vif [None req-789a9ffd-5d3c-425a-89c9-4ccb0f324ce9 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:a8:16:2d,bridge_name='br-int',has_traffic_filtering=True,id=891e7944-832b-408f-b645-6f51de733021,network=Network(22748050-40a9-4373-8c95-5da36c909edc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap891e7944-83')
Nov 24 10:04:31 compute-1 nova_compute[230010]: 2025-11-24 10:04:31.148 230014 DEBUG nova.compute.manager [req-9bd28c05-1dcf-4cc5-8150-f9fca2fabe94 req-9b3aba1e-5237-45ae-b54e-120cf64232cd 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 89909dc1-a7db-4cca-b837-5340532de97b] Received event network-vif-unplugged-891e7944-832b-408f-b645-6f51de733021 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 24 10:04:31 compute-1 nova_compute[230010]: 2025-11-24 10:04:31.148 230014 DEBUG oslo_concurrency.lockutils [req-9bd28c05-1dcf-4cc5-8150-f9fca2fabe94 req-9b3aba1e-5237-45ae-b54e-120cf64232cd 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Acquiring lock "89909dc1-a7db-4cca-b837-5340532de97b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 10:04:31 compute-1 nova_compute[230010]: 2025-11-24 10:04:31.149 230014 DEBUG oslo_concurrency.lockutils [req-9bd28c05-1dcf-4cc5-8150-f9fca2fabe94 req-9b3aba1e-5237-45ae-b54e-120cf64232cd 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Lock "89909dc1-a7db-4cca-b837-5340532de97b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 10:04:31 compute-1 nova_compute[230010]: 2025-11-24 10:04:31.149 230014 DEBUG oslo_concurrency.lockutils [req-9bd28c05-1dcf-4cc5-8150-f9fca2fabe94 req-9b3aba1e-5237-45ae-b54e-120cf64232cd 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Lock "89909dc1-a7db-4cca-b837-5340532de97b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
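[annotation] The acquire/release pair above is oslo.concurrency's per-instance event lock. The same primitive reduced to its essentials; the lock name is copied from the log and the function body is a placeholder:

    # Sketch of the lockutils pattern behind the three lines above.
    from oslo_concurrency import lockutils

    @lockutils.synchronized('89909dc1-a7db-4cca-b837-5340532de97b-events')
    def _pop_event():
        # nova pops any waiting event for this instance here (placeholder)
        return None

    _pop_event()  # with debug logging on, emits "acquired ... / released ..."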
Nov 24 10:04:31 compute-1 nova_compute[230010]: 2025-11-24 10:04:31.149 230014 DEBUG nova.compute.manager [req-9bd28c05-1dcf-4cc5-8150-f9fca2fabe94 req-9b3aba1e-5237-45ae-b54e-120cf64232cd 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 89909dc1-a7db-4cca-b837-5340532de97b] No waiting events found dispatching network-vif-unplugged-891e7944-832b-408f-b645-6f51de733021 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 24 10:04:31 compute-1 nova_compute[230010]: 2025-11-24 10:04:31.150 230014 DEBUG nova.compute.manager [req-9bd28c05-1dcf-4cc5-8150-f9fca2fabe94 req-9b3aba1e-5237-45ae-b54e-120cf64232cd 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 89909dc1-a7db-4cca-b837-5340532de97b] Received event network-vif-unplugged-891e7944-832b-408f-b645-6f51de733021 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
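[annotation] The network-vif-unplugged event above reaches nova through its os-server-external-events REST API, posted by Neutron's nova notifier. A hedged reproduction with plain requests; the endpoint and token are placeholders, while the event body follows the documented API format:

    # Sketch: the kind of request that produces the "Received event
    # network-vif-unplugged-..." lines above.
    import requests

    NOVA = 'http://nova-api.example.com:8774/v2.1'   # placeholder endpoint
    TOKEN = 'gAAAA...'                               # placeholder token

    body = {'events': [{
        'name': 'network-vif-unplugged',
        'server_uuid': '89909dc1-a7db-4cca-b837-5340532de97b',
        'tag': '891e7944-832b-408f-b645-6f51de733021',
    }]}
    r = requests.post(f'{NOVA}/os-server-external-events',
                      json=body, headers={'X-Auth-Token': TOKEN})
    r.raise_for_status()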
Nov 24 10:04:31 compute-1 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-886a55b088e85bc6370f29fe93e76d5fbf84307c17973cfc947291698efb33b2-userdata-shm.mount: Deactivated successfully.
Nov 24 10:04:31 compute-1 systemd[1]: var-lib-containers-storage-overlay-19ce70e599b070ad1e348a0dc736d83aebd11dabe2621d209a81daa70e66a1ce-merged.mount: Deactivated successfully.
Nov 24 10:04:31 compute-1 podman[244070]: 2025-11-24 10:04:31.175190806 +0000 UTC m=+0.095243203 container cleanup 886a55b088e85bc6370f29fe93e76d5fbf84307c17973cfc947291698efb33b2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-22748050-40a9-4373-8c95-5da36c909edc, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Nov 24 10:04:31 compute-1 systemd[1]: libpod-conmon-886a55b088e85bc6370f29fe93e76d5fbf84307c17973cfc947291698efb33b2.scope: Deactivated successfully.
Nov 24 10:04:31 compute-1 nova_compute[230010]: 2025-11-24 10:04:31.229 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:04:31 compute-1 podman[244132]: 2025-11-24 10:04:31.243769705 +0000 UTC m=+0.043103656 container remove 886a55b088e85bc6370f29fe93e76d5fbf84307c17973cfc947291698efb33b2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-22748050-40a9-4373-8c95-5da36c909edc, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Nov 24 10:04:31 compute-1 ovn_metadata_agent[142331]: 2025-11-24 10:04:31.249 234803 DEBUG oslo.privsep.daemon [-] privsep: reply[e74213ae-93d5-4dde-b45e-0116d185efe2]: (4, ('Mon Nov 24 10:04:31 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-22748050-40a9-4373-8c95-5da36c909edc (886a55b088e85bc6370f29fe93e76d5fbf84307c17973cfc947291698efb33b2)\n886a55b088e85bc6370f29fe93e76d5fbf84307c17973cfc947291698efb33b2\nMon Nov 24 10:04:31 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-22748050-40a9-4373-8c95-5da36c909edc (886a55b088e85bc6370f29fe93e76d5fbf84307c17973cfc947291698efb33b2)\n886a55b088e85bc6370f29fe93e76d5fbf84307c17973cfc947291698efb33b2\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 10:04:31 compute-1 ovn_metadata_agent[142331]: 2025-11-24 10:04:31.251 234803 DEBUG oslo.privsep.daemon [-] privsep: reply[c94712ac-c6a5-4fbb-8b79-42f5c5e7b6f5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 10:04:31 compute-1 ovn_metadata_agent[142331]: 2025-11-24 10:04:31.252 142336 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap22748050-40, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 24 10:04:31 compute-1 kernel: tap22748050-40: left promiscuous mode
Nov 24 10:04:31 compute-1 nova_compute[230010]: 2025-11-24 10:04:31.255 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:04:31 compute-1 nova_compute[230010]: 2025-11-24 10:04:31.275 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:04:31 compute-1 ovn_metadata_agent[142331]: 2025-11-24 10:04:31.275 234803 DEBUG oslo.privsep.daemon [-] privsep: reply[c5df8033-b70a-4f39-a3a2-b7e3deec0db9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 10:04:31 compute-1 ovn_metadata_agent[142331]: 2025-11-24 10:04:31.294 234803 DEBUG oslo.privsep.daemon [-] privsep: reply[13c8dfee-6e9c-456e-b6d4-dae604f6fb4a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 10:04:31 compute-1 ovn_metadata_agent[142331]: 2025-11-24 10:04:31.295 234803 DEBUG oslo.privsep.daemon [-] privsep: reply[d56784b5-a2f0-4d82-ba2c-40ce6e4ec5b0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 10:04:31 compute-1 ovn_metadata_agent[142331]: 2025-11-24 10:04:31.313 234803 DEBUG oslo.privsep.daemon [-] privsep: reply[17d43f6e-d6b5-46d2-b5d2-870e21a49dc1]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 458530, 'reachable_time': 36289, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 244147, 'error': None, 'target': 'ovnmeta-22748050-40a9-4373-8c95-5da36c909edc', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 10:04:31 compute-1 ovn_metadata_agent[142331]: 2025-11-24 10:04:31.317 142476 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-22748050-40a9-4373-8c95-5da36c909edc deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 24 10:04:31 compute-1 ovn_metadata_agent[142331]: 2025-11-24 10:04:31.317 142476 DEBUG oslo.privsep.daemon [-] privsep: reply[4f2d138a-1168-4ce7-ad3f-5d5681f1f30c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
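[annotation] The remove_netns call above runs inside neutron's oslo.privsep daemon; the reply[...] lines are the daemon's answers to those privileged calls. A compressed sketch of the pattern, assuming CAP_SYS_ADMIN/CAP_NET_ADMIN suffice for namespace removal; the context name is hypothetical:

    # Sketch: a privsep-wrapped namespace delete like remove_netns above.
    from oslo_privsep import capabilities as caps
    from oslo_privsep import priv_context
    from pyroute2 import netns

    default = priv_context.PrivContext(
        __name__, cfg_section='privsep', pypath=__name__ + '.default',
        capabilities=[caps.CAP_SYS_ADMIN, caps.CAP_NET_ADMIN])

    @default.entrypoint
    def remove_netns(name):
        # executes in the privileged daemon; the caller's process only
        # sees the serialized reply[...] message logged above
        netns.remove(name)

    # remove_netns('ovnmeta-22748050-40a9-4373-8c95-5da36c909edc')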
Nov 24 10:04:31 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 10:04:31 compute-1 systemd[1]: run-netns-ovnmeta\x2d22748050\x2d40a9\x2d4373\x2d8c95\x2d5da36c909edc.mount: Deactivated successfully.
Nov 24 10:04:31 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:04:31 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:04:31 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:04:31.421 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
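[annotation] The anonymous "HEAD / HTTP/1.0" requests that recur every couple of seconds for the rest of this window are external health probes against radosgw's beast frontend. They can be reproduced with the standard library; the target host and port here are assumptions:

    # Sketch: the probe behind the recurring beast access-log lines.
    import http.client

    conn = http.client.HTTPConnection('192.168.122.101', 8080, timeout=2)
    conn.request('HEAD', '/')        # anonymous, no auth headers
    resp = conn.getresponse()
    print(resp.status)               # radosgw answers 200 with an empty body
    conn.close()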
Nov 24 10:04:31 compute-1 nova_compute[230010]: 2025-11-24 10:04:31.562 230014 INFO nova.virt.libvirt.driver [None req-789a9ffd-5d3c-425a-89c9-4ccb0f324ce9 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 89909dc1-a7db-4cca-b837-5340532de97b] Deleting instance files /var/lib/nova/instances/89909dc1-a7db-4cca-b837-5340532de97b_del
Nov 24 10:04:31 compute-1 nova_compute[230010]: 2025-11-24 10:04:31.563 230014 INFO nova.virt.libvirt.driver [None req-789a9ffd-5d3c-425a-89c9-4ccb0f324ce9 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 89909dc1-a7db-4cca-b837-5340532de97b] Deletion of /var/lib/nova/instances/89909dc1-a7db-4cca-b837-5340532de97b_del complete
Nov 24 10:04:31 compute-1 nova_compute[230010]: 2025-11-24 10:04:31.606 230014 INFO nova.compute.manager [None req-789a9ffd-5d3c-425a-89c9-4ccb0f324ce9 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 89909dc1-a7db-4cca-b837-5340532de97b] Took 0.73 seconds to destroy the instance on the hypervisor.
Nov 24 10:04:31 compute-1 nova_compute[230010]: 2025-11-24 10:04:31.607 230014 DEBUG oslo.service.loopingcall [None req-789a9ffd-5d3c-425a-89c9-4ccb0f324ce9 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 24 10:04:31 compute-1 nova_compute[230010]: 2025-11-24 10:04:31.608 230014 DEBUG nova.compute.manager [-] [instance: 89909dc1-a7db-4cca-b837-5340532de97b] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 24 10:04:31 compute-1 nova_compute[230010]: 2025-11-24 10:04:31.608 230014 DEBUG nova.network.neutron [-] [instance: 89909dc1-a7db-4cca-b837-5340532de97b] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 24 10:04:32 compute-1 nova_compute[230010]: 2025-11-24 10:04:32.138 230014 DEBUG nova.network.neutron [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] [instance: 89909dc1-a7db-4cca-b837-5340532de97b] Updating instance_info_cache with network_info: [{"id": "891e7944-832b-408f-b645-6f51de733021", "address": "fa:16:3e:a8:16:2d", "network": {"id": "22748050-40a9-4373-8c95-5da36c909edc", "bridge": "br-int", "label": "tempest-network-smoke--696376811", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94d069fc040647d5a6e54894eec915fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap891e7944-83", "ovs_interfaceid": "891e7944-832b-408f-b645-6f51de733021", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 24 10:04:32 compute-1 nova_compute[230010]: 2025-11-24 10:04:32.160 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Releasing lock "refresh_cache-89909dc1-a7db-4cca-b837-5340532de97b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 24 10:04:32 compute-1 nova_compute[230010]: 2025-11-24 10:04:32.160 230014 DEBUG nova.compute.manager [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] [instance: 89909dc1-a7db-4cca-b837-5340532de97b] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 24 10:04:32 compute-1 nova_compute[230010]: 2025-11-24 10:04:32.160 230014 DEBUG oslo_concurrency.lockutils [req-620a44d4-6dd9-4cdd-9131-afcba1bf7835 req-d5b13bbf-bb33-4f29-8cb8-18755ebf8350 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Acquired lock "refresh_cache-89909dc1-a7db-4cca-b837-5340532de97b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 24 10:04:32 compute-1 nova_compute[230010]: 2025-11-24 10:04:32.161 230014 DEBUG nova.network.neutron [req-620a44d4-6dd9-4cdd-9131-afcba1bf7835 req-d5b13bbf-bb33-4f29-8cb8-18755ebf8350 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 89909dc1-a7db-4cca-b837-5340532de97b] Refreshing network info cache for port 891e7944-832b-408f-b645-6f51de733021 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 24 10:04:32 compute-1 nova_compute[230010]: 2025-11-24 10:04:32.162 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 10:04:32 compute-1 nova_compute[230010]: 2025-11-24 10:04:32.163 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 10:04:32 compute-1 ceph-mon[80009]: pgmap v1133: 353 pgs: 353 active+clean; 121 MiB data, 350 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 15 KiB/s wr, 1 op/s
Nov 24 10:04:32 compute-1 nova_compute[230010]: 2025-11-24 10:04:32.560 230014 DEBUG nova.network.neutron [-] [instance: 89909dc1-a7db-4cca-b837-5340532de97b] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 24 10:04:32 compute-1 nova_compute[230010]: 2025-11-24 10:04:32.574 230014 INFO nova.compute.manager [-] [instance: 89909dc1-a7db-4cca-b837-5340532de97b] Took 0.97 seconds to deallocate network for instance.
Nov 24 10:04:32 compute-1 nova_compute[230010]: 2025-11-24 10:04:32.630 230014 DEBUG oslo_concurrency.lockutils [None req-789a9ffd-5d3c-425a-89c9-4ccb0f324ce9 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 10:04:32 compute-1 nova_compute[230010]: 2025-11-24 10:04:32.630 230014 DEBUG oslo_concurrency.lockutils [None req-789a9ffd-5d3c-425a-89c9-4ccb0f324ce9 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 10:04:32 compute-1 nova_compute[230010]: 2025-11-24 10:04:32.678 230014 DEBUG oslo_concurrency.processutils [None req-789a9ffd-5d3c-425a-89c9-4ccb0f324ce9 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 10:04:32 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:04:32 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:04:32 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:04:32.836 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:04:32 compute-1 nova_compute[230010]: 2025-11-24 10:04:32.863 230014 DEBUG nova.compute.manager [req-564521e6-1234-42df-8be5-a41f17fd9be6 req-ad155b29-92b1-4627-a276-30d711280658 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 89909dc1-a7db-4cca-b837-5340532de97b] Received event network-vif-deleted-891e7944-832b-408f-b645-6f51de733021 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 24 10:04:33 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 24 10:04:33 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1074205317' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 10:04:33 compute-1 nova_compute[230010]: 2025-11-24 10:04:33.095 230014 DEBUG oslo_concurrency.processutils [None req-789a9ffd-5d3c-425a-89c9-4ccb0f324ce9 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.418s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
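[annotation] The "Running cmd (subprocess)" / "returned: 0 in 0.418s" pair is oslo.concurrency's processutils wrapper, which nova uses here to ask Ceph for pool usage. The same call in isolation, with the command line copied from the log; the JSON keys in the last line assume the standard `ceph df` output format:

    # Sketch of the processutils call logged above.
    import json
    from oslo_concurrency import processutils

    out, err = processutils.execute(
        'ceph', 'df', '--format=json',
        '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')
    stats = json.loads(out)
    print(stats['stats']['total_avail_bytes'])  # assumes standard ceph df keys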
Nov 24 10:04:33 compute-1 nova_compute[230010]: 2025-11-24 10:04:33.101 230014 DEBUG nova.compute.provider_tree [None req-789a9ffd-5d3c-425a-89c9-4ccb0f324ce9 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Inventory has not changed in ProviderTree for provider: 1b7b0f22-dba8-42a8-9de3-763c9152946e update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 24 10:04:33 compute-1 nova_compute[230010]: 2025-11-24 10:04:33.115 230014 DEBUG nova.scheduler.client.report [None req-789a9ffd-5d3c-425a-89c9-4ccb0f324ce9 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Inventory has not changed for provider 1b7b0f22-dba8-42a8-9de3-763c9152946e based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
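[annotation] The inventory dict above is what the resource tracker reports to Placement. The schedulable capacity per resource class follows as (total - reserved) * allocation_ratio; worked out for this host from the values in the preceding line:

    # Effective capacity implied by the logged inventory.
    inv = {'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
           'MEMORY_MB': {'total': 7680, 'reserved': 512, 'allocation_ratio': 1.0},
           'DISK_GB':   {'total': 59,   'reserved': 1,   'allocation_ratio': 0.9}}

    for rc, i in inv.items():
        cap = (i['total'] - i['reserved']) * i['allocation_ratio']
        print(rc, cap)   # VCPU 32.0, MEMORY_MB 7168.0, DISK_GB 52.2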
Nov 24 10:04:33 compute-1 nova_compute[230010]: 2025-11-24 10:04:33.131 230014 DEBUG oslo_concurrency.lockutils [None req-789a9ffd-5d3c-425a-89c9-4ccb0f324ce9 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.501s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 10:04:33 compute-1 nova_compute[230010]: 2025-11-24 10:04:33.156 230014 INFO nova.scheduler.client.report [None req-789a9ffd-5d3c-425a-89c9-4ccb0f324ce9 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Deleted allocations for instance 89909dc1-a7db-4cca-b837-5340532de97b
Nov 24 10:04:33 compute-1 nova_compute[230010]: 2025-11-24 10:04:33.219 230014 DEBUG oslo_concurrency.lockutils [None req-789a9ffd-5d3c-425a-89c9-4ccb0f324ce9 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lock "89909dc1-a7db-4cca-b837-5340532de97b" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.350s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 10:04:33 compute-1 nova_compute[230010]: 2025-11-24 10:04:33.229 230014 DEBUG nova.compute.manager [req-9c677dfe-8c27-480e-9edb-46adbc8b5ab1 req-b6442491-89f0-45d6-822c-e5e36b026280 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 89909dc1-a7db-4cca-b837-5340532de97b] Received event network-vif-plugged-891e7944-832b-408f-b645-6f51de733021 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 24 10:04:33 compute-1 nova_compute[230010]: 2025-11-24 10:04:33.229 230014 DEBUG oslo_concurrency.lockutils [req-9c677dfe-8c27-480e-9edb-46adbc8b5ab1 req-b6442491-89f0-45d6-822c-e5e36b026280 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Acquiring lock "89909dc1-a7db-4cca-b837-5340532de97b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 10:04:33 compute-1 nova_compute[230010]: 2025-11-24 10:04:33.230 230014 DEBUG oslo_concurrency.lockutils [req-9c677dfe-8c27-480e-9edb-46adbc8b5ab1 req-b6442491-89f0-45d6-822c-e5e36b026280 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Lock "89909dc1-a7db-4cca-b837-5340532de97b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 10:04:33 compute-1 nova_compute[230010]: 2025-11-24 10:04:33.230 230014 DEBUG oslo_concurrency.lockutils [req-9c677dfe-8c27-480e-9edb-46adbc8b5ab1 req-b6442491-89f0-45d6-822c-e5e36b026280 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Lock "89909dc1-a7db-4cca-b837-5340532de97b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 10:04:33 compute-1 nova_compute[230010]: 2025-11-24 10:04:33.230 230014 DEBUG nova.compute.manager [req-9c677dfe-8c27-480e-9edb-46adbc8b5ab1 req-b6442491-89f0-45d6-822c-e5e36b026280 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 89909dc1-a7db-4cca-b837-5340532de97b] No waiting events found dispatching network-vif-plugged-891e7944-832b-408f-b645-6f51de733021 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 24 10:04:33 compute-1 nova_compute[230010]: 2025-11-24 10:04:33.231 230014 WARNING nova.compute.manager [req-9c677dfe-8c27-480e-9edb-46adbc8b5ab1 req-b6442491-89f0-45d6-822c-e5e36b026280 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 89909dc1-a7db-4cca-b837-5340532de97b] Received unexpected event network-vif-plugged-891e7944-832b-408f-b645-6f51de733021 for instance with vm_state deleted and task_state None.
Nov 24 10:04:33 compute-1 ceph-mon[80009]: from='client.? 192.168.122.101:0/1074205317' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 10:04:33 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:04:33 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:04:33 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:04:33.424 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:04:33 compute-1 nova_compute[230010]: 2025-11-24 10:04:33.572 230014 DEBUG nova.network.neutron [req-620a44d4-6dd9-4cdd-9131-afcba1bf7835 req-d5b13bbf-bb33-4f29-8cb8-18755ebf8350 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 89909dc1-a7db-4cca-b837-5340532de97b] Updated VIF entry in instance network info cache for port 891e7944-832b-408f-b645-6f51de733021. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 24 10:04:33 compute-1 nova_compute[230010]: 2025-11-24 10:04:33.572 230014 DEBUG nova.network.neutron [req-620a44d4-6dd9-4cdd-9131-afcba1bf7835 req-d5b13bbf-bb33-4f29-8cb8-18755ebf8350 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 89909dc1-a7db-4cca-b837-5340532de97b] Updating instance_info_cache with network_info: [{"id": "891e7944-832b-408f-b645-6f51de733021", "address": "fa:16:3e:a8:16:2d", "network": {"id": "22748050-40a9-4373-8c95-5da36c909edc", "bridge": "br-int", "label": "tempest-network-smoke--696376811", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94d069fc040647d5a6e54894eec915fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap891e7944-83", "ovs_interfaceid": "891e7944-832b-408f-b645-6f51de733021", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 24 10:04:33 compute-1 nova_compute[230010]: 2025-11-24 10:04:33.585 230014 DEBUG oslo_concurrency.lockutils [req-620a44d4-6dd9-4cdd-9131-afcba1bf7835 req-d5b13bbf-bb33-4f29-8cb8-18755ebf8350 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Releasing lock "refresh_cache-89909dc1-a7db-4cca-b837-5340532de97b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 24 10:04:33 compute-1 ovn_metadata_agent[142331]: 2025-11-24 10:04:33.900 142336 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=803b139a-7fca-4549-8597-645cf677225d, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '13'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 24 10:04:34 compute-1 ceph-mon[80009]: pgmap v1134: 353 pgs: 353 active+clean; 43 MiB data, 350 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s rd, 15 KiB/s wr, 15 op/s
Nov 24 10:04:34 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:04:34 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:04:34 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:04:34 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:04:34.839 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:04:35 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:04:35 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:04:35 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:04:35.427 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:04:36 compute-1 nova_compute[230010]: 2025-11-24 10:04:36.125 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:04:36 compute-1 nova_compute[230010]: 2025-11-24 10:04:36.233 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:04:36 compute-1 ceph-mon[80009]: pgmap v1135: 353 pgs: 353 active+clean; 43 MiB data, 350 MiB used, 60 GiB / 60 GiB avail; 9.9 KiB/s rd, 2.7 KiB/s wr, 14 op/s
Nov 24 10:04:36 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:04:36 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:04:36 compute-1 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 24 10:04:36 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:04:36.841 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:04:36 compute-1 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 24 10:04:37 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:04:37 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:04:37 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:04:37.431 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:04:37 compute-1 nova_compute[230010]: 2025-11-24 10:04:37.696 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:04:37 compute-1 nova_compute[230010]: 2025-11-24 10:04:37.768 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:04:38 compute-1 ceph-mon[80009]: pgmap v1136: 353 pgs: 353 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 3.5 KiB/s wr, 29 op/s
Nov 24 10:04:38 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:04:38 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:04:38 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:04:38.844 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:04:39 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:04:39 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:04:39 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:04:39.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:04:39 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:04:40 compute-1 ceph-mon[80009]: pgmap v1137: 353 pgs: 353 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Nov 24 10:04:40 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:04:40 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:04:40 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:04:40.846 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:04:41 compute-1 nova_compute[230010]: 2025-11-24 10:04:41.130 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:04:41 compute-1 nova_compute[230010]: 2025-11-24 10:04:41.268 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:04:41 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:04:41 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:04:41 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:04:41.437 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:04:42 compute-1 ceph-mon[80009]: pgmap v1138: 353 pgs: 353 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Nov 24 10:04:42 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:04:42 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:04:42 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:04:42.849 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:04:43 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:04:43 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:04:43 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:04:43.439 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:04:44 compute-1 ceph-mon[80009]: pgmap v1139: 353 pgs: 353 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.2 KiB/s wr, 29 op/s
Nov 24 10:04:44 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:04:44 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:04:44 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:04:44 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:04:44.852 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:04:45 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 10:04:45 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 10:04:45 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:04:45 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:04:45 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:04:45.442 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:04:45 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 10:04:46 compute-1 nova_compute[230010]: 2025-11-24 10:04:46.103 230014 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763978671.1009011, 89909dc1-a7db-4cca-b837-5340532de97b => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 24 10:04:46 compute-1 nova_compute[230010]: 2025-11-24 10:04:46.103 230014 INFO nova.compute.manager [-] [instance: 89909dc1-a7db-4cca-b837-5340532de97b] VM Stopped (Lifecycle Event)
Nov 24 10:04:46 compute-1 nova_compute[230010]: 2025-11-24 10:04:46.117 230014 DEBUG nova.compute.manager [None req-586f4402-9266-4b2a-9077-f3e71a877705 - - - - - -] [instance: 89909dc1-a7db-4cca-b837-5340532de97b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 24 10:04:46 compute-1 nova_compute[230010]: 2025-11-24 10:04:46.132 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:04:46 compute-1 nova_compute[230010]: 2025-11-24 10:04:46.270 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:04:46 compute-1 podman[244181]: 2025-11-24 10:04:46.326774272 +0000 UTC m=+0.059798595 container health_status 16798e42088c21440960ac0ba4f86339f9edab5c078277a1dcf4fb01c220329c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, tcib_managed=true)
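[annotation] podman's periodic health_status events (here for the multipathd container) come from the healthcheck declared in the container config ('test': '/openstack/healthcheck'). The same check can be triggered on demand with `podman healthcheck run`; a sketch via subprocess, where the container name is taken from the log:

    # Sketch: run the healthcheck that podman logs as health_status=healthy.
    import subprocess

    rc = subprocess.run(['podman', 'healthcheck', 'run', 'multipathd']).returncode
    print('healthy' if rc == 0 else 'unhealthy')   # exit 0 means healthy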
Nov 24 10:04:46 compute-1 ceph-mon[80009]: pgmap v1140: 353 pgs: 353 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 11 KiB/s rd, 767 B/s wr, 14 op/s
Nov 24 10:04:46 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:04:46 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:04:46 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:04:46.855 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:04:47 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:04:47 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:04:47 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:04:47.445 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:04:48 compute-1 ceph-mon[80009]: pgmap v1141: 353 pgs: 353 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 11 KiB/s rd, 767 B/s wr, 15 op/s
Nov 24 10:04:48 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:04:48 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:04:48 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:04:48.857 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:04:48 compute-1 sudo[244203]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 10:04:48 compute-1 sudo[244203]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 10:04:48 compute-1 sudo[244203]: pam_unix(sudo:session): session closed for user root
Nov 24 10:04:49 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:04:49 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:04:49 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:04:49.447 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:04:49 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:04:50 compute-1 ceph-mon[80009]: pgmap v1142: 353 pgs: 353 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:04:50 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:04:50 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:04:50 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:04:50.859 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:04:51 compute-1 nova_compute[230010]: 2025-11-24 10:04:51.136 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:04:51 compute-1 nova_compute[230010]: 2025-11-24 10:04:51.271 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:04:51 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:04:51 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:04:51 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:04:51.451 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:04:52 compute-1 podman[244230]: 2025-11-24 10:04:52.336174352 +0000 UTC m=+0.076582847 container health_status c4a7fece2a8abbd773f184380ebf0443df5298e1c3e7a580495db5c46749acd2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller)
Nov 24 10:04:52 compute-1 ceph-mon[80009]: pgmap v1143: 353 pgs: 353 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:04:52 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:04:52 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:04:52 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:04:52.863 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:04:53 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:04:53 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:04:53 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:04:53.453 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:04:53 compute-1 ceph-mon[80009]: pgmap v1144: 353 pgs: 353 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:04:54 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:04:54 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:04:54 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:04:54 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:04:54.866 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:04:55 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:04:55 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:04:55 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:04:55.457 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:04:56 compute-1 ceph-mon[80009]: pgmap v1145: 353 pgs: 353 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:04:56 compute-1 nova_compute[230010]: 2025-11-24 10:04:56.137 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:04:56 compute-1 nova_compute[230010]: 2025-11-24 10:04:56.273 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:04:56 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:04:56 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:04:56 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:04:56.870 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:04:57 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:04:57 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:04:57 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:04:57.460 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:04:58 compute-1 ceph-mon[80009]: pgmap v1146: 353 pgs: 353 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:04:58 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:04:58 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:04:58 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:04:58.872 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:04:59 compute-1 podman[244259]: 2025-11-24 10:04:59.374827684 +0000 UTC m=+0.080095662 container health_status 6794d9d2f16f865643977633ea2bfec0506af6c4f15262c278b71a4c0754ea0b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS)
Nov 24 10:04:59 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:04:59 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:04:59 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:04:59.463 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:04:59 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:05:00 compute-1 ceph-mon[80009]: pgmap v1147: 353 pgs: 353 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:05:00 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 10:05:00 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 10:05:00 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:05:00 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:05:00 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:05:00.875 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:05:01 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 10:05:01 compute-1 nova_compute[230010]: 2025-11-24 10:05:01.178 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:05:01 compute-1 nova_compute[230010]: 2025-11-24 10:05:01.277 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:05:01 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:05:01 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 10:05:01 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:05:01.466 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 10:05:01 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Nov 24 10:05:01 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2840377285' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 24 10:05:01 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Nov 24 10:05:01 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2840377285' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 24 10:05:02 compute-1 ceph-mon[80009]: pgmap v1148: 353 pgs: 353 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:05:02 compute-1 ceph-mon[80009]: from='client.? 192.168.122.10:0/2840377285' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 24 10:05:02 compute-1 ceph-mon[80009]: from='client.? 192.168.122.10:0/2840377285' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 24 10:05:02 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:05:02 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:05:02 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:05:02.877 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:05:03 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:05:03 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:05:03 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:05:03.470 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:05:04 compute-1 ceph-mon[80009]: pgmap v1149: 353 pgs: 353 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:05:04 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:05:04 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:05:04 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 10:05:04 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:05:04.880 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 10:05:05 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:05:05 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:05:05 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:05:05.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:05:06 compute-1 ceph-mon[80009]: pgmap v1150: 353 pgs: 353 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:05:06 compute-1 nova_compute[230010]: 2025-11-24 10:05:06.180 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:05:06 compute-1 nova_compute[230010]: 2025-11-24 10:05:06.278 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:05:06 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:05:06 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:05:06 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:05:06.883 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:05:07 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:05:07 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:05:07 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:05:07.477 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:05:07 compute-1 sudo[244285]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 10:05:07 compute-1 sudo[244285]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 10:05:07 compute-1 sudo[244285]: pam_unix(sudo:session): session closed for user root
Nov 24 10:05:07 compute-1 sudo[244310]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Nov 24 10:05:07 compute-1 sudo[244310]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 10:05:08 compute-1 ceph-mon[80009]: pgmap v1151: 353 pgs: 353 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:05:08 compute-1 sudo[244310]: pam_unix(sudo:session): session closed for user root
Nov 24 10:05:08 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 24 10:05:08 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 10:05:08 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Nov 24 10:05:08 compute-1 ceph-mon[80009]: log_channel(audit) log [INF] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 10:05:08 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Nov 24 10:05:08 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Nov 24 10:05:08 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Nov 24 10:05:08 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 10:05:08 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Nov 24 10:05:08 compute-1 ceph-mon[80009]: log_channel(audit) log [INF] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 10:05:08 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 24 10:05:08 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 10:05:08 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:05:08 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:05:08 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:05:08.885 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:05:09 compute-1 sudo[244369]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 10:05:09 compute-1 sudo[244369]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 10:05:09 compute-1 sudo[244369]: pam_unix(sudo:session): session closed for user root
Nov 24 10:05:09 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 10:05:09 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 10:05:09 compute-1 ceph-mon[80009]: pgmap v1152: 353 pgs: 353 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Nov 24 10:05:09 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 10:05:09 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 10:05:09 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 10:05:09 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 10:05:09 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 10:05:09 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:05:09 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:05:09 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:05:09.480 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:05:09 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:05:09 compute-1 ovn_controller[132966]: 2025-11-24T10:05:09Z|00110|memory_trim|INFO|Detected inactivity (last active 30005 ms ago): trimming memory
Nov 24 10:05:10 compute-1 ceph-mon[80009]: pgmap v1153: 353 pgs: 353 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Nov 24 10:05:10 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:05:10 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 10:05:10 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:05:10.888 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 10:05:11 compute-1 nova_compute[230010]: 2025-11-24 10:05:11.184 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:05:11 compute-1 nova_compute[230010]: 2025-11-24 10:05:11.281 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:05:11 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:05:11 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:05:11 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:05:11.484 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:05:12 compute-1 ceph-mon[80009]: pgmap v1154: 353 pgs: 353 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:05:12 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:05:12 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:05:12 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:05:12.892 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:05:13 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 24 10:05:13 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 24 10:05:13 compute-1 sudo[244397]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 24 10:05:13 compute-1 sudo[244397]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 10:05:13 compute-1 sudo[244397]: pam_unix(sudo:session): session closed for user root
Nov 24 10:05:13 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:05:13 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 10:05:13 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:05:13.487 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 10:05:14 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 10:05:14 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 10:05:14 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:05:14 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:05:14 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:05:14 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:05:14.894 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:05:15 compute-1 ceph-mon[80009]: pgmap v1155: 353 pgs: 353 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Nov 24 10:05:15 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 10:05:15 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 10:05:15 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:05:15 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:05:15 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:05:15.490 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:05:16 compute-1 nova_compute[230010]: 2025-11-24 10:05:16.188 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:05:16 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 10:05:16 compute-1 nova_compute[230010]: 2025-11-24 10:05:16.283 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:05:16 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:05:16 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 10:05:16 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:05:16.897 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 10:05:17 compute-1 ceph-mon[80009]: pgmap v1156: 353 pgs: 353 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:05:17 compute-1 podman[244424]: 2025-11-24 10:05:17.344338041 +0000 UTC m=+0.075687864 container health_status 16798e42088c21440960ac0ba4f86339f9edab5c078277a1dcf4fb01c220329c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 24 10:05:17 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:05:17 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:05:17 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:05:17.493 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:05:18 compute-1 ceph-mon[80009]: pgmap v1157: 353 pgs: 353 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Nov 24 10:05:18 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:05:18 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:05:18 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:05:18.900 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:05:19 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:05:19 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:05:19 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:05:19.496 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:05:19 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:05:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 10:05:20.069 142336 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 10:05:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 10:05:20.070 142336 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 10:05:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 10:05:20.070 142336 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 10:05:20 compute-1 ceph-mon[80009]: pgmap v1158: 353 pgs: 353 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:05:20 compute-1 nova_compute[230010]: 2025-11-24 10:05:20.765 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 10:05:20 compute-1 nova_compute[230010]: 2025-11-24 10:05:20.765 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 10:05:20 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:05:20 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:05:20 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:05:20.903 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:05:21 compute-1 nova_compute[230010]: 2025-11-24 10:05:21.192 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:05:21 compute-1 nova_compute[230010]: 2025-11-24 10:05:21.286 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:05:21 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:05:21 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:05:21 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:05:21.499 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:05:22 compute-1 ceph-mon[80009]: pgmap v1159: 353 pgs: 353 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:05:22 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:05:22 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:05:22 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:05:22.906 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:05:23 compute-1 podman[244448]: 2025-11-24 10:05:23.327360395 +0000 UTC m=+0.069999980 container health_status c4a7fece2a8abbd773f184380ebf0443df5298e1c3e7a580495db5c46749acd2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Nov 24 10:05:23 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:05:23 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:05:23 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:05:23.502 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:05:23 compute-1 nova_compute[230010]: 2025-11-24 10:05:23.765 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 10:05:23 compute-1 nova_compute[230010]: 2025-11-24 10:05:23.765 230014 DEBUG nova.compute.manager [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 24 10:05:24 compute-1 ceph-mon[80009]: from='client.? 192.168.122.100:0/2711357868' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 10:05:24 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:05:24 compute-1 nova_compute[230010]: 2025-11-24 10:05:24.760 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 10:05:24 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:05:24 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:05:24 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:05:24.908 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:05:25 compute-1 ceph-mon[80009]: pgmap v1160: 353 pgs: 353 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:05:25 compute-1 ceph-mon[80009]: from='client.? 192.168.122.100:0/11465984' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 10:05:25 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:05:25 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:05:25 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:05:25.505 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:05:26 compute-1 nova_compute[230010]: 2025-11-24 10:05:26.195 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:05:26 compute-1 nova_compute[230010]: 2025-11-24 10:05:26.287 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:05:26 compute-1 nova_compute[230010]: 2025-11-24 10:05:26.764 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 10:05:26 compute-1 ceph-mon[80009]: pgmap v1161: 353 pgs: 353 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:05:26 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:05:26 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:05:26 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:05:26.911 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:05:27 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:05:27 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:05:27 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:05:27.508 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:05:27 compute-1 nova_compute[230010]: 2025-11-24 10:05:27.759 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 10:05:27 compute-1 nova_compute[230010]: 2025-11-24 10:05:27.776 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 10:05:27 compute-1 nova_compute[230010]: 2025-11-24 10:05:27.794 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 10:05:27 compute-1 nova_compute[230010]: 2025-11-24 10:05:27.795 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 10:05:27 compute-1 nova_compute[230010]: 2025-11-24 10:05:27.795 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 10:05:27 compute-1 nova_compute[230010]: 2025-11-24 10:05:27.795 230014 DEBUG nova.compute.resource_tracker [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 24 10:05:27 compute-1 nova_compute[230010]: 2025-11-24 10:05:27.795 230014 DEBUG oslo_concurrency.processutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 10:05:27 compute-1 ceph-mon[80009]: from='client.? 192.168.122.102:0/3828311499' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 10:05:28 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 24 10:05:28 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/479419961' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 10:05:28 compute-1 nova_compute[230010]: 2025-11-24 10:05:28.250 230014 DEBUG oslo_concurrency.processutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.454s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 10:05:28 compute-1 nova_compute[230010]: 2025-11-24 10:05:28.415 230014 WARNING nova.virt.libvirt.driver [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 24 10:05:28 compute-1 nova_compute[230010]: 2025-11-24 10:05:28.416 230014 DEBUG nova.compute.resource_tracker [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4923MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 24 10:05:28 compute-1 nova_compute[230010]: 2025-11-24 10:05:28.416 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 10:05:28 compute-1 nova_compute[230010]: 2025-11-24 10:05:28.417 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 10:05:28 compute-1 nova_compute[230010]: 2025-11-24 10:05:28.479 230014 DEBUG nova.compute.resource_tracker [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 24 10:05:28 compute-1 nova_compute[230010]: 2025-11-24 10:05:28.479 230014 DEBUG nova.compute.resource_tracker [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 24 10:05:28 compute-1 nova_compute[230010]: 2025-11-24 10:05:28.511 230014 DEBUG oslo_concurrency.processutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 10:05:28 compute-1 ceph-mon[80009]: from='client.? 192.168.122.102:0/3440100558' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 10:05:28 compute-1 ceph-mon[80009]: from='client.? 192.168.122.101:0/479419961' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 10:05:28 compute-1 ceph-mon[80009]: pgmap v1162: 353 pgs: 353 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:05:28 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:05:28 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:05:28 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:05:28.914 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:05:28 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 24 10:05:28 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1358056965' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 10:05:28 compute-1 nova_compute[230010]: 2025-11-24 10:05:28.971 230014 DEBUG oslo_concurrency.processutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.460s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 10:05:28 compute-1 nova_compute[230010]: 2025-11-24 10:05:28.976 230014 DEBUG nova.compute.provider_tree [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Inventory has not changed in ProviderTree for provider: 1b7b0f22-dba8-42a8-9de3-763c9152946e update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 24 10:05:28 compute-1 nova_compute[230010]: 2025-11-24 10:05:28.988 230014 DEBUG nova.scheduler.client.report [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Inventory has not changed for provider 1b7b0f22-dba8-42a8-9de3-763c9152946e based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 24 10:05:29 compute-1 nova_compute[230010]: 2025-11-24 10:05:29.036 230014 DEBUG nova.compute.resource_tracker [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 24 10:05:29 compute-1 nova_compute[230010]: 2025-11-24 10:05:29.036 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.620s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 10:05:29 compute-1 sudo[244522]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 10:05:29 compute-1 sudo[244522]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 10:05:29 compute-1 sudo[244522]: pam_unix(sudo:session): session closed for user root
Nov 24 10:05:29 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:05:29 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:05:29 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:05:29.510 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:05:29 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:05:29 compute-1 ceph-mon[80009]: from='client.? 192.168.122.101:0/1358056965' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 10:05:30 compute-1 nova_compute[230010]: 2025-11-24 10:05:30.025 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 10:05:30 compute-1 nova_compute[230010]: 2025-11-24 10:05:30.026 230014 DEBUG nova.compute.manager [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 24 10:05:30 compute-1 nova_compute[230010]: 2025-11-24 10:05:30.026 230014 DEBUG nova.compute.manager [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 24 10:05:30 compute-1 nova_compute[230010]: 2025-11-24 10:05:30.042 230014 DEBUG nova.compute.manager [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 24 10:05:30 compute-1 nova_compute[230010]: 2025-11-24 10:05:30.042 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 10:05:30 compute-1 nova_compute[230010]: 2025-11-24 10:05:30.043 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 10:05:30 compute-1 podman[244548]: 2025-11-24 10:05:30.33336738 +0000 UTC m=+0.074526752 container health_status 6794d9d2f16f865643977633ea2bfec0506af6c4f15262c278b71a4c0754ea0b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2)
Nov 24 10:05:30 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 10:05:30 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 10:05:30 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 10:05:30 compute-1 ceph-mon[80009]: pgmap v1163: 353 pgs: 353 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:05:30 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:05:30 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:05:30 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:05:30.917 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:05:31 compute-1 nova_compute[230010]: 2025-11-24 10:05:31.199 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:05:31 compute-1 nova_compute[230010]: 2025-11-24 10:05:31.288 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:05:31 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:05:31 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:05:31 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:05:31.513 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:05:32 compute-1 ceph-mon[80009]: pgmap v1164: 353 pgs: 353 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:05:32 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:05:32 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:05:32 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:05:32.919 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:05:33 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:05:33 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:05:33 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:05:33.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:05:34 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:05:34 compute-1 ceph-mon[80009]: pgmap v1165: 353 pgs: 353 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:05:34 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:05:34 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:05:34 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:05:34.923 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:05:35 compute-1 sshd-session[244569]: Invalid user ethereum from 80.94.92.165 port 53326
Nov 24 10:05:35 compute-1 sshd-session[244569]: Connection closed by invalid user ethereum 80.94.92.165 port 53326 [preauth]
Nov 24 10:05:35 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:05:35 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:05:35 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:05:35.519 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:05:36 compute-1 nova_compute[230010]: 2025-11-24 10:05:36.214 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:05:36 compute-1 nova_compute[230010]: 2025-11-24 10:05:36.290 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:05:36 compute-1 ceph-mon[80009]: pgmap v1166: 353 pgs: 353 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:05:36 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:05:36 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:05:36 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:05:36.926 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:05:37 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:05:37 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:05:37 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:05:37.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:05:38 compute-1 ceph-mon[80009]: pgmap v1167: 353 pgs: 353 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:05:38 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:05:38 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:05:38 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:05:38.930 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:05:39 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:05:39 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:05:39 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:05:39 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:05:39.527 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:05:40 compute-1 ceph-mon[80009]: pgmap v1168: 353 pgs: 353 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:05:40 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:05:40 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:05:40 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:05:40.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:05:41 compute-1 nova_compute[230010]: 2025-11-24 10:05:41.219 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:05:41 compute-1 nova_compute[230010]: 2025-11-24 10:05:41.294 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:05:41 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:05:41 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:05:41 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:05:41.529 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:05:41 compute-1 sshd-session[244574]: Accepted publickey for zuul from 192.168.122.10 port 46982 ssh2: ECDSA SHA256:MeSde0OmmlmFVnLWx/OKNxgeUUFhxUB3MA0eUyH5QEE
Nov 24 10:05:41 compute-1 systemd-logind[823]: New session 55 of user zuul.
Nov 24 10:05:42 compute-1 systemd[1]: Started Session 55 of User zuul.
Nov 24 10:05:42 compute-1 sshd-session[244574]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 24 10:05:42 compute-1 sudo[244579]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/bash -c 'rm -rf /var/tmp/sos-osp && mkdir /var/tmp/sos-osp && sos report --batch --all-logs --tmp-dir=/var/tmp/sos-osp  -p container,openstack_edpm,system,storage,virt'
Nov 24 10:05:42 compute-1 sudo[244579]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 10:05:42 compute-1 ceph-mon[80009]: pgmap v1169: 353 pgs: 353 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:05:42 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:05:42 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:05:42 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:05:42.938 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:05:43 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:05:43 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:05:43 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:05:43.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:05:43 compute-1 sshd-session[244643]: Connection closed by authenticating user root 164.92.213.168 port 54322 [preauth]
Nov 24 10:05:44 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:05:44 compute-1 ceph-mon[80009]: from='client.17061 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:05:44 compute-1 ceph-mon[80009]: from='client.26591 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:05:44 compute-1 ceph-mon[80009]: pgmap v1170: 353 pgs: 353 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:05:44 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:05:44 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:05:44 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:05:44.941 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:05:45 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 10:05:45 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 10:05:45 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "status"} v 0)
Nov 24 10:05:45 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2646850891' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Nov 24 10:05:45 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:05:45 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:05:45 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:05:45.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:05:45 compute-1 ceph-mon[80009]: from='client.25333 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:05:45 compute-1 ceph-mon[80009]: from='client.26597 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:05:45 compute-1 ceph-mon[80009]: from='client.17079 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:05:45 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 10:05:45 compute-1 ceph-mon[80009]: from='client.? 192.168.122.101:0/2646850891' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Nov 24 10:05:45 compute-1 ceph-mon[80009]: from='client.? 192.168.122.100:0/1767787726' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Nov 24 10:05:45 compute-1 ceph-mon[80009]: from='client.25339 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:05:46 compute-1 nova_compute[230010]: 2025-11-24 10:05:46.221 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:05:46 compute-1 nova_compute[230010]: 2025-11-24 10:05:46.296 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:05:46 compute-1 ceph-mon[80009]: from='client.? 192.168.122.102:0/4057720917' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Nov 24 10:05:46 compute-1 ceph-mon[80009]: pgmap v1171: 353 pgs: 353 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:05:46 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:05:46 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:05:46 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:05:46.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:05:47 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:05:47 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:05:47 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:05:47.538 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:05:47 compute-1 podman[244885]: 2025-11-24 10:05:47.766364281 +0000 UTC m=+0.062377992 container health_status 16798e42088c21440960ac0ba4f86339f9edab5c078277a1dcf4fb01c220329c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd)
Nov 24 10:05:48 compute-1 ovs-vsctl[244933]: ovs|00001|db_ctl_base|ERR|no key "dpdk-init" in Open_vSwitch record "." column other_config
Nov 24 10:05:48 compute-1 ceph-mon[80009]: pgmap v1172: 353 pgs: 353 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:05:48 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:05:48 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:05:48 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:05:48.946 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:05:49 compute-1 sudo[244976]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 10:05:49 compute-1 sudo[244976]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 10:05:49 compute-1 sudo[244976]: pam_unix(sudo:session): session closed for user root
Nov 24 10:05:49 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:05:49 compute-1 virtqemud[229578]: Failed to connect socket to '/var/run/libvirt/virtnetworkd-sock-ro': No such file or directory
Nov 24 10:05:49 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:05:49 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:05:49 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:05:49.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:05:49 compute-1 virtqemud[229578]: Failed to connect socket to '/var/run/libvirt/virtnwfilterd-sock-ro': No such file or directory
Nov 24 10:05:49 compute-1 virtqemud[229578]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Nov 24 10:05:50 compute-1 ceph-mds[85277]: mds.cephfs.compute-1.vpamdk asok_command: cache status {prefix=cache status} (starting...)
Nov 24 10:05:50 compute-1 ceph-mds[85277]: mds.cephfs.compute-1.vpamdk Can't run that command on an inactive MDS!
Nov 24 10:05:50 compute-1 ceph-mds[85277]: mds.cephfs.compute-1.vpamdk asok_command: client ls {prefix=client ls} (starting...)
Nov 24 10:05:50 compute-1 ceph-mds[85277]: mds.cephfs.compute-1.vpamdk Can't run that command on an inactive MDS!
Nov 24 10:05:50 compute-1 lvm[245310]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 24 10:05:50 compute-1 lvm[245310]: VG ceph_vg0 finished
Nov 24 10:05:50 compute-1 ceph-mon[80009]: pgmap v1173: 353 pgs: 353 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:05:50 compute-1 ceph-mds[85277]: mds.cephfs.compute-1.vpamdk asok_command: damage ls {prefix=damage ls} (starting...)
Nov 24 10:05:50 compute-1 ceph-mds[85277]: mds.cephfs.compute-1.vpamdk Can't run that command on an inactive MDS!
Nov 24 10:05:50 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:05:50 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:05:50 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:05:50.949 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:05:51 compute-1 ceph-mds[85277]: mds.cephfs.compute-1.vpamdk asok_command: dump loads {prefix=dump loads} (starting...)
Nov 24 10:05:51 compute-1 ceph-mds[85277]: mds.cephfs.compute-1.vpamdk Can't run that command on an inactive MDS!
Nov 24 10:05:51 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "report"} v 0)
Nov 24 10:05:51 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/805920003' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Nov 24 10:05:51 compute-1 ceph-mds[85277]: mds.cephfs.compute-1.vpamdk asok_command: dump tree {prefix=dump tree,root=/} (starting...)
Nov 24 10:05:51 compute-1 ceph-mds[85277]: mds.cephfs.compute-1.vpamdk Can't run that command on an inactive MDS!
Nov 24 10:05:51 compute-1 nova_compute[230010]: 2025-11-24 10:05:51.227 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:05:51 compute-1 ceph-mds[85277]: mds.cephfs.compute-1.vpamdk asok_command: dump_blocked_ops {prefix=dump_blocked_ops} (starting...)
Nov 24 10:05:51 compute-1 ceph-mds[85277]: mds.cephfs.compute-1.vpamdk Can't run that command on an inactive MDS!
Nov 24 10:05:51 compute-1 nova_compute[230010]: 2025-11-24 10:05:51.297 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:05:51 compute-1 ceph-mds[85277]: mds.cephfs.compute-1.vpamdk asok_command: dump_historic_ops {prefix=dump_historic_ops} (starting...)
Nov 24 10:05:51 compute-1 ceph-mds[85277]: mds.cephfs.compute-1.vpamdk Can't run that command on an inactive MDS!
Nov 24 10:05:51 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 24 10:05:51 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/467161652' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 10:05:51 compute-1 ceph-mds[85277]: mds.cephfs.compute-1.vpamdk asok_command: dump_historic_ops_by_duration {prefix=dump_historic_ops_by_duration} (starting...)
Nov 24 10:05:51 compute-1 ceph-mds[85277]: mds.cephfs.compute-1.vpamdk Can't run that command on an inactive MDS!
Nov 24 10:05:51 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:05:51 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:05:51 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:05:51.544 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:05:51 compute-1 ceph-mds[85277]: mds.cephfs.compute-1.vpamdk asok_command: dump_ops_in_flight {prefix=dump_ops_in_flight} (starting...)
Nov 24 10:05:51 compute-1 ceph-mds[85277]: mds.cephfs.compute-1.vpamdk Can't run that command on an inactive MDS!
Nov 24 10:05:51 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config log"} v 0)
Nov 24 10:05:51 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2099222001' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Nov 24 10:05:51 compute-1 ceph-mon[80009]: from='client.26618 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:05:51 compute-1 ceph-mon[80009]: from='client.? 192.168.122.101:0/805920003' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Nov 24 10:05:51 compute-1 ceph-mon[80009]: from='client.? ' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Nov 24 10:05:51 compute-1 ceph-mon[80009]: from='client.26630 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:05:51 compute-1 ceph-mon[80009]: from='client.? 192.168.122.101:0/467161652' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 10:05:51 compute-1 ceph-mon[80009]: from='client.25348 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:05:51 compute-1 ceph-mds[85277]: mds.cephfs.compute-1.vpamdk asok_command: get subtrees {prefix=get subtrees} (starting...)
Nov 24 10:05:51 compute-1 ceph-mds[85277]: mds.cephfs.compute-1.vpamdk Can't run that command on an inactive MDS!
Nov 24 10:05:52 compute-1 ceph-mds[85277]: mds.cephfs.compute-1.vpamdk asok_command: ops {prefix=ops} (starting...)
Nov 24 10:05:52 compute-1 ceph-mds[85277]: mds.cephfs.compute-1.vpamdk Can't run that command on an inactive MDS!
Nov 24 10:05:52 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config-key dump"} v 0)
Nov 24 10:05:52 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1395852554' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Nov 24 10:05:52 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "log last", "channel": "cephadm"} v 0)
Nov 24 10:05:52 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2236574839' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Nov 24 10:05:52 compute-1 ceph-mon[80009]: from='client.? 192.168.122.101:0/2099222001' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Nov 24 10:05:52 compute-1 ceph-mon[80009]: from='client.26654 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:05:52 compute-1 ceph-mon[80009]: from='client.? 192.168.122.101:0/1395852554' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Nov 24 10:05:52 compute-1 ceph-mon[80009]: from='client.? 192.168.122.101:0/2236574839' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Nov 24 10:05:52 compute-1 ceph-mon[80009]: pgmap v1174: 353 pgs: 353 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:05:52 compute-1 ceph-mds[85277]: mds.cephfs.compute-1.vpamdk asok_command: session ls {prefix=session ls} (starting...)
Nov 24 10:05:52 compute-1 ceph-mds[85277]: mds.cephfs.compute-1.vpamdk Can't run that command on an inactive MDS!
Nov 24 10:05:52 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr dump"} v 0)
Nov 24 10:05:52 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/464524726' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Nov 24 10:05:52 compute-1 ceph-mds[85277]: mds.cephfs.compute-1.vpamdk asok_command: status {prefix=status} (starting...)
Nov 24 10:05:52 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:05:52 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:05:52 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:05:52.951 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:05:53 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr metadata"} v 0)
Nov 24 10:05:53 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2385770739' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Nov 24 10:05:53 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:05:53 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:05:53 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:05:53.548 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:05:53 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "features"} v 0)
Nov 24 10:05:53 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2194257639' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Nov 24 10:05:53 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr module ls"} v 0)
Nov 24 10:05:53 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/832786423' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Nov 24 10:05:53 compute-1 ceph-mon[80009]: from='client.26672 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:05:53 compute-1 ceph-mon[80009]: from='client.? 192.168.122.101:0/464524726' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Nov 24 10:05:53 compute-1 ceph-mon[80009]: from='client.26684 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:05:53 compute-1 ceph-mon[80009]: from='client.? 192.168.122.101:0/2385770739' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Nov 24 10:05:53 compute-1 ceph-mon[80009]: from='client.? 192.168.122.101:0/2194257639' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Nov 24 10:05:53 compute-1 ceph-mon[80009]: from='client.? ' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Nov 24 10:05:53 compute-1 ceph-mon[80009]: from='client.17139 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:05:53 compute-1 ceph-mon[80009]: from='client.? 192.168.122.101:0/832786423' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Nov 24 10:05:53 compute-1 ceph-mon[80009]: from='client.? 192.168.122.100:0/2624586791' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Nov 24 10:05:54 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "health", "detail": "detail"} v 0)
Nov 24 10:05:54 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2100678030' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Nov 24 10:05:54 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr services"} v 0)
Nov 24 10:05:54 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1728751067' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 24 10:05:54 compute-1 podman[245849]: 2025-11-24 10:05:54.362584083 +0000 UTC m=+0.091883290 container health_status c4a7fece2a8abbd773f184380ebf0443df5298e1c3e7a580495db5c46749acd2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Nov 24 10:05:54 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config log"} v 0)
Nov 24 10:05:54 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1555652975' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Nov 24 10:05:54 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:05:54 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr stat"} v 0)
Nov 24 10:05:54 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/422130285' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Nov 24 10:05:54 compute-1 ceph-mon[80009]: from='client.25372 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:05:54 compute-1 ceph-mon[80009]: from='client.17154 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:05:54 compute-1 ceph-mon[80009]: from='client.? 192.168.122.100:0/3262176359' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 10:05:54 compute-1 ceph-mon[80009]: from='client.? 192.168.122.101:0/2100678030' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Nov 24 10:05:54 compute-1 ceph-mon[80009]: from='client.? 192.168.122.101:0/1728751067' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 24 10:05:54 compute-1 ceph-mon[80009]: from='client.? ' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Nov 24 10:05:54 compute-1 ceph-mon[80009]: from='client.? 192.168.122.102:0/3738133278' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Nov 24 10:05:54 compute-1 ceph-mon[80009]: from='client.25387 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:05:54 compute-1 ceph-mon[80009]: from='client.17178 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:05:54 compute-1 ceph-mon[80009]: from='client.? 192.168.122.100:0/1555652975' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Nov 24 10:05:54 compute-1 ceph-mon[80009]: from='client.26744 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:05:54 compute-1 ceph-mon[80009]: from='client.? 192.168.122.102:0/1643640683' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 10:05:54 compute-1 ceph-mon[80009]: from='client.? 192.168.122.101:0/422130285' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Nov 24 10:05:54 compute-1 ceph-mon[80009]: pgmap v1175: 353 pgs: 353 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:05:54 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} v 0)
Nov 24 10:05:54 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/833142917' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Nov 24 10:05:54 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:05:54 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:05:54 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:05:54.955 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:05:55 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr versions"} v 0)
Nov 24 10:05:55 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4157313813' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Nov 24 10:05:55 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} v 0)
Nov 24 10:05:55 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1743021929' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Nov 24 10:05:55 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:05:55 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 10:05:55 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:05:55.550 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 10:05:55 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr dump"} v 0)
Nov 24 10:05:55 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2180658346' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Nov 24 10:05:56 compute-1 nova_compute[230010]: 2025-11-24 10:05:56.231 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:05:56 compute-1 ceph-mon[80009]: from='client.25405 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:05:56 compute-1 ceph-mon[80009]: from='client.17193 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:05:56 compute-1 ceph-mon[80009]: from='client.? 192.168.122.102:0/1729941568' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Nov 24 10:05:56 compute-1 ceph-mon[80009]: from='client.? 192.168.122.101:0/833142917' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Nov 24 10:05:56 compute-1 ceph-mon[80009]: from='client.? 192.168.122.100:0/1251301519' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Nov 24 10:05:56 compute-1 ceph-mon[80009]: from='client.? 192.168.122.101:0/4157313813' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Nov 24 10:05:56 compute-1 ceph-mon[80009]: from='client.25417 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:05:56 compute-1 ceph-mon[80009]: from='client.? 192.168.122.102:0/1072016513' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Nov 24 10:05:56 compute-1 ceph-mon[80009]: from='client.17220 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:05:56 compute-1 ceph-mon[80009]: from='client.? 192.168.122.100:0/2500613712' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Nov 24 10:05:56 compute-1 ceph-mon[80009]: from='client.26810 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:05:56 compute-1 ceph-mon[80009]: from='client.? 192.168.122.101:0/1743021929' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Nov 24 10:05:56 compute-1 ceph-mon[80009]: from='client.? 192.168.122.102:0/3155060605' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Nov 24 10:05:56 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr metadata"} v 0)
Nov 24 10:05:56 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/419866527' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Nov 24 10:05:56 compute-1 nova_compute[230010]: 2025-11-24 10:05:56.300 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 74506240 unmapped: 1032192 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:33:03.609133+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 74506240 unmapped: 1032192 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:33:04.609338+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 950515 data_alloc: 218103808 data_used: 151552
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 74514432 unmapped: 1024000 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:33:05.609469+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 74514432 unmapped: 1024000 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:33:06.609603+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 74514432 unmapped: 1024000 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:33:07.609796+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 74522624 unmapped: 1015808 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:33:08.609965+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 74522624 unmapped: 1015808 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:33:09.610084+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 950515 data_alloc: 218103808 data_used: 151552
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 74530816 unmapped: 1007616 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:33:10.610237+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 74539008 unmapped: 999424 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:33:11.610384+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 74539008 unmapped: 999424 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:33:12.610573+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:33:13.610728+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 74539008 unmapped: 999424 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:33:14.610873+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 74547200 unmapped: 991232 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 950515 data_alloc: 218103808 data_used: 151552
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:33:15.611030+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 74547200 unmapped: 991232 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:33:16.611199+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 74547200 unmapped: 991232 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:33:17.611363+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 74555392 unmapped: 983040 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:33:18.611518+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 74555392 unmapped: 983040 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:33:19.611665+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 74555392 unmapped: 983040 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 950515 data_alloc: 218103808 data_used: 151552
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:33:20.611830+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 74563584 unmapped: 974848 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:33:21.611962+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 74571776 unmapped: 966656 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:33:22.612082+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 74579968 unmapped: 958464 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:33:23.612204+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 74579968 unmapped: 958464 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:33:24.612337+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 74579968 unmapped: 958464 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 950515 data_alloc: 218103808 data_used: 151552
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:33:25.612483+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 74588160 unmapped: 950272 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:33:26.612678+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 74604544 unmapped: 933888 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:33:27.612905+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 74612736 unmapped: 925696 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:33:28.613023+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 74612736 unmapped: 925696 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:33:29.613154+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 74620928 unmapped: 917504 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 950515 data_alloc: 218103808 data_used: 151552
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:33:30.613322+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 74620928 unmapped: 917504 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:33:31.613580+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 74629120 unmapped: 909312 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:33:32.613717+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 74629120 unmapped: 909312 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:33:33.614967+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 74629120 unmapped: 909312 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:33:34.615570+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 74637312 unmapped: 901120 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 950515 data_alloc: 218103808 data_used: 151552
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:33:35.615701+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 74637312 unmapped: 901120 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:33:36.615858+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 74645504 unmapped: 892928 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:33:37.616099+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 74645504 unmapped: 892928 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:33:38.616233+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 74653696 unmapped: 884736 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:33:39.616356+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 74653696 unmapped: 884736 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 950515 data_alloc: 218103808 data_used: 151552
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:33:40.616502+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 74653696 unmapped: 884736 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:33:41.616664+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 74661888 unmapped: 876544 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:33:42.616880+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 74661888 unmapped: 876544 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:33:43.617077+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 74661888 unmapped: 876544 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
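Editor's note: the heartbeat line packs store_statfs counters in hex. Decoding them — with the field order read as available / internally reserved / total, then data stored / allocated, which is my reading of BlueStore's dump rather than a documented format — shows a roughly 20 GiB device that is almost entirely free:

```python
# Hedged sketch: decode the hex counters from the
# "heartbeat osd_stat(store_statfs(...))" line above.
fields = {
    "available":           0x4fc9ed000,
    "internally_reserved": 0x0,
    "total":               0x4ffc00000,
    "data_stored":         0x177815,
    "data_allocated":      0x22f000,
    "omap":                0x63b,
    "meta":                0x2fdf9c5,
}

for name, value in fields.items():
    print(f"{name:>20}: {value:>14,d} bytes ({value / 2**30:.3f} GiB)")

# Result: a ~20.0 GiB total with ~19.9 GiB available, ~1.5 MiB of object
# data stored against ~2.2 MiB allocated, and ~48 MiB of metadata.
```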
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:33:44.617252+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 74661888 unmapped: 876544 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
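Editor's note: the two High Pri Pool Ratio values land on suspiciously round fractions, which suggests they are computed as simple share quotients inside BlueStore's commit_cache_size path; the fraction identification below is my own, and the exact formula is not shown in this log.

```python
from fractions import Fraction

# Hedged sketch: recover the simple fractions behind the two logged
# "High Pri Pool Ratio" values. Pure arithmetic on the logged numbers.
for logged in (0.285714, 0.0555556):
    approx = Fraction(logged).limit_denominator(100)
    print(f"{logged} ~= {approx} = {float(approx):.6f}")
# 0.285714 ~= 2/7, 0.0555556 ~= 1/18
```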
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 950515 data_alloc: 218103808 data_used: 151552
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:33:45.617382+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 74670080 unmapped: 868352 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:33:46.617469+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 74678272 unmapped: 860160 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:33:47.617687+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 74686464 unmapped: 851968 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:33:48.617857+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 74686464 unmapped: 851968 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:33:49.618066+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 74686464 unmapped: 851968 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 ms_handle_reset con 0x5634bd23a800 session 0x5634bf08bc20
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 950515 data_alloc: 218103808 data_used: 151552
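Editor's note: relating the _resize_shards allocations to cache_size shows how the budget reported by the tune_memory lines is split across the BlueStore cache shards, and how little of each shard is actually used at this point in the log; this is plain arithmetic on the logged values.

```python
# Hedged sketch: per-shard share of the cache budget and utilization,
# with names and numbers copied verbatim from the _resize_shards line.
cache_size = 2845415833
pools = {
    "kv":       (1207959552, 2144),
    "kv_onode": (234881024,  464),
    "meta":     (1140850688, 950515),
    "data":     (218103808,  151552),
}
for name, (alloc, used) in pools.items():
    print(f"{name:>8}: {alloc / cache_size:6.1%} of cache allocated, "
          f"{used / alloc:8.5%} of that used")
print(f"allocated total: "
      f"{sum(a for a, _ in pools.values()) / cache_size:.1%} of cache_size")
```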
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:33:50.618217+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 74694656 unmapped: 843776 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:33:51.618463+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 74694656 unmapped: 843776 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:33:52.618688+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 74694656 unmapped: 843776 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:33:53.618836+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 74694656 unmapped: 843776 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:33:54.618991+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 74702848 unmapped: 835584 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 950515 data_alloc: 218103808 data_used: 151552
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:33:55.619144+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 74702848 unmapped: 835584 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:33:56.619334+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 74711040 unmapped: 827392 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:33:57.619483+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 74711040 unmapped: 827392 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:33:58.619615+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 74711040 unmapped: 827392 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:33:59.619773+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 74719232 unmapped: 819200 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 950515 data_alloc: 218103808 data_used: 151552
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:34:00.619928+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 74719232 unmapped: 819200 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:34:01.620126+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 74727424 unmapped: 811008 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:34:02.620278+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 74727424 unmapped: 811008 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:34:03.620563+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 74727424 unmapped: 811008 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:34:04.620818+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 74735616 unmapped: 802816 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 950515 data_alloc: 218103808 data_used: 151552
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:34:05.620936+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 74735616 unmapped: 802816 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:34:06.621082+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 74735616 unmapped: 802816 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:34:07.621279+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 74743808 unmapped: 794624 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:34:08.621455+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 74743808 unmapped: 794624 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 101.126350403s of 101.215919495s, submitted: 3
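Editor's note: the _kv_sync_thread utilization line above converts directly into a busy-time estimate; reading "submitted" as the number of KV transaction batches committed in the window is my interpretation, not a documented contract.

```python
# Hedged sketch: busy time of the BlueStore KV sync thread over the
# reported window, from the idle/total figures in the line above.
idle, window, submitted = 101.126350403, 101.215919495, 3
busy = window - idle
print(f"idle : {idle / window:.3%}")            # ~99.912% idle
print(f"busy : {busy:.3f} s over {submitted} submits "
      f"(~{busy / submitted * 1000:.0f} ms each)")
```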
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:34:09.621610+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 74743808 unmapped: 794624 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 949333 data_alloc: 218103808 data_used: 151552
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:34:10.621734+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 74752000 unmapped: 786432 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:34:11.621889+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 74760192 unmapped: 778240 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:34:12.622032+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 74768384 unmapped: 770048 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:34:13.622174+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 74768384 unmapped: 770048 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:34:14.622347+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 74776576 unmapped: 761856 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 949333 data_alloc: 218103808 data_used: 151552
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:34:15.622498+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 74776576 unmapped: 761856 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:34:16.622640+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 74776576 unmapped: 761856 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:34:17.622807+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 74784768 unmapped: 753664 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:34:18.623022+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 74784768 unmapped: 753664 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:34:19.623157+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 74784768 unmapped: 753664 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 949333 data_alloc: 218103808 data_used: 151552
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:34:20.623308+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 74792960 unmapped: 745472 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:34:21.623519+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 74792960 unmapped: 745472 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:34:22.623655+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 74792960 unmapped: 745472 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:34:23.623837+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 74801152 unmapped: 737280 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:34:24.623989+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 74801152 unmapped: 737280 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 949333 data_alloc: 218103808 data_used: 151552
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:34:25.624147+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 74809344 unmapped: 729088 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:34:26.624534+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 74809344 unmapped: 729088 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:34:27.625498+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 74809344 unmapped: 729088 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:34:28.625735+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 74817536 unmapped: 720896 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:34:29.626173+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 74817536 unmapped: 720896 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 949333 data_alloc: 218103808 data_used: 151552
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:34:30.626530+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 74817536 unmapped: 720896 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:34:31.627201+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 74833920 unmapped: 704512 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:34:32.627376+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 74833920 unmapped: 704512 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 ms_handle_reset con 0x5634be106400 session 0x5634bef0de00
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:34:33.627605+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 74833920 unmapped: 704512 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:34:34.627810+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 74842112 unmapped: 696320 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 949333 data_alloc: 218103808 data_used: 151552
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:34:35.628016+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 74842112 unmapped: 696320 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:34:36.628190+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 74842112 unmapped: 696320 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:34:37.628346+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 74850304 unmapped: 688128 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:34:38.628510+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 74850304 unmapped: 688128 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:34:39.628646+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 74850304 unmapped: 688128 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 949333 data_alloc: 218103808 data_used: 151552
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:34:40.628787+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 74858496 unmapped: 679936 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:34:41.628968+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 74858496 unmapped: 679936 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:34:42.629106+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 74866688 unmapped: 671744 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:34:43.629281+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 74866688 unmapped: 671744 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:34:44.629466+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 74866688 unmapped: 671744 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 949333 data_alloc: 218103808 data_used: 151552
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:34:45.629589+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 74874880 unmapped: 663552 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:34:46.629763+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 74874880 unmapped: 663552 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 37.788032532s of 37.799560547s, submitted: 2
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634bf0fc000
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:34:47.630499+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 74891264 unmapped: 647168 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:34:48.631461+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 74891264 unmapped: 647168 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:34:49.631627+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 74899456 unmapped: 638976 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 952357 data_alloc: 218103808 data_used: 151552
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:34:50.631760+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 74899456 unmapped: 638976 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:34:51.631886+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 74899456 unmapped: 638976 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:34:52.632005+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 74907648 unmapped: 630784 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:34:53.632145+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 74907648 unmapped: 630784 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:34:54.632285+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 74907648 unmapped: 630784 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 952357 data_alloc: 218103808 data_used: 151552
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:34:55.632455+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 74915840 unmapped: 622592 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:34:56.632575+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 74924032 unmapped: 614400 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:34:57.632745+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 74924032 unmapped: 614400 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:34:58.632897+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 74932224 unmapped: 606208 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:34:59.633013+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 74932224 unmapped: 606208 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 952357 data_alloc: 218103808 data_used: 151552
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:35:00.633128+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 74932224 unmapped: 606208 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:35:01.633273+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 74932224 unmapped: 606208 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:35:02.633358+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 74940416 unmapped: 598016 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:35:03.633473+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 74940416 unmapped: 598016 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:35:04.633601+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 74940416 unmapped: 598016 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 952357 data_alloc: 218103808 data_used: 151552
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:35:05.633743+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 74948608 unmapped: 589824 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:35:06.633881+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 74948608 unmapped: 589824 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:35:07.634042+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 74956800 unmapped: 581632 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:35:08.634200+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 74956800 unmapped: 581632 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:35:09.634310+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 74964992 unmapped: 573440 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:35:10.634476+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 952357 data_alloc: 218103808 data_used: 151552
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 74973184 unmapped: 565248 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:35:11.634604+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 74973184 unmapped: 565248 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:35:12.634845+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 74981376 unmapped: 557056 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:35:13.635061+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 74981376 unmapped: 557056 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:35:14.635279+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 74981376 unmapped: 557056 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:35:15.635432+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 952357 data_alloc: 218103808 data_used: 151552
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 74989568 unmapped: 548864 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:35:16.635629+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 74997760 unmapped: 540672 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:35:17.635857+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 74997760 unmapped: 540672 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:35:18.636021+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 75005952 unmapped: 532480 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:35:19.636183+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 75005952 unmapped: 532480 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:35:20.636314+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 952357 data_alloc: 218103808 data_used: 151552
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 75005952 unmapped: 532480 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:35:21.636489+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 75014144 unmapped: 524288 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:35:22.636631+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 75014144 unmapped: 524288 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:35:23.636814+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 75014144 unmapped: 524288 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:35:24.637505+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 75022336 unmapped: 516096 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:35:25.637653+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 952357 data_alloc: 218103808 data_used: 151552
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 75022336 unmapped: 516096 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:35:26.637780+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 75030528 unmapped: 507904 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:35:27.637961+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 75030528 unmapped: 507904 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:35:28.638097+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 75038720 unmapped: 499712 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:35:29.638534+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 75038720 unmapped: 499712 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:35:30.639076+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 952357 data_alloc: 218103808 data_used: 151552
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 75038720 unmapped: 499712 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:35:31.639854+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 75046912 unmapped: 491520 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:35:32.640085+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 75046912 unmapped: 491520 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:35:33.640284+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 75046912 unmapped: 491520 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:35:34.640526+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 ms_handle_reset con 0x5634be106800 session 0x5634bfe2e000
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 75046912 unmapped: 491520 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:35:35.640684+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 952357 data_alloc: 218103808 data_used: 151552
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 75055104 unmapped: 483328 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:35:36.640846+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 75055104 unmapped: 483328 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:35:37.641602+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 75063296 unmapped: 475136 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:35:38.641801+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 75063296 unmapped: 475136 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:35:39.641960+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 75071488 unmapped: 466944 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:35:40.642093+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 952357 data_alloc: 218103808 data_used: 151552
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 75071488 unmapped: 466944 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:35:41.642222+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 75079680 unmapped: 458752 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:35:42.642368+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 75087872 unmapped: 450560 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:35:43.642513+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 75087872 unmapped: 450560 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:35:44.642642+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 75087872 unmapped: 450560 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:35:45.642759+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 952357 data_alloc: 218103808 data_used: 151552
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 75096064 unmapped: 442368 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:35:46.642909+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 75096064 unmapped: 442368 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:35:47.643068+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 75096064 unmapped: 442368 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:35:48.643237+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 75104256 unmapped: 434176 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:35:49.643387+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 75104256 unmapped: 434176 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:35:50.643616+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 952357 data_alloc: 218103808 data_used: 151552
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 75112448 unmapped: 425984 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:35:51.644589+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634bd23a800
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 75112448 unmapped: 425984 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:35:52.644809+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 75112448 unmapped: 425984 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:35:53.645041+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 75120640 unmapped: 417792 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:35:54.645343+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 67.314422607s of 67.320625305s, submitted: 2
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 75120640 unmapped: 417792 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:35:55.645721+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 951766 data_alloc: 218103808 data_used: 151552
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 75120640 unmapped: 417792 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:35:56.646081+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 75128832 unmapped: 409600 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:35:57.646478+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 75128832 unmapped: 409600 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:35:58.647159+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 75137024 unmapped: 401408 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:35:59.647896+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 75137024 unmapped: 401408 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:36:00.648044+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 951175 data_alloc: 218103808 data_used: 151552
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 75137024 unmapped: 401408 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:36:01.648280+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 75145216 unmapped: 393216 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:36:02.648470+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 75153408 unmapped: 385024 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Cumulative writes: 6424 writes, 26K keys, 6424 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s
                                           Cumulative WAL: 6424 writes, 1124 syncs, 5.72 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 6424 writes, 26K keys, 6424 commit groups, 1.0 writes per commit group, ingest: 19.84 MB, 0.03 MB/s
                                           Interval WAL: 6424 writes, 1124 syncs, 5.72 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5634bb9db350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5634bb9db350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5634bb9db350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5634bb9db350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.6      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.6      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.6      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5634bb9db350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5634bb9db350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5634bb9db350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5634bb9da9b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 4e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5634bb9da9b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 4e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.6      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.6      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.6      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5634bb9da9b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 4e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5634bb9db350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5634bb9db350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:36:03.648617+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 75210752 unmapped: 327680 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:36:04.648778+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 75218944 unmapped: 319488 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:36:05.648933+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 951175 data_alloc: 218103808 data_used: 151552
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 75218944 unmapped: 319488 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:36:06.649107+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 75218944 unmapped: 319488 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:36:07.649303+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 75227136 unmapped: 311296 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:36:08.649452+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 75227136 unmapped: 311296 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:36:09.649585+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 75235328 unmapped: 303104 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:36:10.649776+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 951175 data_alloc: 218103808 data_used: 151552
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 75235328 unmapped: 303104 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:36:11.649922+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 75235328 unmapped: 303104 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:36:12.650071+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 75243520 unmapped: 294912 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:36:13.650268+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 75243520 unmapped: 294912 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:36:14.650444+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 75243520 unmapped: 294912 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:36:15.650613+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 951175 data_alloc: 218103808 data_used: 151552
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 75251712 unmapped: 286720 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:36:16.650772+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 75251712 unmapped: 286720 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:36:17.650961+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 75251712 unmapped: 286720 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:36:18.651104+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 75259904 unmapped: 278528 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:36:19.651356+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 75259904 unmapped: 278528 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:36:20.651571+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 951175 data_alloc: 218103808 data_used: 151552
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 75259904 unmapped: 278528 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:36:21.651788+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 75268096 unmapped: 270336 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:36:22.652076+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 75268096 unmapped: 270336 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:36:23.652256+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 75276288 unmapped: 262144 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:36:24.652508+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 75276288 unmapped: 262144 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:36:25.652684+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 951175 data_alloc: 218103808 data_used: 151552
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 75276288 unmapped: 262144 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:36:26.652882+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 75292672 unmapped: 245760 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:36:27.653064+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 75292672 unmapped: 245760 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:36:28.653224+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 75300864 unmapped: 237568 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:36:29.653488+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 75300864 unmapped: 237568 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:36:30.653713+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 951175 data_alloc: 218103808 data_used: 151552
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 75309056 unmapped: 229376 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:36:31.653965+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 75309056 unmapped: 229376 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:36:32.654589+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 75309056 unmapped: 229376 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:36:33.655563+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 75317248 unmapped: 221184 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:36:34.655793+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 75317248 unmapped: 221184 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:36:35.655952+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 951175 data_alloc: 218103808 data_used: 151552
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 75325440 unmapped: 212992 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:36:36.656092+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 75325440 unmapped: 212992 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:36:37.657322+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 75333632 unmapped: 204800 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:36:38.658887+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 75333632 unmapped: 204800 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:36:39.660110+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 75333632 unmapped: 204800 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:36:40.660511+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 951175 data_alloc: 218103808 data_used: 151552
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 75341824 unmapped: 196608 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:36:41.661556+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 75341824 unmapped: 196608 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:36:42.661889+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 75350016 unmapped: 188416 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:36:43.662203+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 75350016 unmapped: 188416 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:36:44.662456+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 75350016 unmapped: 188416 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:36:45.662723+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 951175 data_alloc: 218103808 data_used: 151552
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 75366400 unmapped: 172032 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:36:46.662948+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 75366400 unmapped: 172032 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:36:47.663183+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 75374592 unmapped: 163840 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:36:48.663385+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 75382784 unmapped: 155648 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:36:49.663654+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 75382784 unmapped: 155648 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:36:50.663905+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 951175 data_alloc: 218103808 data_used: 151552
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 75390976 unmapped: 147456 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:36:51.664067+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 75390976 unmapped: 147456 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:36:52.664256+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 75390976 unmapped: 147456 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:36:53.664486+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 75399168 unmapped: 139264 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:36:54.664687+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 75399168 unmapped: 139264 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:36:55.664929+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 951175 data_alloc: 218103808 data_used: 151552
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 75399168 unmapped: 139264 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:36:56.665144+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 75407360 unmapped: 131072 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:36:57.665338+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 75407360 unmapped: 131072 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:36:58.665561+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 75415552 unmapped: 122880 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:36:59.665698+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 75415552 unmapped: 122880 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:37:00.665831+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 951175 data_alloc: 218103808 data_used: 151552
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 75415552 unmapped: 122880 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:37:01.666043+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 75423744 unmapped: 114688 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:37:02.666169+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 75423744 unmapped: 114688 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:37:03.666330+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 75423744 unmapped: 114688 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:37:04.666532+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 75431936 unmapped: 106496 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:37:05.666683+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 951175 data_alloc: 218103808 data_used: 151552
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 75431936 unmapped: 106496 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:37:06.666841+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 75431936 unmapped: 106496 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:37:07.667231+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 75440128 unmapped: 98304 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:37:08.667371+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 75440128 unmapped: 98304 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:37:09.667752+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 75440128 unmapped: 98304 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:37:10.668093+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 951175 data_alloc: 218103808 data_used: 151552
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 75448320 unmapped: 90112 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:37:11.668346+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 75448320 unmapped: 90112 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:37:12.668612+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 75456512 unmapped: 81920 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:37:13.668777+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 75456512 unmapped: 81920 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:37:14.669006+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 75464704 unmapped: 73728 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:37:15.669169+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 951175 data_alloc: 218103808 data_used: 151552
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 75464704 unmapped: 73728 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:37:16.669391+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 75464704 unmapped: 73728 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:37:17.670796+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 75472896 unmapped: 65536 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:37:18.670934+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 75472896 unmapped: 65536 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:37:19.671073+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 75481088 unmapped: 57344 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:37:20.671506+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 86.433448792s of 86.441337585s, submitted: 2
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 951262 data_alloc: 218103808 data_used: 151552
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 75497472 unmapped: 40960 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:37:21.671609+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 76603392 unmapped: 1032192 heap: 77635584 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:37:22.671740+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 77963264 unmapped: 1769472 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:37:23.671866+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 77971456 unmapped: 1761280 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:37:24.672007+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 77971456 unmapped: 1761280 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:37:25.672151+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 951175 data_alloc: 218103808 data_used: 151552
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 77987840 unmapped: 1744896 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:37:26.672517+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 77987840 unmapped: 1744896 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:37:27.672761+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 77987840 unmapped: 1744896 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:37:28.672914+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 ms_handle_reset con 0x5634bd23a800 session 0x5634bf1daf00
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 77987840 unmapped: 1744896 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:37:29.673060+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 77987840 unmapped: 1744896 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:37:30.673221+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 951175 data_alloc: 218103808 data_used: 151552
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 77987840 unmapped: 1744896 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:37:31.673387+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 77987840 unmapped: 1744896 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:37:32.673805+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 77987840 unmapped: 1744896 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:37:33.674119+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 77987840 unmapped: 1744896 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:37:34.674264+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 77987840 unmapped: 1744896 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:37:35.674392+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 951175 data_alloc: 218103808 data_used: 151552
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78004224 unmapped: 1728512 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:37:36.674619+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78004224 unmapped: 1728512 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:37:37.674803+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78004224 unmapped: 1728512 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:37:38.674985+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 77979648 unmapped: 1753088 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:37:39.675140+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 77979648 unmapped: 1753088 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:37:40.675281+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 951175 data_alloc: 218103808 data_used: 151552
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 77979648 unmapped: 1753088 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:37:41.675421+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 77987840 unmapped: 1744896 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:37:42.675550+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:37:43.675677+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 77987840 unmapped: 1744896 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:37:44.675881+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78004224 unmapped: 1728512 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 23.445825577s of 24.319128036s, submitted: 234
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:37:45.676032+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78004224 unmapped: 1728512 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 952687 data_alloc: 218103808 data_used: 151552
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:37:46.676178+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78012416 unmapped: 1720320 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:37:47.676359+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78012416 unmapped: 1720320 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:37:48.676504+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78020608 unmapped: 1712128 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:37:49.676673+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78020608 unmapped: 1712128 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:37:50.676854+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78020608 unmapped: 1712128 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 952687 data_alloc: 218103808 data_used: 151552
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:37:51.677010+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78028800 unmapped: 1703936 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:37:52.677238+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78028800 unmapped: 1703936 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:37:53.677457+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78045184 unmapped: 1687552 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:37:54.677691+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78045184 unmapped: 1687552 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:37:55.677877+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78045184 unmapped: 1687552 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 952687 data_alloc: 218103808 data_used: 151552
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:37:56.678022+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78053376 unmapped: 1679360 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:37:57.678206+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78053376 unmapped: 1679360 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:37:58.678388+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78053376 unmapped: 1679360 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:37:59.678634+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78061568 unmapped: 1671168 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:38:00.678809+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78069760 unmapped: 1662976 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 952687 data_alloc: 218103808 data_used: 151552
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:38:01.678960+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78077952 unmapped: 1654784 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:38:02.679163+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78077952 unmapped: 1654784 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:38:03.679360+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78086144 unmapped: 1646592 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:38:04.679482+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78086144 unmapped: 1646592 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:38:05.679625+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78086144 unmapped: 1646592 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 952687 data_alloc: 218103808 data_used: 151552
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:38:06.679750+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78086144 unmapped: 1646592 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:38:07.679958+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78086144 unmapped: 1646592 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:38:08.680079+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 23.313213348s of 23.316928864s, submitted: 1
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78094336 unmapped: 1638400 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:38:09.680284+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78094336 unmapped: 1638400 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:38:10.680506+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78094336 unmapped: 1638400 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 952096 data_alloc: 218103808 data_used: 151552
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:38:11.680671+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78094336 unmapped: 1638400 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:38:12.680845+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78094336 unmapped: 1638400 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:38:13.681041+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78094336 unmapped: 1638400 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:38:14.681220+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78094336 unmapped: 1638400 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:38:15.681357+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78094336 unmapped: 1638400 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 952096 data_alloc: 218103808 data_used: 151552
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:38:16.681548+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78094336 unmapped: 1638400 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:38:17.681849+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78094336 unmapped: 1638400 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:38:18.681984+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78094336 unmapped: 1638400 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:38:19.682109+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78094336 unmapped: 1638400 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:38:20.682290+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78094336 unmapped: 1638400 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 952096 data_alloc: 218103808 data_used: 151552
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:38:21.682434+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78094336 unmapped: 1638400 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:38:22.682576+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78094336 unmapped: 1638400 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:38:23.682709+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78094336 unmapped: 1638400 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:38:24.682855+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78094336 unmapped: 1638400 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:38:25.683007+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78094336 unmapped: 1638400 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 952096 data_alloc: 218103808 data_used: 151552
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:38:26.683163+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78094336 unmapped: 1638400 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:38:27.683327+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78094336 unmapped: 1638400 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:38:28.683458+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78094336 unmapped: 1638400 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:38:29.683598+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78094336 unmapped: 1638400 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:38:30.683741+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78094336 unmapped: 1638400 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 952096 data_alloc: 218103808 data_used: 151552
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:38:31.683879+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78094336 unmapped: 1638400 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:38:32.684020+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78094336 unmapped: 1638400 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:38:33.684157+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78094336 unmapped: 1638400 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:38:34.684290+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78094336 unmapped: 1638400 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:38:35.684467+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78094336 unmapped: 1638400 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 952096 data_alloc: 218103808 data_used: 151552
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:38:36.684676+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78094336 unmapped: 1638400 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:38:37.684877+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78094336 unmapped: 1638400 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:38:38.685019+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78094336 unmapped: 1638400 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:38:39.685254+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78094336 unmapped: 1638400 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:38:40.685430+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78094336 unmapped: 1638400 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 952096 data_alloc: 218103808 data_used: 151552
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:38:41.685581+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78094336 unmapped: 1638400 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:38:42.685715+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78094336 unmapped: 1638400 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:38:43.685928+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78102528 unmapped: 1630208 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:38:44.686105+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78102528 unmapped: 1630208 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:38:45.686312+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78102528 unmapped: 1630208 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 952096 data_alloc: 218103808 data_used: 151552
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:38:46.686607+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78102528 unmapped: 1630208 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:38:47.686864+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78102528 unmapped: 1630208 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 ms_handle_reset con 0x5634bf0fc000 session 0x5634bfcb3a40
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:38:48.687147+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78102528 unmapped: 1630208 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:38:49.687305+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78102528 unmapped: 1630208 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:38:50.687608+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78102528 unmapped: 1630208 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 952096 data_alloc: 218103808 data_used: 151552
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:38:51.687765+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78102528 unmapped: 1630208 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:38:52.687919+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78102528 unmapped: 1630208 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:38:53.688112+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78102528 unmapped: 1630208 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:38:54.688325+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78102528 unmapped: 1630208 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:38:55.688529+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78102528 unmapped: 1630208 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 952096 data_alloc: 218103808 data_used: 151552
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:38:56.688767+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78102528 unmapped: 1630208 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:38:57.689005+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78102528 unmapped: 1630208 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:38:58.689237+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78102528 unmapped: 1630208 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:38:59.689462+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78102528 unmapped: 1630208 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:39:00.689622+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78102528 unmapped: 1630208 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 952096 data_alloc: 218103808 data_used: 151552
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:39:01.689840+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78102528 unmapped: 1630208 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:39:02.690018+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78102528 unmapped: 1630208 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:39:03.690174+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78102528 unmapped: 1630208 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:39:04.690323+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78102528 unmapped: 1630208 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:39:05.690487+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78102528 unmapped: 1630208 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 952096 data_alloc: 218103808 data_used: 151552
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:39:06.690698+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78102528 unmapped: 1630208 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:39:07.690869+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78102528 unmapped: 1630208 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:39:08.691023+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78102528 unmapped: 1630208 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:39:09.691235+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78102528 unmapped: 1630208 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:39:10.691472+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78102528 unmapped: 1630208 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 952096 data_alloc: 218103808 data_used: 151552
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:39:11.691594+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78102528 unmapped: 1630208 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:39:12.692504+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78102528 unmapped: 1630208 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:39:13.692707+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78102528 unmapped: 1630208 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:39:14.693034+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78102528 unmapped: 1630208 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:39:15.693307+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78102528 unmapped: 1630208 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 952096 data_alloc: 218103808 data_used: 151552
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:39:16.693542+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78110720 unmapped: 1622016 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:39:17.693737+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78110720 unmapped: 1622016 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:39:18.693960+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78110720 unmapped: 1622016 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:39:19.694160+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78110720 unmapped: 1622016 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:39:20.694391+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78110720 unmapped: 1622016 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 952096 data_alloc: 218103808 data_used: 151552
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:39:21.694573+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78110720 unmapped: 1622016 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:39:22.694725+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78110720 unmapped: 1622016 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:39:23.694912+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78110720 unmapped: 1622016 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:39:24.695171+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78110720 unmapped: 1622016 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:39:25.695331+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78110720 unmapped: 1622016 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 952096 data_alloc: 218103808 data_used: 151552
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:39:26.695501+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78110720 unmapped: 1622016 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:39:27.695684+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78110720 unmapped: 1622016 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:39:28.695819+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78110720 unmapped: 1622016 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:39:29.695934+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78110720 unmapped: 1622016 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:39:30.696077+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78110720 unmapped: 1622016 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:39:31.696292+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 952096 data_alloc: 218103808 data_used: 151552
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78110720 unmapped: 1622016 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:39:32.696576+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78110720 unmapped: 1622016 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:39:33.696722+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78110720 unmapped: 1622016 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:39:34.696864+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78110720 unmapped: 1622016 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:39:35.697011+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78110720 unmapped: 1622016 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:39:36.697160+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 952096 data_alloc: 218103808 data_used: 151552
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78110720 unmapped: 1622016 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:39:37.697388+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78110720 unmapped: 1622016 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:39:38.697464+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78110720 unmapped: 1622016 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:39:39.697622+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78110720 unmapped: 1622016 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:39:40.697781+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78110720 unmapped: 1622016 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:39:41.697928+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 952096 data_alloc: 218103808 data_used: 151552
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78110720 unmapped: 1622016 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:39:42.698068+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78110720 unmapped: 1622016 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:39:43.698272+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78110720 unmapped: 1622016 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:39:44.698473+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78110720 unmapped: 1622016 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:39:45.698626+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78118912 unmapped: 1613824 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:39:46.698780+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 952096 data_alloc: 218103808 data_used: 151552
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78118912 unmapped: 1613824 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:39:47.698955+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78118912 unmapped: 1613824 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:39:48.699135+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78118912 unmapped: 1613824 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:39:49.699292+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78118912 unmapped: 1613824 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:39:50.699426+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78118912 unmapped: 1613824 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:39:51.699577+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 952096 data_alloc: 218103808 data_used: 151552
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78118912 unmapped: 1613824 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:39:52.699753+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78118912 unmapped: 1613824 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:39:53.699917+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78118912 unmapped: 1613824 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:39:54.700109+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78118912 unmapped: 1613824 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:39:55.700276+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78118912 unmapped: 1613824 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:39:56.700417+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 952096 data_alloc: 218103808 data_used: 151552
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78118912 unmapped: 1613824 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:39:57.700630+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78118912 unmapped: 1613824 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:39:58.700771+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78118912 unmapped: 1613824 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:39:59.700921+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78118912 unmapped: 1613824 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:40:00.701080+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78118912 unmapped: 1613824 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:40:01.701257+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 952096 data_alloc: 218103808 data_used: 151552
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78118912 unmapped: 1613824 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:40:02.701485+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78118912 unmapped: 1613824 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:40:03.701627+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78118912 unmapped: 1613824 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:40:04.701760+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78118912 unmapped: 1613824 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:40:05.701900+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78118912 unmapped: 1613824 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:40:06.702039+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 952096 data_alloc: 218103808 data_used: 151552
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78118912 unmapped: 1613824 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:40:07.702218+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78118912 unmapped: 1613824 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:40:08.702440+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78118912 unmapped: 1613824 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:40:09.702606+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78118912 unmapped: 1613824 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:40:10.702799+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78118912 unmapped: 1613824 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:40:11.702985+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 952096 data_alloc: 218103808 data_used: 151552
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78118912 unmapped: 1613824 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:40:12.703179+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78118912 unmapped: 1613824 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:40:13.703299+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78118912 unmapped: 1613824 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:40:14.703538+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78118912 unmapped: 1613824 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:40:15.703757+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78118912 unmapped: 1613824 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:40:16.703922+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 952096 data_alloc: 218103808 data_used: 151552
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78118912 unmapped: 1613824 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:40:17.704066+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78118912 unmapped: 1613824 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:40:18.704301+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78118912 unmapped: 1613824 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:40:19.704451+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78118912 unmapped: 1613824 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:40:20.704573+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78118912 unmapped: 1613824 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:40:21.704704+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 952096 data_alloc: 218103808 data_used: 151552
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78118912 unmapped: 1613824 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:40:22.704820+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78118912 unmapped: 1613824 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:40:23.705096+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78127104 unmapped: 1605632 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:40:24.705256+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78135296 unmapped: 1597440 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:40:25.705509+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78135296 unmapped: 1597440 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:40:26.705674+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 952096 data_alloc: 218103808 data_used: 151552
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78135296 unmapped: 1597440 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:40:27.705887+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78135296 unmapped: 1597440 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:40:28.706067+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78135296 unmapped: 1597440 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:40:29.706253+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634be106400
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 141.261001587s of 141.271926880s, submitted: 1
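The _kv_sync_thread line reports idle time over its sampling window, so the busy fraction falls out directly: here the sync thread worked for roughly 11 ms of a 141 s window, consistent with an OSD that is heartbeating but serving no client I/O.

    # Busy fraction of the BlueStore kv-sync thread, from the line above (Python).
    idle, window = 141.261001587, 141.271926880
    print(f"kv_sync busy: {100 * (window - idle) / window:.4f}%")   # ~0.0077%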
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78135296 unmapped: 1597440 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:40:30.706438+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78135296 unmapped: 1597440 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:40:31.706627+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 953608 data_alloc: 218103808 data_used: 151552
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78135296 unmapped: 1597440 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:40:32.706781+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78135296 unmapped: 1597440 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:40:33.706950+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78135296 unmapped: 1597440 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:40:34.707104+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78135296 unmapped: 1597440 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:40:35.707252+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78135296 unmapped: 1597440 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:40:36.707549+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 953608 data_alloc: 218103808 data_used: 151552
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:40:37.707746+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78135296 unmapped: 1597440 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:40:38.707954+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78135296 unmapped: 1597440 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:40:39.708116+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78135296 unmapped: 1597440 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:40:40.708273+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78135296 unmapped: 1597440 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:40:41.708478+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78135296 unmapped: 1597440 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 953608 data_alloc: 218103808 data_used: 151552
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:40:42.708620+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78135296 unmapped: 1597440 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:40:43.708775+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78135296 unmapped: 1597440 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:40:44.709031+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78135296 unmapped: 1597440 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:40:45.709246+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78135296 unmapped: 1597440 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:40:46.709487+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78135296 unmapped: 1597440 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 953608 data_alloc: 218103808 data_used: 151552
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:40:47.709723+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78135296 unmapped: 1597440 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:40:48.709922+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78135296 unmapped: 1597440 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:40:49.710132+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78135296 unmapped: 1597440 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:40:50.710351+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78135296 unmapped: 1597440 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:40:51.710559+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78135296 unmapped: 1597440 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 953608 data_alloc: 218103808 data_used: 151552
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:40:52.710730+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78135296 unmapped: 1597440 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:40:53.710891+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78135296 unmapped: 1597440 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:40:54.711085+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78135296 unmapped: 1597440 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:40:55.711241+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78135296 unmapped: 1597440 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:40:56.711312+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78135296 unmapped: 1597440 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 953608 data_alloc: 218103808 data_used: 151552
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:40:57.711484+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78135296 unmapped: 1597440 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:40:58.711636+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78135296 unmapped: 1597440 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:40:59.711746+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78135296 unmapped: 1597440 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:41:00.711889+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78135296 unmapped: 1597440 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:41:01.712026+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78135296 unmapped: 1597440 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 953608 data_alloc: 218103808 data_used: 151552
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:41:02.712156+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78143488 unmapped: 1589248 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:41:03.712284+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78143488 unmapped: 1589248 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 ms_handle_reset con 0x5634be106400 session 0x5634bf4563c0
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:41:04.712461+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78143488 unmapped: 1589248 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:41:05.712635+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78143488 unmapped: 1589248 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:41:06.712877+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78143488 unmapped: 1589248 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 953608 data_alloc: 218103808 data_used: 151552
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:41:07.713022+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78143488 unmapped: 1589248 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:41:08.713150+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78143488 unmapped: 1589248 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:41:09.713285+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78143488 unmapped: 1589248 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:41:10.713442+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78143488 unmapped: 1589248 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:41:11.713592+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78143488 unmapped: 1589248 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 953608 data_alloc: 218103808 data_used: 151552
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:41:12.713729+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78143488 unmapped: 1589248 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:41:13.713902+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78143488 unmapped: 1589248 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634be106800
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 44.565235138s of 44.568458557s, submitted: 1
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:41:14.713994+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78143488 unmapped: 1589248 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:41:15.714109+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
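The monitor address in the _send_mon_message line splits into protocol, host, port, and nonce: v2 denotes the msgr2 wire protocol, 3300 is its standard port, and the trailing /0 is the connection nonce. A trivial splitter:

    # Split a Ceph entity address such as v2:192.168.122.100:3300/0 (Python).
    addr = "v2:192.168.122.100:3300/0"
    proto, rest = addr.split(":", 1)
    hostport, nonce = rest.rsplit("/", 1)
    host, port = hostport.rsplit(":", 1)
    print(proto, host, port, nonce)   # v2 192.168.122.100 3300 0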
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78151680 unmapped: 1581056 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:41:16.714238+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78151680 unmapped: 1581056 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 955120 data_alloc: 218103808 data_used: 151552
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:41:17.714455+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78151680 unmapped: 1581056 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:41:18.714614+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78151680 unmapped: 1581056 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:41:19.714759+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78151680 unmapped: 1581056 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:41:20.714898+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78151680 unmapped: 1581056 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:41:21.715033+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78151680 unmapped: 1581056 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 955120 data_alloc: 218103808 data_used: 151552
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:41:22.715206+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78151680 unmapped: 1581056 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:41:23.715339+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78159872 unmapped: 1572864 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:41:24.715457+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78168064 unmapped: 1564672 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:41:25.715617+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78168064 unmapped: 1564672 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:41:26.715866+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78168064 unmapped: 1564672 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956632 data_alloc: 218103808 data_used: 151552
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:41:27.716282+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78168064 unmapped: 1564672 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:41:28.716462+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78168064 unmapped: 1564672 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 14.729745865s of 14.739072800s, submitted: 2
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:41:29.716652+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79216640 unmapped: 516096 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:41:30.716784+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79216640 unmapped: 516096 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:41:31.716902+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79216640 unmapped: 516096 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956041 data_alloc: 218103808 data_used: 151552
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:41:32.717057+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79216640 unmapped: 516096 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:41:33.717283+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79224832 unmapped: 507904 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:41:34.717532+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79224832 unmapped: 507904 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:41:35.717720+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79224832 unmapped: 507904 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:41:36.717876+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79233024 unmapped: 499712 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956041 data_alloc: 218103808 data_used: 151552
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:41:37.718050+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79233024 unmapped: 499712 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:41:38.718251+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79233024 unmapped: 499712 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:41:39.718428+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79233024 unmapped: 499712 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:41:40.718554+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79233024 unmapped: 499712 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:41:41.718702+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79233024 unmapped: 499712 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956041 data_alloc: 218103808 data_used: 151552
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:41:42.718939+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79233024 unmapped: 499712 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:41:43.719093+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79233024 unmapped: 499712 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:41:44.719273+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79233024 unmapped: 499712 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:41:45.719455+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79233024 unmapped: 499712 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:41:46.719580+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79233024 unmapped: 499712 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956041 data_alloc: 218103808 data_used: 151552
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:41:47.719764+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79233024 unmapped: 499712 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:41:48.719970+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79233024 unmapped: 499712 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:41:49.720107+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79233024 unmapped: 499712 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:41:50.720282+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79233024 unmapped: 499712 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:41:51.720424+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 ms_handle_reset con 0x5634be106800 session 0x5634bd461c20
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79233024 unmapped: 499712 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956041 data_alloc: 218103808 data_used: 151552
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:41:52.720604+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79233024 unmapped: 499712 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:41:53.720722+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79233024 unmapped: 499712 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:41:54.721563+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79233024 unmapped: 499712 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:41:55.721729+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79233024 unmapped: 499712 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:41:56.721936+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79233024 unmapped: 499712 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956041 data_alloc: 218103808 data_used: 151552
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:41:57.722148+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 491520 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:41:58.722292+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 491520 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:41:59.722420+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 491520 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:42:00.722549+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 491520 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:42:01.722667+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 491520 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956041 data_alloc: 218103808 data_used: 151552
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:42:02.722788+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 491520 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:42:03.722934+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 491520 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:42:04.723089+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 491520 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:42:05.723213+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 491520 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:42:06.723368+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 491520 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956041 data_alloc: 218103808 data_used: 151552
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:42:07.723633+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 491520 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634bee53000
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 39.088268280s of 39.090908051s, submitted: 1
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:42:08.723817+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 491520 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:42:09.724002+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 491520 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:42:10.724165+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 491520 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:42:11.724305+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 491520 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 957553 data_alloc: 218103808 data_used: 151552
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:42:12.724444+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 491520 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:42:13.724604+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 491520 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:42:14.724806+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 491520 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:42:15.724938+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 491520 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:42:16.725058+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 491520 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 957553 data_alloc: 218103808 data_used: 151552
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:42:17.725244+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 491520 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:42:18.725458+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 491520 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:42:19.725611+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 491520 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:42:20.725737+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.436481476s of 12.442902565s, submitted: 1
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 491520 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:42:21.725866+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 491520 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 958474 data_alloc: 218103808 data_used: 151552
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:42:22.725998+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 491520 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:42:23.726186+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 491520 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:42:24.726555+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 491520 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:42:25.726799+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 491520 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:42:26.726953+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79257600 unmapped: 475136 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 957883 data_alloc: 218103808 data_used: 151552
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:42:27.727141+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79257600 unmapped: 475136 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:42:28.727309+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79257600 unmapped: 475136 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:42:29.727452+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79257600 unmapped: 475136 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:42:30.727580+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79257600 unmapped: 475136 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:42:31.727768+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79257600 unmapped: 475136 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 957883 data_alloc: 218103808 data_used: 151552
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:42:32.727935+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79257600 unmapped: 475136 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:42:33.728102+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79257600 unmapped: 475136 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:42:34.728260+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79257600 unmapped: 475136 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:42:35.728384+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79257600 unmapped: 475136 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:42:36.728542+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79273984 unmapped: 458752 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 957883 data_alloc: 218103808 data_used: 151552
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:42:37.728774+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79273984 unmapped: 458752 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:42:38.728872+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79273984 unmapped: 458752 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:42:39.728998+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79273984 unmapped: 458752 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:42:40.729173+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79273984 unmapped: 458752 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:42:41.729303+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79273984 unmapped: 458752 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 957883 data_alloc: 218103808 data_used: 151552
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:42:42.729468+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79273984 unmapped: 458752 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:42:43.729604+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79273984 unmapped: 458752 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:42:44.729792+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79273984 unmapped: 458752 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:42:45.729933+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79273984 unmapped: 458752 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:42:46.730050+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79273984 unmapped: 458752 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 957883 data_alloc: 218103808 data_used: 151552
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:42:47.730218+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79273984 unmapped: 458752 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:42:48.730355+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79273984 unmapped: 458752 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:42:49.730482+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79273984 unmapped: 458752 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:42:50.730647+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79273984 unmapped: 458752 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:42:51.730767+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79273984 unmapped: 458752 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 957883 data_alloc: 218103808 data_used: 151552
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:42:52.730925+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79273984 unmapped: 458752 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:42:53.731049+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79273984 unmapped: 458752 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:42:54.731205+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79273984 unmapped: 458752 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:42:55.731367+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79273984 unmapped: 458752 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:42:56.731523+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79298560 unmapped: 434176 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 957883 data_alloc: 218103808 data_used: 151552
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:42:57.731679+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79298560 unmapped: 434176 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:42:58.731841+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79298560 unmapped: 434176 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:42:59.731980+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 ms_handle_reset con 0x5634be107400 session 0x5634bef0c5a0
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634bd23a800
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79298560 unmapped: 434176 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:43:00.732161+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79298560 unmapped: 434176 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:43:01.732333+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79298560 unmapped: 434176 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 957883 data_alloc: 218103808 data_used: 151552
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:43:02.732455+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79298560 unmapped: 434176 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:43:03.732642+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 ms_handle_reset con 0x5634bee53000 session 0x5634bf225e00
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79298560 unmapped: 434176 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:43:04.732802+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79298560 unmapped: 434176 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:43:05.732986+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79306752 unmapped: 425984 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:43:06.733125+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79306752 unmapped: 425984 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 957883 data_alloc: 218103808 data_used: 151552
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:43:07.733333+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79306752 unmapped: 425984 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:43:08.733473+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79306752 unmapped: 425984 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:43:09.733602+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79306752 unmapped: 425984 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:43:10.733889+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79306752 unmapped: 425984 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:43:11.734110+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79306752 unmapped: 425984 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 957883 data_alloc: 218103808 data_used: 151552
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:43:12.734244+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79306752 unmapped: 425984 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:43:13.734486+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79306752 unmapped: 425984 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:43:14.734704+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79306752 unmapped: 425984 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:43:15.734947+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79306752 unmapped: 425984 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:43:16.735156+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79323136 unmapped: 409600 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 957883 data_alloc: 218103808 data_used: 151552
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634be106400
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 56.697479248s of 56.708225250s, submitted: 3
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:43:17.735498+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79323136 unmapped: 409600 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:43:18.735641+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79323136 unmapped: 409600 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:43:19.735796+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79323136 unmapped: 409600 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634be106800
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:43:20.735965+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79339520 unmapped: 393216 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:43:21.736106+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79339520 unmapped: 393216 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 960907 data_alloc: 218103808 data_used: 151552
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:43:22.736267+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79339520 unmapped: 393216 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:43:23.736564+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79339520 unmapped: 393216 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:43:24.736773+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79347712 unmapped: 385024 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:43:25.736911+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79347712 unmapped: 385024 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:43:26.737092+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79347712 unmapped: 385024 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 960316 data_alloc: 218103808 data_used: 151552
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:43:27.737296+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79347712 unmapped: 385024 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:43:28.737447+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79347712 unmapped: 385024 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:43:29.737629+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79347712 unmapped: 385024 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:43:30.737785+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79347712 unmapped: 385024 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:43:31.737909+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79347712 unmapped: 385024 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 960316 data_alloc: 218103808 data_used: 151552
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:43:32.738057+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79347712 unmapped: 385024 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:43:33.738195+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79347712 unmapped: 385024 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:43:34.738335+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79347712 unmapped: 385024 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:43:35.738540+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79347712 unmapped: 385024 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:43:36.738697+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79364096 unmapped: 368640 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 960316 data_alloc: 218103808 data_used: 151552
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:43:37.739533+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79364096 unmapped: 368640 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:43:38.739665+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79364096 unmapped: 368640 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:43:39.740085+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79364096 unmapped: 368640 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:43:40.742047+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79364096 unmapped: 368640 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:43:41.742262+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79364096 unmapped: 368640 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 960316 data_alloc: 218103808 data_used: 151552
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:43:42.744551+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79364096 unmapped: 368640 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:43:43.746073+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79364096 unmapped: 368640 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:43:44.746285+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79364096 unmapped: 368640 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:43:45.746458+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 ms_handle_reset con 0x5634be106800 session 0x5634bfe22d20
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79364096 unmapped: 368640 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:43:46.746592+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79364096 unmapped: 368640 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 960316 data_alloc: 218103808 data_used: 151552
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:43:47.746779+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79364096 unmapped: 368640 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:43:48.746908+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79364096 unmapped: 368640 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:43:49.747125+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79364096 unmapped: 368640 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:43:50.747474+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79364096 unmapped: 368640 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:43:51.747632+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79364096 unmapped: 368640 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 960316 data_alloc: 218103808 data_used: 151552
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:43:52.747818+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79372288 unmapped: 360448 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:43:53.748225+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79372288 unmapped: 360448 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:43:54.748416+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79372288 unmapped: 360448 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:43:55.748596+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79372288 unmapped: 360448 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:43:56.748751+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79388672 unmapped: 344064 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 960316 data_alloc: 218103808 data_used: 151552
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:43:57.749038+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79388672 unmapped: 344064 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:43:58.749442+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79388672 unmapped: 344064 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:43:59.749596+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79388672 unmapped: 344064 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:44:00.749760+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79388672 unmapped: 344064 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:44:01.749876+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79388672 unmapped: 344064 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634be107400
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 960316 data_alloc: 218103808 data_used: 151552
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:44:02.750109+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79388672 unmapped: 344064 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:44:03.750309+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79388672 unmapped: 344064 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:44:04.750449+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79388672 unmapped: 344064 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 48.040000916s of 48.052692413s, submitted: 3
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:44:05.750594+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79388672 unmapped: 344064 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:44:06.750741+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79388672 unmapped: 344064 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 959725 data_alloc: 218103808 data_used: 151552
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:44:07.750947+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79388672 unmapped: 344064 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:44:08.751139+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79405056 unmapped: 327680 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:44:09.751242+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79405056 unmapped: 327680 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:44:10.751380+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79405056 unmapped: 327680 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:44:11.751517+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79405056 unmapped: 327680 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 959134 data_alloc: 218103808 data_used: 151552
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:44:12.751638+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79405056 unmapped: 327680 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:44:13.751784+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79405056 unmapped: 327680 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:44:14.751969+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79405056 unmapped: 327680 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:44:15.752189+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79421440 unmapped: 311296 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:44:16.752366+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79421440 unmapped: 311296 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 959134 data_alloc: 218103808 data_used: 151552
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:44:17.752639+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 ms_handle_reset con 0x5634be106400 session 0x5634bef0cb40
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79421440 unmapped: 311296 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:44:18.752818+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79421440 unmapped: 311296 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:44:19.752986+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79421440 unmapped: 311296 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:44:20.753122+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79421440 unmapped: 311296 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:44:21.753292+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79421440 unmapped: 311296 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:44:22.753526+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 959134 data_alloc: 218103808 data_used: 151552
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79421440 unmapped: 311296 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:44:23.753713+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79421440 unmapped: 311296 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:44:24.753899+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79421440 unmapped: 311296 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:44:25.754016+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79421440 unmapped: 311296 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:44:26.754177+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79421440 unmapped: 311296 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:44:27.754359+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 959134 data_alloc: 218103808 data_used: 151552
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79421440 unmapped: 311296 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:44:28.754561+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79421440 unmapped: 311296 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:44:29.754720+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79421440 unmapped: 311296 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:44:30.754853+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79421440 unmapped: 311296 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:44:31.754957+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79421440 unmapped: 311296 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:44:32.755081+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 959134 data_alloc: 218103808 data_used: 151552
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79421440 unmapped: 311296 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:44:33.755217+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79421440 unmapped: 311296 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:44:34.755350+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79421440 unmapped: 311296 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:44:35.755500+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79437824 unmapped: 294912 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:44:36.755667+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79437824 unmapped: 294912 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:44:37.755898+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 959134 data_alloc: 218103808 data_used: 151552
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79437824 unmapped: 294912 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:44:38.756025+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79437824 unmapped: 294912 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:44:39.756177+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79437824 unmapped: 294912 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:44:40.756327+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79437824 unmapped: 294912 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:44:41.756468+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79446016 unmapped: 286720 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:44:42.756608+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 959134 data_alloc: 218103808 data_used: 151552
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79446016 unmapped: 286720 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:44:43.756741+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79446016 unmapped: 286720 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:44:44.756862+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79454208 unmapped: 278528 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:44:45.757025+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79454208 unmapped: 278528 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:44:46.757234+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79454208 unmapped: 278528 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:44:47.757451+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 959134 data_alloc: 218103808 data_used: 151552
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79454208 unmapped: 278528 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:44:48.757646+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79454208 unmapped: 278528 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:44:49.757799+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79454208 unmapped: 278528 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:44:50.757911+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79454208 unmapped: 278528 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:44:51.758023+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79454208 unmapped: 278528 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:44:52.758141+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 959134 data_alloc: 218103808 data_used: 151552
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79454208 unmapped: 278528 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:44:53.758267+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79454208 unmapped: 278528 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:44:54.758390+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79454208 unmapped: 278528 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:44:55.758582+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79470592 unmapped: 262144 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:44:56.758705+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79470592 unmapped: 262144 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:44:57.758898+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 959134 data_alloc: 218103808 data_used: 151552
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79470592 unmapped: 262144 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:44:58.759024+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79470592 unmapped: 262144 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:44:59.759173+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79470592 unmapped: 262144 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:45:00.759313+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79470592 unmapped: 262144 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:45:01.759450+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 56.977294922s of 56.983356476s, submitted: 2
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79470592 unmapped: 262144 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:45:02.759580+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 960646 data_alloc: 218103808 data_used: 151552
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79470592 unmapped: 262144 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:45:03.759708+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79470592 unmapped: 262144 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:45:04.759899+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79470592 unmapped: 262144 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:45:05.760067+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79470592 unmapped: 262144 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:45:06.760267+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79470592 unmapped: 262144 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:45:07.760471+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 960646 data_alloc: 218103808 data_used: 151552
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79470592 unmapped: 262144 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:45:08.760605+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79470592 unmapped: 262144 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:45:09.760780+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79470592 unmapped: 262144 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:45:10.760944+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79470592 unmapped: 262144 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:45:11.761058+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79470592 unmapped: 262144 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:45:12.761205+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 960646 data_alloc: 218103808 data_used: 151552
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79470592 unmapped: 262144 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:45:13.761352+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79470592 unmapped: 262144 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:45:14.761472+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79470592 unmapped: 262144 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:45:15.761619+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79486976 unmapped: 245760 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:45:16.761760+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79486976 unmapped: 245760 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:45:17.761908+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 960646 data_alloc: 218103808 data_used: 151552
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79486976 unmapped: 245760 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:45:18.762243+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79486976 unmapped: 245760 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:45:19.762413+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79486976 unmapped: 245760 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:45:20.762533+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79486976 unmapped: 245760 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:45:21.762671+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79486976 unmapped: 245760 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:45:22.762829+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 960646 data_alloc: 218103808 data_used: 151552
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79486976 unmapped: 245760 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:45:23.762979+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79486976 unmapped: 245760 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:45:24.763128+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79486976 unmapped: 245760 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:45:25.763246+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 ms_handle_reset con 0x5634be107400 session 0x5634c029ba40
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79486976 unmapped: 245760 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:45:26.763357+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79486976 unmapped: 245760 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:45:27.763524+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 960646 data_alloc: 218103808 data_used: 151552
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79486976 unmapped: 245760 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:45:28.763707+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:45:29.763864+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79486976 unmapped: 245760 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79486976 unmapped: 245760 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:45:30.784740+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79486976 unmapped: 245760 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:45:31.784855+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79486976 unmapped: 245760 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:45:32.784985+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 960646 data_alloc: 218103808 data_used: 151552
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79486976 unmapped: 245760 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:45:33.785118+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79486976 unmapped: 245760 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:45:34.785235+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79486976 unmapped: 245760 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:45:35.785336+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:45:36.785482+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79503360 unmapped: 229376 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:45:37.785712+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79503360 unmapped: 229376 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 960646 data_alloc: 218103808 data_used: 151552
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:45:38.785869+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79503360 unmapped: 229376 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634bf0fc000
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 36.226074219s of 36.229869843s, submitted: 1
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:45:39.786055+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79519744 unmapped: 212992 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:45:40.786199+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79519744 unmapped: 212992 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:45:41.786460+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79519744 unmapped: 212992 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634bf0fc400
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:45:42.786607+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79560704 unmapped: 172032 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634bf727000
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 963670 data_alloc: 218103808 data_used: 151552
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:45:43.786718+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79560704 unmapped: 172032 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:45:44.786836+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79560704 unmapped: 172032 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:45:45.787016+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79560704 unmapped: 172032 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 ms_handle_reset con 0x5634bf0fc000 session 0x5634bd20be00
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:45:46.787157+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79560704 unmapped: 172032 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:45:47.787299+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79560704 unmapped: 172032 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 963670 data_alloc: 218103808 data_used: 151552
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:45:48.787439+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79560704 unmapped: 172032 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.313743591s of 10.320786476s, submitted: 2
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:45:49.787562+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79568896 unmapped: 163840 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:45:50.787679+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79568896 unmapped: 163840 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:45:51.787841+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79568896 unmapped: 163840 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:45:52.787976+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79568896 unmapped: 163840 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 963079 data_alloc: 218103808 data_used: 151552
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:45:53.788102+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79568896 unmapped: 163840 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:45:54.788237+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79568896 unmapped: 163840 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:45:55.788376+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79568896 unmapped: 163840 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 ms_handle_reset con 0x5634bf727000 session 0x5634c0304f00
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:45:56.788547+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79585280 unmapped: 147456 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:45:57.788700+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79585280 unmapped: 147456 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 963079 data_alloc: 218103808 data_used: 151552
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:45:58.788811+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79585280 unmapped: 147456 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:45:59.788953+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79585280 unmapped: 147456 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.823747635s of 10.826331139s, submitted: 1
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:46:00.789060+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79585280 unmapped: 147456 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:46:01.789172+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79585280 unmapped: 147456 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:46:02.789273+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79585280 unmapped: 147456 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634be106400
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 964591 data_alloc: 218103808 data_used: 151552
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Cumulative writes: 6909 writes, 27K keys, 6909 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
                                           Cumulative WAL: 6909 writes, 1355 syncs, 5.10 writes per sync, written: 0.02 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 485 writes, 766 keys, 485 commit groups, 1.0 writes per commit group, ingest: 0.25 MB, 0.00 MB/s
                                           Interval WAL: 485 writes, 231 syncs, 2.10 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5634bb9db350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 2.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5634bb9db350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 2.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5634bb9db350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 2.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5634bb9db350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 2.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.6      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.6      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.6      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5634bb9db350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 2.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5634bb9db350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 2.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5634bb9db350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 2.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5634bb9da9b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5634bb9da9b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.6      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.6      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.6      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5634bb9da9b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5634bb9db350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 2.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5634bb9db350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 2.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:46:03.789393+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79585280 unmapped: 147456 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:46:04.789520+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79585280 unmapped: 147456 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:46:05.789646+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79585280 unmapped: 147456 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:46:06.789807+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79585280 unmapped: 147456 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:46:07.789936+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79585280 unmapped: 147456 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 964591 data_alloc: 218103808 data_used: 151552
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:46:08.790055+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79585280 unmapped: 147456 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:46:09.790148+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79585280 unmapped: 147456 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.365579605s of 10.368579865s, submitted: 1
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:46:10.790262+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79585280 unmapped: 147456 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:46:11.790426+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79585280 unmapped: 147456 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:46:12.790574+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79585280 unmapped: 147456 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 964000 data_alloc: 218103808 data_used: 151552
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:46:13.790686+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79585280 unmapped: 147456 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634be106800
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:46:14.790792+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79585280 unmapped: 147456 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:46:15.790898+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79585280 unmapped: 147456 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 ms_handle_reset con 0x5634bf0fc400 session 0x5634bd030780
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:46:16.791015+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79609856 unmapped: 122880 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:46:17.791170+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79609856 unmapped: 122880 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 964921 data_alloc: 218103808 data_used: 151552
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:46:18.791332+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79609856 unmapped: 122880 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:46:19.791694+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79609856 unmapped: 122880 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:46:20.791815+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79609856 unmapped: 122880 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:46:21.791996+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79609856 unmapped: 122880 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:46:22.792142+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79609856 unmapped: 122880 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 964921 data_alloc: 218103808 data_used: 151552
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:46:23.792345+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79609856 unmapped: 122880 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:46:24.792715+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79609856 unmapped: 122880 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:46:25.793621+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79609856 unmapped: 122880 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:46:26.793770+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79609856 unmapped: 122880 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:46:27.794103+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79609856 unmapped: 122880 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 964921 data_alloc: 218103808 data_used: 151552
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:46:28.794621+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79609856 unmapped: 122880 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:46:29.795583+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79609856 unmapped: 122880 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:46:30.795712+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79609856 unmapped: 122880 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:46:31.795860+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79609856 unmapped: 122880 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:46:32.795977+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79609856 unmapped: 122880 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 964921 data_alloc: 218103808 data_used: 151552
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:46:33.796106+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79609856 unmapped: 122880 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 22.830619812s of 23.935253143s, submitted: 3
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:46:34.796439+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79609856 unmapped: 122880 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:46:35.796557+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79609856 unmapped: 122880 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:46:36.796714+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79626240 unmapped: 106496 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:46:37.796905+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79626240 unmapped: 106496 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 966433 data_alloc: 218103808 data_used: 151552
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:46:38.797037+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79626240 unmapped: 106496 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:46:39.797218+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79626240 unmapped: 106496 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:46:40.797550+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79626240 unmapped: 106496 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:46:41.797753+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79626240 unmapped: 106496 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:46:42.797916+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79626240 unmapped: 106496 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 965842 data_alloc: 218103808 data_used: 151552
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:46:43.798116+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79626240 unmapped: 106496 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:46:44.798326+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79626240 unmapped: 106496 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:46:45.798508+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79626240 unmapped: 106496 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:46:46.798692+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79626240 unmapped: 106496 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:46:47.798923+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79626240 unmapped: 106496 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 965842 data_alloc: 218103808 data_used: 151552
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:46:48.799087+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79626240 unmapped: 106496 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:46:49.799461+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79626240 unmapped: 106496 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:46:50.799623+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79626240 unmapped: 106496 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:46:51.799743+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79626240 unmapped: 106496 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:46:52.799891+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79626240 unmapped: 106496 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 965842 data_alloc: 218103808 data_used: 151552
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:46:53.800063+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79626240 unmapped: 106496 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:46:54.800303+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79626240 unmapped: 106496 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:46:55.800450+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79626240 unmapped: 106496 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:46:56.800599+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79642624 unmapped: 90112 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:46:57.800758+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79642624 unmapped: 90112 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 965842 data_alloc: 218103808 data_used: 151552
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:46:58.800955+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79642624 unmapped: 90112 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:46:59.801076+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79642624 unmapped: 90112 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:47:00.801237+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79642624 unmapped: 90112 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:47:01.801330+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79642624 unmapped: 90112 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:47:02.801454+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79642624 unmapped: 90112 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 965842 data_alloc: 218103808 data_used: 151552
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:47:03.801580+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79642624 unmapped: 90112 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:47:04.801712+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79642624 unmapped: 90112 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:47:05.801809+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79642624 unmapped: 90112 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread fragmentation_score=0.000026 took=0.000038s
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:47:06.801945+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79642624 unmapped: 90112 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:47:07.802092+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79642624 unmapped: 90112 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 965842 data_alloc: 218103808 data_used: 151552
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:47:08.802252+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79642624 unmapped: 90112 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:47:09.802385+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79642624 unmapped: 90112 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:47:10.802512+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79642624 unmapped: 90112 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:47:11.802665+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79642624 unmapped: 90112 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:47:12.802820+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79642624 unmapped: 90112 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 965842 data_alloc: 218103808 data_used: 151552
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:47:13.802950+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79642624 unmapped: 90112 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:47:14.803073+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79642624 unmapped: 90112 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:47:15.803225+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79642624 unmapped: 90112 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:47:16.803393+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79659008 unmapped: 73728 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:47:17.803585+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79659008 unmapped: 73728 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 965842 data_alloc: 218103808 data_used: 151552
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:47:18.803729+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79659008 unmapped: 73728 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:47:19.803896+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79659008 unmapped: 73728 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:47:20.804026+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79659008 unmapped: 73728 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 47.325916290s of 47.341407776s, submitted: 2
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:47:21.804150+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79683584 unmapped: 49152 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:47:22.804313+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [0,0,1])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79986688 unmapped: 1843200 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 965842 data_alloc: 218103808 data_used: 151552
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:47:23.804551+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80158720 unmapped: 1671168 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:47:24.825851+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80158720 unmapped: 1671168 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:47:25.826918+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80166912 unmapped: 1662976 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:47:26.827865+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80166912 unmapped: 1662976 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:47:27.828557+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80166912 unmapped: 1662976 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 965842 data_alloc: 218103808 data_used: 151552
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:47:28.829220+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80166912 unmapped: 1662976 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:47:29.829595+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80166912 unmapped: 1662976 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:47:30.829826+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80166912 unmapped: 1662976 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:47:31.830003+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80166912 unmapped: 1662976 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:47:32.830193+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80166912 unmapped: 1662976 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 965842 data_alloc: 218103808 data_used: 151552
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:47:33.830511+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80175104 unmapped: 1654784 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:47:34.830705+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80175104 unmapped: 1654784 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:47:35.831384+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80175104 unmapped: 1654784 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:47:36.831928+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80183296 unmapped: 1646592 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:47:37.832273+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80183296 unmapped: 1646592 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 965842 data_alloc: 218103808 data_used: 151552
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:47:38.832522+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80183296 unmapped: 1646592 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:47:39.832991+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80183296 unmapped: 1646592 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:47:40.833470+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80183296 unmapped: 1646592 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:47:41.833833+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80183296 unmapped: 1646592 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:47:42.834108+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80183296 unmapped: 1646592 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 965842 data_alloc: 218103808 data_used: 151552
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:47:43.834251+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80191488 unmapped: 1638400 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:47:44.834525+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80191488 unmapped: 1638400 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:47:45.834686+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80191488 unmapped: 1638400 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:47:46.834984+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80191488 unmapped: 1638400 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:47:47.835247+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80191488 unmapped: 1638400 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 965842 data_alloc: 218103808 data_used: 151552
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:47:48.835423+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80191488 unmapped: 1638400 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:47:49.835646+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80199680 unmapped: 1630208 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:47:50.835851+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80199680 unmapped: 1630208 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:47:51.836043+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80199680 unmapped: 1630208 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:47:52.836225+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80207872 unmapped: 1622016 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 965842 data_alloc: 218103808 data_used: 151552
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:47:53.836386+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80207872 unmapped: 1622016 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:47:54.836556+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 32.360378265s of 33.329280853s, submitted: 257
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80207872 unmapped: 1622016 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:47:55.836973+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80216064 unmapped: 1613824 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:47:56.837435+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80216064 unmapped: 1613824 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:47:57.837820+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80216064 unmapped: 1613824 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 965251 data_alloc: 218103808 data_used: 151552
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:47:58.838172+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80216064 unmapped: 1613824 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:47:59.838502+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80216064 unmapped: 1613824 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:48:00.838785+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80216064 unmapped: 1613824 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:48:01.839051+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80232448 unmapped: 1597440 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:48:02.839333+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80232448 unmapped: 1597440 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 965251 data_alloc: 218103808 data_used: 151552
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:48:03.839572+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80232448 unmapped: 1597440 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:48:04.839736+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80232448 unmapped: 1597440 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:48:05.839984+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80232448 unmapped: 1597440 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:48:06.840214+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80232448 unmapped: 1597440 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:48:07.840470+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80232448 unmapped: 1597440 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 965251 data_alloc: 218103808 data_used: 151552
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 ms_handle_reset con 0x5634be106800 session 0x5634c0305c20
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:48:08.840669+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80232448 unmapped: 1597440 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:48:09.840822+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80232448 unmapped: 1597440 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:48:10.841230+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80232448 unmapped: 1597440 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:48:11.841421+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80232448 unmapped: 1597440 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:48:12.841599+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80232448 unmapped: 1597440 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 965251 data_alloc: 218103808 data_used: 151552
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:48:13.841834+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80232448 unmapped: 1597440 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:48:14.841998+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80232448 unmapped: 1597440 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:48:15.842214+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80232448 unmapped: 1597440 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:48:16.842447+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80232448 unmapped: 1597440 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:48:17.842632+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80232448 unmapped: 1597440 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 965251 data_alloc: 218103808 data_used: 151552
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:48:18.842786+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80232448 unmapped: 1597440 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:48:19.842941+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80232448 unmapped: 1597440 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:48:20.843129+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80248832 unmapped: 1581056 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:48:21.843281+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80248832 unmapped: 1581056 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:48:22.843426+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80248832 unmapped: 1581056 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 965251 data_alloc: 218103808 data_used: 151552
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:48:23.843631+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80248832 unmapped: 1581056 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:48:24.843844+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80248832 unmapped: 1581056 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634be107400
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 30.545049667s of 30.567733765s, submitted: 1
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:48:25.844061+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80265216 unmapped: 1564672 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:48:26.844281+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80265216 unmapped: 1564672 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:48:27.844396+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80265216 unmapped: 1564672 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 968275 data_alloc: 218103808 data_used: 151552
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:48:28.844568+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80265216 unmapped: 1564672 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:48:29.844729+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80265216 unmapped: 1564672 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:48:30.844858+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80265216 unmapped: 1564672 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:48:31.844999+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80265216 unmapped: 1564672 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:48:32.845147+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80265216 unmapped: 1564672 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 969787 data_alloc: 218103808 data_used: 151552
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:48:33.845289+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80265216 unmapped: 1564672 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:48:34.845468+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80265216 unmapped: 1564672 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:48:35.845614+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80265216 unmapped: 1564672 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:48:36.845752+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80265216 unmapped: 1564672 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.074189186s of 12.084068298s, submitted: 3
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:48:37.845919+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80265216 unmapped: 1564672 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 969196 data_alloc: 218103808 data_used: 151552
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:48:38.846046+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80273408 unmapped: 1556480 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:48:39.846151+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80273408 unmapped: 1556480 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:48:40.846283+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80289792 unmapped: 1540096 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:48:41.846436+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80289792 unmapped: 1540096 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:48:42.846556+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80289792 unmapped: 1540096 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 969196 data_alloc: 218103808 data_used: 151552
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:48:43.846686+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80289792 unmapped: 1540096 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:48:44.846834+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80289792 unmapped: 1540096 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:48:45.847005+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80297984 unmapped: 1531904 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:48:46.847142+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80297984 unmapped: 1531904 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:48:47.847501+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80297984 unmapped: 1531904 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 969196 data_alloc: 218103808 data_used: 151552
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:48:48.847897+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80306176 unmapped: 1523712 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:48:49.848133+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80314368 unmapped: 1515520 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:48:50.848295+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80314368 unmapped: 1515520 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:48:51.848477+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80314368 unmapped: 1515520 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:48:52.848604+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80314368 unmapped: 1515520 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 969196 data_alloc: 218103808 data_used: 151552
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:48:53.848758+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80314368 unmapped: 1515520 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:48:54.848928+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80314368 unmapped: 1515520 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:48:55.849142+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80314368 unmapped: 1515520 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:48:56.849305+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80314368 unmapped: 1515520 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:48:57.849471+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80314368 unmapped: 1515520 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 969196 data_alloc: 218103808 data_used: 151552
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:48:58.849606+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80314368 unmapped: 1515520 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:48:59.849971+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80314368 unmapped: 1515520 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:49:00.850604+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80314368 unmapped: 1515520 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:49:01.851208+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80314368 unmapped: 1515520 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:49:02.851650+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80314368 unmapped: 1515520 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 969196 data_alloc: 218103808 data_used: 151552
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:49:03.852087+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80314368 unmapped: 1515520 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 27.175689697s of 27.178615570s, submitted: 1
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:49:04.852455+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80314368 unmapped: 1515520 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:49:05.852748+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80314368 unmapped: 1515520 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:49:06.852993+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80314368 unmapped: 1515520 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:49:07.853300+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80314368 unmapped: 1515520 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:49:08.853513+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 970708 data_alloc: 218103808 data_used: 151552
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80314368 unmapped: 1515520 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:49:09.853726+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80322560 unmapped: 1507328 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:49:10.853866+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80322560 unmapped: 1507328 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:49:11.854048+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80322560 unmapped: 1507328 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:49:12.854223+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80322560 unmapped: 1507328 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:49:13.854382+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 969526 data_alloc: 218103808 data_used: 151552
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80322560 unmapped: 1507328 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:49:14.854565+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80322560 unmapped: 1507328 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:49:15.854704+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80322560 unmapped: 1507328 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:49:16.854930+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80322560 unmapped: 1507328 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:49:17.855195+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 ms_handle_reset con 0x5634be106400 session 0x5634c029b0e0
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80322560 unmapped: 1507328 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:49:18.855384+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 969526 data_alloc: 218103808 data_used: 151552
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80322560 unmapped: 1507328 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:49:19.855883+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80322560 unmapped: 1507328 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:49:20.856304+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80322560 unmapped: 1507328 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:49:21.856662+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80322560 unmapped: 1507328 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:49:22.856878+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80322560 unmapped: 1507328 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:49:23.857053+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 969526 data_alloc: 218103808 data_used: 151552
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80322560 unmapped: 1507328 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:49:24.857337+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:49:25.857661+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80322560 unmapped: 1507328 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:49:26.857913+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80322560 unmapped: 1507328 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:49:27.858166+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80322560 unmapped: 1507328 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 969526 data_alloc: 218103808 data_used: 151552
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:49:29.570987+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80322560 unmapped: 1507328 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:49:30.571220+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80322560 unmapped: 1507328 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:49:31.571502+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80322560 unmapped: 1507328 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:49:32.572674+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80322560 unmapped: 1507328 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:49:33.572833+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80322560 unmapped: 1507328 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 969526 data_alloc: 218103808 data_used: 151552
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:49:34.572952+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80322560 unmapped: 1507328 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634bee53000
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 30.582208633s of 30.591884613s, submitted: 3
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:49:35.573085+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80322560 unmapped: 1507328 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:49:36.573253+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80322560 unmapped: 1507328 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:49:37.573363+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80322560 unmapped: 1507328 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:49:38.573563+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80322560 unmapped: 1507328 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 971038 data_alloc: 218103808 data_used: 151552
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:49:39.573693+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80322560 unmapped: 1507328 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:49:40.573814+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80322560 unmapped: 1507328 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:49:41.573959+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80338944 unmapped: 1490944 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:49:42.574079+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80338944 unmapped: 1490944 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:49:43.574232+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80338944 unmapped: 1490944 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 970447 data_alloc: 218103808 data_used: 151552
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:49:44.574350+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80338944 unmapped: 1490944 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:49:45.574442+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80338944 unmapped: 1490944 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:49:46.574575+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80338944 unmapped: 1490944 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:49:47.574864+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80338944 unmapped: 1490944 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:49:48.575099+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80338944 unmapped: 1490944 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 970447 data_alloc: 218103808 data_used: 151552
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:49:49.575250+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80338944 unmapped: 1490944 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:49:50.575553+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80338944 unmapped: 1490944 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:49:51.575734+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80338944 unmapped: 1490944 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:49:52.575904+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80338944 unmapped: 1490944 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:49:53.576039+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80338944 unmapped: 1490944 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 970447 data_alloc: 218103808 data_used: 151552
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:49:54.576154+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80338944 unmapped: 1490944 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:49:55.576332+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80338944 unmapped: 1490944 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:49:56.576507+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80338944 unmapped: 1490944 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:49:57.576626+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80338944 unmapped: 1490944 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:49:58.576802+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80338944 unmapped: 1490944 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 970447 data_alloc: 218103808 data_used: 151552
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:49:59.576933+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80338944 unmapped: 1490944 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:50:00.577125+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80338944 unmapped: 1490944 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:50:01.577251+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80355328 unmapped: 1474560 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:50:02.577473+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80355328 unmapped: 1474560 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:50:03.577631+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80355328 unmapped: 1474560 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 970447 data_alloc: 218103808 data_used: 151552
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:50:04.577770+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80355328 unmapped: 1474560 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:50:05.577916+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80355328 unmapped: 1474560 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:50:06.578107+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80355328 unmapped: 1474560 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:50:07.578237+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80355328 unmapped: 1474560 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:50:08.578369+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80355328 unmapped: 1474560 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 970447 data_alloc: 218103808 data_used: 151552
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:50:09.578498+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80355328 unmapped: 1474560 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:50:10.578633+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80355328 unmapped: 1474560 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:50:11.578759+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80355328 unmapped: 1474560 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:50:12.578936+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80355328 unmapped: 1474560 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:50:13.579117+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80355328 unmapped: 1474560 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 970447 data_alloc: 218103808 data_used: 151552
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:50:14.579262+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80355328 unmapped: 1474560 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 39.724964142s of 39.735435486s, submitted: 2
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:50:15.579390+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80355328 unmapped: 1474560 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:50:16.579534+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80355328 unmapped: 1474560 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:50:17.579651+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80355328 unmapped: 1474560 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:50:18.579869+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80355328 unmapped: 1474560 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 969856 data_alloc: 218103808 data_used: 151552
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:50:19.580061+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80355328 unmapped: 1474560 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:50:20.580204+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80355328 unmapped: 1474560 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:50:21.580388+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80371712 unmapped: 1458176 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:50:22.580555+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80371712 unmapped: 1458176 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:50:23.580702+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80371712 unmapped: 1458176 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 969856 data_alloc: 218103808 data_used: 151552
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:50:24.580824+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80371712 unmapped: 1458176 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:50:25.580944+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80379904 unmapped: 1449984 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:50:26.581099+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80379904 unmapped: 1449984 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:50:27.581229+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80379904 unmapped: 1449984 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:50:28.581461+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80379904 unmapped: 1449984 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 969856 data_alloc: 218103808 data_used: 151552
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:50:29.581583+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80379904 unmapped: 1449984 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:50:30.581700+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80379904 unmapped: 1449984 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:50:31.581888+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80379904 unmapped: 1449984 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:50:32.582053+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80379904 unmapped: 1449984 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:50:33.582207+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80379904 unmapped: 1449984 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 969856 data_alloc: 218103808 data_used: 151552
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:50:34.582366+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80379904 unmapped: 1449984 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:50:35.582507+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80379904 unmapped: 1449984 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:50:36.582658+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80379904 unmapped: 1449984 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:50:37.582796+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80379904 unmapped: 1449984 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:50:38.582965+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80379904 unmapped: 1449984 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 969856 data_alloc: 218103808 data_used: 151552
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:50:39.583084+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80379904 unmapped: 1449984 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:50:40.583243+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80379904 unmapped: 1449984 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:50:41.583502+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80396288 unmapped: 1433600 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:50:42.583642+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80396288 unmapped: 1433600 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:50:43.583836+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80396288 unmapped: 1433600 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 969856 data_alloc: 218103808 data_used: 151552
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:50:44.583959+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80396288 unmapped: 1433600 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:50:45.584117+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80396288 unmapped: 1433600 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:50:46.584244+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80396288 unmapped: 1433600 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:50:47.584391+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80396288 unmapped: 1433600 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:50:48.584791+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80396288 unmapped: 1433600 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 969856 data_alloc: 218103808 data_used: 151552
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:50:49.584913+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80396288 unmapped: 1433600 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:50:50.585087+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80396288 unmapped: 1433600 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:50:51.585233+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80396288 unmapped: 1433600 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:50:52.585478+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80396288 unmapped: 1433600 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:50:53.585650+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80396288 unmapped: 1433600 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 969856 data_alloc: 218103808 data_used: 151552
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:50:54.585790+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80396288 unmapped: 1433600 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:50:55.585951+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80396288 unmapped: 1433600 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:50:56.586142+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80396288 unmapped: 1433600 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:50:57.586301+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80396288 unmapped: 1433600 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:50:58.586474+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80396288 unmapped: 1433600 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 969856 data_alloc: 218103808 data_used: 151552
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:50:59.586591+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80396288 unmapped: 1433600 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:51:00.586719+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80396288 unmapped: 1433600 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:51:01.586960+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80412672 unmapped: 1417216 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:51:02.587131+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80412672 unmapped: 1417216 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:51:03.587287+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80412672 unmapped: 1417216 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 969856 data_alloc: 218103808 data_used: 151552
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:51:04.587485+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80412672 unmapped: 1417216 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:51:05.587705+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80412672 unmapped: 1417216 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:51:06.587830+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80412672 unmapped: 1417216 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:51:07.587984+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80412672 unmapped: 1417216 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:51:08.588165+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80412672 unmapped: 1417216 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 969856 data_alloc: 218103808 data_used: 151552
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:51:09.588448+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80412672 unmapped: 1417216 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:51:10.589303+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80412672 unmapped: 1417216 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:51:11.589818+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80412672 unmapped: 1417216 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:51:12.591571+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80412672 unmapped: 1417216 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:51:13.591843+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80412672 unmapped: 1417216 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 969856 data_alloc: 218103808 data_used: 151552
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:51:14.592494+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80412672 unmapped: 1417216 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:51:15.592636+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80412672 unmapped: 1417216 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:51:16.593130+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80412672 unmapped: 1417216 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:51:17.593318+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80412672 unmapped: 1417216 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:51:18.593509+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
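The monitor address v2:192.168.122.100:3300/0 is in Ceph's entity-address format: a transport prefix (v2 is the msgr2 protocol, whose default monitor port is 3300), then ip:port, then a /nonce suffix. A trivial parser for that shape, purely as an illustration of the format:

    # Split "v2:192.168.122.100:3300/0" into its components.
    addr = "v2:192.168.122.100:3300/0"
    proto, rest = addr.split(":", 1)
    hostport, nonce = rest.rsplit("/", 1)
    host, port = hostport.rsplit(":", 1)
    print(proto, host, port, nonce)  # v2 192.168.122.100 3300 0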
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80412672 unmapped: 1417216 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 969856 data_alloc: 218103808 data_used: 151552
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:51:19.593693+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80412672 unmapped: 1417216 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:51:20.593826+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80412672 unmapped: 1417216 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:51:21.593970+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80429056 unmapped: 1400832 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:51:22.594539+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80429056 unmapped: 1400832 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:51:23.594679+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80429056 unmapped: 1400832 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 969856 data_alloc: 218103808 data_used: 151552
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:51:24.594889+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80429056 unmapped: 1400832 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:51:25.595156+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80429056 unmapped: 1400832 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:51:26.595421+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80429056 unmapped: 1400832 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:51:27.595575+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80429056 unmapped: 1400832 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:51:28.595759+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80429056 unmapped: 1400832 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 969856 data_alloc: 218103808 data_used: 151552
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:51:29.595920+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 ms_handle_reset con 0x5634bee53000 session 0x5634c055af00
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80429056 unmapped: 1400832 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:51:30.596113+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80429056 unmapped: 1400832 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:51:31.596243+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80429056 unmapped: 1400832 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:51:32.596365+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80429056 unmapped: 1400832 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:51:33.596493+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80429056 unmapped: 1400832 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 969856 data_alloc: 218103808 data_used: 151552
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:51:34.596630+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80429056 unmapped: 1400832 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:51:35.596890+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80429056 unmapped: 1400832 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:51:36.597140+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80429056 unmapped: 1400832 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:51:37.597261+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80429056 unmapped: 1400832 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:51:38.597452+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80429056 unmapped: 1400832 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 969856 data_alloc: 218103808 data_used: 151552
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:51:39.597575+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80429056 unmapped: 1400832 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:51:40.597713+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80429056 unmapped: 1400832 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:51:41.597791+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80445440 unmapped: 1384448 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:51:42.597926+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80445440 unmapped: 1384448 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:51:43.598055+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 88.969955444s of 88.974121094s, submitted: 1
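The _kv_sync_thread utilization line above is the first sign of actual I/O in a long stretch of idle ticks: over an ~89 s window the RocksDB sync thread slept for all but a few milliseconds and committed a single transaction. Computing the idle fraction directly from the printed values:

    # bluestore _kv_sync_thread: idle 88.969955444s of 88.974121094s, submitted: 1
    idle, window, submitted = 88.969955444, 88.974121094, 1
    busy = window - idle
    print(f"idle {100 * idle / window:.4f}%  busy {busy * 1000:.1f} ms "
          f"for {submitted} commit")  # idle 99.9953%, ~4.2 ms busy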
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80445440 unmapped: 1384448 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 971368 data_alloc: 218103808 data_used: 151552
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:51:44.598162+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80445440 unmapped: 1384448 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:51:45.598361+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80445440 unmapped: 1384448 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:51:46.598504+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80445440 unmapped: 1384448 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:51:47.598608+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80445440 unmapped: 1384448 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:51:48.598800+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80445440 unmapped: 1384448 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 971368 data_alloc: 218103808 data_used: 151552
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:51:49.598871+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80445440 unmapped: 1384448 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:51:50.598991+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80445440 unmapped: 1384448 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:51:51.599106+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80445440 unmapped: 1384448 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:51:52.599245+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80445440 unmapped: 1384448 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:51:53.599394+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80445440 unmapped: 1384448 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:51:54.599578+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 971368 data_alloc: 218103808 data_used: 151552
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80445440 unmapped: 1384448 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:51:55.599693+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80445440 unmapped: 1384448 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:51:56.599825+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80445440 unmapped: 1384448 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:51:57.599950+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80445440 unmapped: 1384448 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:51:58.600126+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80445440 unmapped: 1384448 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:51:59.600193+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 971368 data_alloc: 218103808 data_used: 151552
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80445440 unmapped: 1384448 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:52:00.600332+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80461824 unmapped: 1368064 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:52:01.600453+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80461824 unmapped: 1368064 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:52:02.600563+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80461824 unmapped: 1368064 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:52:03.600673+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80461824 unmapped: 1368064 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:52:04.600769+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 971368 data_alloc: 218103808 data_used: 151552
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80461824 unmapped: 1368064 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:52:05.600897+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80461824 unmapped: 1368064 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:52:06.600973+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80461824 unmapped: 1368064 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:52:07.601096+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634be106400
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 ms_handle_reset con 0x5634be107400 session 0x5634c06b1680
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80486400 unmapped: 1343488 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:52:08.601255+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 150 handle_osd_map epochs [150,151], i have 150, src has [1,151]
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 24.761932373s of 24.764957428s, submitted: 1
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 81534976 unmapped: 294912 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:52:09.601367+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 975134 data_alloc: 218103808 data_used: 151552
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fc9e9000/0x0/0x4ffc00000, data 0x179901/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 151 handle_osd_map epochs [151,152], i have 151, src has [1,152]
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:52:10.601498+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 81616896 unmapped: 16998400 heap: 98615296 old mem: 2845415833 new mem: 2845415833
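Right as the new OSDMap epochs arrive, the tune_memory heap figure jumps from 81829888 to 98615296 bytes and unmapped rises to ~16 MiB: the allocator grabbed roughly one more 16 MiB region to absorb the map processing and has not yet returned it to the OS. The arithmetic, from the two tune_memory lines:

    # Heap growth across the OSDMap updates, from the tune_memory lines.
    heap_before, heap_after = 81829888, 98615296
    unmapped_after = 16998400
    print(f"heap grew {(heap_after - heap_before) / 2**20:.1f} MiB; "
          f"{unmapped_after / 2**20:.1f} MiB of it is already unmapped")
    # ~16.0 MiB grown, ~16.2 MiB unmapped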
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 152 heartbeat osd_stat(store_statfs(0x4fc1e4000/0x0/0x4ffc00000, data 0x97ba51/0xa36000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 152 handle_osd_map epochs [152,153], i have 152, src has [1,153]
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 153 ms_handle_reset con 0x5634be106400 session 0x5634bf7bed20
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:52:11.601627+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 81747968 unmapped: 16867328 heap: 98615296 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634be106800
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:52:12.601784+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 81780736 unmapped: 16834560 heap: 98615296 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _renew_subs
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 153 handle_osd_map epochs [154,154], i have 153, src has [1,154]
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 154 ms_handle_reset con 0x5634be106800 session 0x5634c06e63c0
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 154 handle_osd_map epochs [155,155], i have 154, src has [1,155]
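The handle_osd_map lines show the OSD catching up on cluster maps one increment at a time: each message advertises an epoch range, the OSD states which epoch it already has and applies the rest, and the subsequent heartbeats are stamped with the new epoch (151, 152, then 155) while store_statfs shrinks as data lands. A toy loop, purely illustrative and not Ceph code, that reproduces the epoch bookkeeping visible in these lines:

    # Model the epoch catch-up from the handle_osd_map lines above.
    have = 150
    for first, last in [(150, 151), (151, 152), (152, 153), (154, 154), (155, 155)]:
        # apply every advertised increment we do not have yet
        for epoch in range(max(have + 1, first), last + 1):
            have = epoch
    print("now at epoch", have)  # 155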
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:52:13.601907+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 81821696 unmapped: 16793600 heap: 98615296 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 155 heartbeat osd_stat(store_statfs(0x4fbd68000/0x0/0x4ffc00000, data 0xdf1c79/0xeb1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:52:14.602027+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 81821696 unmapped: 16793600 heap: 98615296 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1077664 data_alloc: 218103808 data_used: 151552
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 155 heartbeat osd_stat(store_statfs(0x4fbd68000/0x0/0x4ffc00000, data 0xdf1c79/0xeb1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:52:15.602169+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 81821696 unmapped: 16793600 heap: 98615296 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:52:16.602314+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 81821696 unmapped: 16793600 heap: 98615296 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:52:17.602475+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 81821696 unmapped: 16793600 heap: 98615296 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:52:18.602632+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 81821696 unmapped: 16793600 heap: 98615296 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:52:19.602755+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 81821696 unmapped: 16793600 heap: 98615296 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1077816 data_alloc: 218103808 data_used: 155648
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:52:20.602887+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 81821696 unmapped: 16793600 heap: 98615296 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 155 heartbeat osd_stat(store_statfs(0x4fbd68000/0x0/0x4ffc00000, data 0xdf1c79/0xeb1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:52:21.603035+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 81838080 unmapped: 16777216 heap: 98615296 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 155 heartbeat osd_stat(store_statfs(0x4fbd68000/0x0/0x4ffc00000, data 0xdf1c79/0xeb1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:52:22.603185+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 81838080 unmapped: 16777216 heap: 98615296 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:52:23.603312+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 81838080 unmapped: 16777216 heap: 98615296 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:52:24.603469+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 81838080 unmapped: 16777216 heap: 98615296 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1077816 data_alloc: 218103808 data_used: 155648
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634bf0fc400
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 16.009191513s of 16.191659927s, submitted: 44
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:52:25.603589+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 81838080 unmapped: 16777216 heap: 98615296 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 155 heartbeat osd_stat(store_statfs(0x4fbd68000/0x0/0x4ffc00000, data 0xdf1c79/0xeb1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:52:26.603732+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 81838080 unmapped: 16777216 heap: 98615296 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:52:27.603867+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 81838080 unmapped: 16777216 heap: 98615296 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:52:28.604008+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 81838080 unmapped: 16777216 heap: 98615296 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 155 heartbeat osd_stat(store_statfs(0x4fbd68000/0x0/0x4ffc00000, data 0xdf1c79/0xeb1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:52:29.604139+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 81838080 unmapped: 16777216 heap: 98615296 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1077816 data_alloc: 218103808 data_used: 155648
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:52:30.604254+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 81838080 unmapped: 16777216 heap: 98615296 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 155 heartbeat osd_stat(store_statfs(0x4fbd68000/0x0/0x4ffc00000, data 0xdf1c79/0xeb1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:52:31.604450+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 81838080 unmapped: 16777216 heap: 98615296 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:52:32.604597+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 81838080 unmapped: 16777216 heap: 98615296 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:52:33.604733+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 81838080 unmapped: 16777216 heap: 98615296 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:52:34.604890+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 81838080 unmapped: 16777216 heap: 98615296 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1077816 data_alloc: 218103808 data_used: 155648
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 155 heartbeat osd_stat(store_statfs(0x4fbd68000/0x0/0x4ffc00000, data 0xdf1c79/0xeb1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:52:35.605028+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 81838080 unmapped: 16777216 heap: 98615296 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:52:36.605162+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 81838080 unmapped: 16777216 heap: 98615296 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:52:37.605305+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 81838080 unmapped: 16777216 heap: 98615296 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:52:38.605625+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 81838080 unmapped: 16777216 heap: 98615296 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 155 heartbeat osd_stat(store_statfs(0x4fbd68000/0x0/0x4ffc00000, data 0xdf1c79/0xeb1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:52:39.605827+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 81838080 unmapped: 16777216 heap: 98615296 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1077816 data_alloc: 218103808 data_used: 155648
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:52:40.605960+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 81838080 unmapped: 16777216 heap: 98615296 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:52:41.606104+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 81838080 unmapped: 16777216 heap: 98615296 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:52:42.606247+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 81838080 unmapped: 16777216 heap: 98615296 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:52:43.606490+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 81838080 unmapped: 16777216 heap: 98615296 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:52:44.606620+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 81838080 unmapped: 16777216 heap: 98615296 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1077816 data_alloc: 218103808 data_used: 155648
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 155 heartbeat osd_stat(store_statfs(0x4fbd68000/0x0/0x4ffc00000, data 0xdf1c79/0xeb1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:52:45.606738+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 81838080 unmapped: 16777216 heap: 98615296 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:52:46.608450+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 81838080 unmapped: 16777216 heap: 98615296 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:52:47.608711+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 81838080 unmapped: 16777216 heap: 98615296 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:52:48.609258+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 81838080 unmapped: 16777216 heap: 98615296 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:52:49.610199+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1077816 data_alloc: 218103808 data_used: 155648
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 81838080 unmapped: 16777216 heap: 98615296 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:52:50.611051+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 155 heartbeat osd_stat(store_statfs(0x4fbd68000/0x0/0x4ffc00000, data 0xdf1c79/0xeb1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 81838080 unmapped: 16777216 heap: 98615296 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:52:51.611615+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 81838080 unmapped: 16777216 heap: 98615296 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:52:52.612251+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 81838080 unmapped: 16777216 heap: 98615296 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:52:53.612925+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 81838080 unmapped: 16777216 heap: 98615296 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:52:54.613498+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1077816 data_alloc: 218103808 data_used: 155648
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 81838080 unmapped: 16777216 heap: 98615296 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 155 heartbeat osd_stat(store_statfs(0x4fbd68000/0x0/0x4ffc00000, data 0xdf1c79/0xeb1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:52:55.613858+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 81838080 unmapped: 16777216 heap: 98615296 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:52:56.614160+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 81838080 unmapped: 16777216 heap: 98615296 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634bf727000
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 155 ms_handle_reset con 0x5634bf727000 session 0x5634c0304b40
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634bf539c00
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 155 ms_handle_reset con 0x5634bf539c00 session 0x5634bf53bc20
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634be106400
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 155 ms_handle_reset con 0x5634be106400 session 0x5634c06b2780
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:52:57.614356+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 155 heartbeat osd_stat(store_statfs(0x4fbd68000/0x0/0x4ffc00000, data 0xdf1c79/0xeb1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 81829888 unmapped: 16785408 heap: 98615296 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634be106800
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 155 ms_handle_reset con 0x5634be106800 session 0x5634bfe22000
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634be107400
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:52:58.614675+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 155 ms_handle_reset con 0x5634be107400 session 0x5634c06b0960
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 93536256 unmapped: 5079040 heap: 98615296 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634bf727000
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 155 ms_handle_reset con 0x5634bf727000 session 0x5634c06b01e0
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634bf034400
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:52:59.614889+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1108216 data_alloc: 234881024 data_used: 11628544
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 93528064 unmapped: 5087232 heap: 98615296 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 155 heartbeat osd_stat(store_statfs(0x4fbd68000/0x0/0x4ffc00000, data 0xdf1c79/0xeb1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _renew_subs
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 155 handle_osd_map epochs [156,156], i have 155, src has [1,156]
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 35.016056061s of 35.019523621s, submitted: 1
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:53:00.615117+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 93552640 unmapped: 5062656 heap: 98615296 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 156 handle_osd_map epochs [156,157], i have 156, src has [1,157]
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _renew_subs
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 156 handle_osd_map epochs [157,157], i have 157, src has [1,157]
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 157 ms_handle_reset con 0x5634bf034400 session 0x5634bfdc25a0
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634bf034400
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 157 ms_handle_reset con 0x5634bf034400 session 0x5634bfd5ed20
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634be106400
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 157 ms_handle_reset con 0x5634be106400 session 0x5634bf53ba40
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634be106800
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 157 ms_handle_reset con 0x5634be106800 session 0x5634bf08a3c0
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634be107400
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 157 ms_handle_reset con 0x5634be107400 session 0x5634c06cd860
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:53:01.615301+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 95297536 unmapped: 5488640 heap: 100786176 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:53:02.615627+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 95297536 unmapped: 5488640 heap: 100786176 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 157 heartbeat osd_stat(store_statfs(0x4fb8f3000/0x0/0x4ffc00000, data 0x1264eb8/0x1327000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:53:03.615806+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 95297536 unmapped: 5488640 heap: 100786176 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:53:04.615982+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1151181 data_alloc: 234881024 data_used: 11628544
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 95297536 unmapped: 5488640 heap: 100786176 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _renew_subs
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 157 handle_osd_map epochs [158,158], i have 157, src has [1,158]
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:53:05.616111+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 95313920 unmapped: 5472256 heap: 100786176 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fb8f3000/0x0/0x4ffc00000, data 0x1264eb8/0x1327000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:53:06.616460+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 95313920 unmapped: 5472256 heap: 100786176 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634bf727000
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bf727000 session 0x5634c06e6000
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:53:07.616630+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 95305728 unmapped: 5480448 heap: 100786176 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634bf727000
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634be106400
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:53:08.617006+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fb8f1000/0x0/0x4ffc00000, data 0x1266e8a/0x132a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 95313920 unmapped: 5472256 heap: 100786176 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:53:09.617175+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1172443 data_alloc: 234881024 data_used: 14356480
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 97861632 unmapped: 2924544 heap: 100786176 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:53:10.617463+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 99491840 unmapped: 1294336 heap: 100786176 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:53:11.617633+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 99491840 unmapped: 1294336 heap: 100786176 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fb8f1000/0x0/0x4ffc00000, data 0x1266e8a/0x132a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:53:12.617976+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 99491840 unmapped: 1294336 heap: 100786176 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:53:13.618262+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 99491840 unmapped: 1294336 heap: 100786176 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fb8f1000/0x0/0x4ffc00000, data 0x1266e8a/0x132a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:53:14.618441+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1184603 data_alloc: 234881024 data_used: 16195584
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fb8f1000/0x0/0x4ffc00000, data 0x1266e8a/0x132a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 99491840 unmapped: 1294336 heap: 100786176 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:53:15.618598+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 99491840 unmapped: 1294336 heap: 100786176 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:53:16.618849+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 99491840 unmapped: 1294336 heap: 100786176 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fb8f1000/0x0/0x4ffc00000, data 0x1266e8a/0x132a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:53:17.619022+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 99491840 unmapped: 1294336 heap: 100786176 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:53:18.619227+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 99491840 unmapped: 1294336 heap: 100786176 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:53:19.619379+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1184755 data_alloc: 234881024 data_used: 16199680
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 99491840 unmapped: 1294336 heap: 100786176 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fb8f1000/0x0/0x4ffc00000, data 0x1266e8a/0x132a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 20.278177261s of 20.407505035s, submitted: 45
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:53:20.619495+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 107520000 unmapped: 2703360 heap: 110223360 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:53:21.619648+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 108699648 unmapped: 1523712 heap: 110223360 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:53:22.619778+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 108699648 unmapped: 1523712 heap: 110223360 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:53:23.619907+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 108699648 unmapped: 1523712 heap: 110223360 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:53:24.619996+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1263495 data_alloc: 234881024 data_used: 18071552
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 108699648 unmapped: 1523712 heap: 110223360 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:53:25.620108+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 108699648 unmapped: 1523712 heap: 110223360 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f9fd3000/0x0/0x4ffc00000, data 0x19e5e8a/0x1aa9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x417f9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:53:26.620269+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 108699648 unmapped: 1523712 heap: 110223360 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:53:27.620428+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 107077632 unmapped: 3145728 heap: 110223360 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:53:28.620567+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 107077632 unmapped: 3145728 heap: 110223360 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:53:29.620735+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1256551 data_alloc: 234881024 data_used: 18071552
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 107085824 unmapped: 3137536 heap: 110223360 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f9fd0000/0x0/0x4ffc00000, data 0x19e8e8a/0x1aac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x417f9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:53:30.620833+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f9fd0000/0x0/0x4ffc00000, data 0x19e8e8a/0x1aac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x417f9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 107085824 unmapped: 3137536 heap: 110223360 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:53:31.620948+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 107126784 unmapped: 3096576 heap: 110223360 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:53:32.621109+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 107126784 unmapped: 3096576 heap: 110223360 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f9fd0000/0x0/0x4ffc00000, data 0x19e8e8a/0x1aac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x417f9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:53:33.621212+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 107126784 unmapped: 3096576 heap: 110223360 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:53:34.621389+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f9fd0000/0x0/0x4ffc00000, data 0x19e8e8a/0x1aac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x417f9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1257615 data_alloc: 234881024 data_used: 18145280
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 107126784 unmapped: 3096576 heap: 110223360 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 14.662779808s of 14.812747955s, submitted: 82
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:53:35.621586+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 107126784 unmapped: 3096576 heap: 110223360 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:53:36.621699+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 107126784 unmapped: 3096576 heap: 110223360 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:53:37.621812+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 107126784 unmapped: 3096576 heap: 110223360 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:53:38.621958+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 107126784 unmapped: 3096576 heap: 110223360 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:53:39.622103+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1257839 data_alloc: 234881024 data_used: 18145280
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 107126784 unmapped: 3096576 heap: 110223360 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f9fcf000/0x0/0x4ffc00000, data 0x19e9e8a/0x1aad000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x417f9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:53:40.622203+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 107126784 unmapped: 3096576 heap: 110223360 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:53:41.622333+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 107126784 unmapped: 3096576 heap: 110223360 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:53:42.622501+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 107134976 unmapped: 3088384 heap: 110223360 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:53:43.622652+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634bf038400
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bf038400 session 0x5634c06b2f00
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634bf032800
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bf032800 session 0x5634bf224d20
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634be0fb800
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634be0fb800 session 0x5634bfcb32c0
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 107118592 unmapped: 3104768 heap: 110223360 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634bfa26c00
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bfa26c00 session 0x5634c06b23c0
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634c0688000
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:53:44.622793+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634c0688000 session 0x5634bf53ab40
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1257991 data_alloc: 234881024 data_used: 18673664
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 108404736 unmapped: 1818624 heap: 110223360 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634be0fb800
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634be0fb800 session 0x5634be148780
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634bf032800
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.993793488s of 10.001093864s, submitted: 2
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bf032800 session 0x5634bcfceb40
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634bf038400
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bf038400 session 0x5634bcbf9c20
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634bfa26c00
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bfa26c00 session 0x5634bd20a780
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634c0688400
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:53:45.622904+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634c0688400 session 0x5634be1adc20
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634be0fb800
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634be0fb800 session 0x5634bfd5f0e0
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 108380160 unmapped: 8273920 heap: 116654080 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f977a000/0x0/0x4ffc00000, data 0x1e2ee8a/0x1ef2000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:53:46.623016+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 108380160 unmapped: 8273920 heap: 116654080 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:53:47.623110+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 108380160 unmapped: 8273920 heap: 116654080 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634bf032800
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bf032800 session 0x5634bf7c41e0
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:53:48.623250+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 108380160 unmapped: 8273920 heap: 116654080 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:53:49.623394+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634bf038400
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bf038400 session 0x5634bd4cfe00
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1298504 data_alloc: 234881024 data_used: 18673664
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 108363776 unmapped: 8290304 heap: 116654080 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634bfa26c00
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bfa26c00 session 0x5634bf7be1e0
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:53:50.623565+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634c0688800
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634c0688800 session 0x5634bf4421e0
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 107683840 unmapped: 8970240 heap: 116654080 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634be0fb800
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634bf032800
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:53:51.623716+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 107945984 unmapped: 8708096 heap: 116654080 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f9779000/0x0/0x4ffc00000, data 0x1e2ee99/0x1ef3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:53:52.623887+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 110411776 unmapped: 6242304 heap: 116654080 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f9779000/0x0/0x4ffc00000, data 0x1e2ee99/0x1ef3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:53:53.624378+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 110411776 unmapped: 6242304 heap: 116654080 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:53:54.624519+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1323629 data_alloc: 234881024 data_used: 21921792
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 110411776 unmapped: 6242304 heap: 116654080 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:53:55.624676+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 110411776 unmapped: 6242304 heap: 116654080 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:53:56.624859+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 110411776 unmapped: 6242304 heap: 116654080 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:53:57.625004+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 110411776 unmapped: 6242304 heap: 116654080 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:53:58.625255+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f9779000/0x0/0x4ffc00000, data 0x1e2ee99/0x1ef3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 110419968 unmapped: 6234112 heap: 116654080 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:53:59.625382+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1323629 data_alloc: 234881024 data_used: 21921792
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 110419968 unmapped: 6234112 heap: 116654080 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f9779000/0x0/0x4ffc00000, data 0x1e2ee99/0x1ef3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:54:00.630225+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 110452736 unmapped: 6201344 heap: 116654080 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:54:01.630455+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 110452736 unmapped: 6201344 heap: 116654080 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:54:02.630653+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 110452736 unmapped: 6201344 heap: 116654080 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:54:03.630775+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 18.279289246s of 18.403636932s, submitted: 31
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 113623040 unmapped: 4358144 heap: 117981184 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:54:04.630943+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1389915 data_alloc: 234881024 data_used: 22224896
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f8f55000/0x0/0x4ffc00000, data 0x264ae99/0x270f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [0,0,1,1])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 113950720 unmapped: 4030464 heap: 117981184 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:54:05.631125+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 114032640 unmapped: 3948544 heap: 117981184 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:54:06.631303+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f8f4a000/0x0/0x4ffc00000, data 0x2654e99/0x2719000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 114040832 unmapped: 3940352 heap: 117981184 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:54:07.631450+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 112484352 unmapped: 5496832 heap: 117981184 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:54:08.631621+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 112484352 unmapped: 5496832 heap: 117981184 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:54:09.631790+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1390001 data_alloc: 234881024 data_used: 22290432
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 112484352 unmapped: 5496832 heap: 117981184 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f8f52000/0x0/0x4ffc00000, data 0x2654e99/0x2719000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:54:10.631985+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 112484352 unmapped: 5496832 heap: 117981184 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:54:11.632138+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634be0fb800 session 0x5634c02cbe00
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bf032800 session 0x5634bf08a1e0
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634bf038400
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 110690304 unmapped: 7290880 heap: 117981184 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bf038400 session 0x5634bf18c780
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:54:12.632290+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 110452736 unmapped: 7528448 heap: 117981184 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:54:13.632474+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 110452736 unmapped: 7528448 heap: 117981184 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:54:14.632622+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1270311 data_alloc: 234881024 data_used: 18673664
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 110452736 unmapped: 7528448 heap: 117981184 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:54:15.632768+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 110452736 unmapped: 7528448 heap: 117981184 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f9bbf000/0x0/0x4ffc00000, data 0x19e9e8a/0x1aad000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:54:16.633166+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 110452736 unmapped: 7528448 heap: 117981184 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:54:17.633387+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.905948639s of 14.233164787s, submitted: 109
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bf727000 session 0x5634c06b3e00
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634be106400 session 0x5634bcfe1c20
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 110460928 unmapped: 7520256 heap: 117981184 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634be0fb800
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:54:18.633732+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634be0fb800 session 0x5634bfe2f680
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa7b1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 107241472 unmapped: 10739712 heap: 117981184 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:54:19.633884+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1136979 data_alloc: 234881024 data_used: 12165120
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 107241472 unmapped: 10739712 heap: 117981184 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:54:20.634106+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 106823680 unmapped: 11157504 heap: 117981184 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:54:21.634346+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 106823680 unmapped: 11157504 heap: 117981184 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:54:22.634507+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 106823680 unmapped: 11157504 heap: 117981184 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:54:23.634654+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 106823680 unmapped: 11157504 heap: 117981184 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa7b1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:54:24.634805+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1136979 data_alloc: 234881024 data_used: 12165120
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 106823680 unmapped: 11157504 heap: 117981184 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:54:25.634973+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa7b1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 106823680 unmapped: 11157504 heap: 117981184 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:54:26.635099+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 106823680 unmapped: 11157504 heap: 117981184 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:54:27.635224+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 106823680 unmapped: 11157504 heap: 117981184 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:54:28.635453+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 106823680 unmapped: 11157504 heap: 117981184 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:54:29.818128+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1136979 data_alloc: 234881024 data_used: 12165120
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 106823680 unmapped: 11157504 heap: 117981184 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:54:30.818270+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 106823680 unmapped: 11157504 heap: 117981184 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:54:31.818443+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa7b1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 106823680 unmapped: 11157504 heap: 117981184 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:54:32.818616+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 106823680 unmapped: 11157504 heap: 117981184 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:54:33.818732+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 106823680 unmapped: 11157504 heap: 117981184 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:54:34.818845+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1136979 data_alloc: 234881024 data_used: 12165120
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 106823680 unmapped: 11157504 heap: 117981184 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa7b1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:54:35.818983+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 106823680 unmapped: 11157504 heap: 117981184 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:54:36.819112+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa7b1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 106823680 unmapped: 11157504 heap: 117981184 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa7b1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:54:37.819246+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 106823680 unmapped: 11157504 heap: 117981184 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:54:38.819495+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 106823680 unmapped: 11157504 heap: 117981184 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:54:39.819621+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa7b1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1136979 data_alloc: 234881024 data_used: 12165120
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 106086400 unmapped: 11894784 heap: 117981184 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:54:40.819764+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 106086400 unmapped: 11894784 heap: 117981184 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:54:41.819890+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 106086400 unmapped: 11894784 heap: 117981184 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:54:42.819989+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa7b1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 106086400 unmapped: 11894784 heap: 117981184 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:54:43.820127+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 106086400 unmapped: 11894784 heap: 117981184 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:54:44.820241+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1136979 data_alloc: 234881024 data_used: 12165120
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 106086400 unmapped: 11894784 heap: 117981184 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:54:45.820363+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa7b1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 106086400 unmapped: 11894784 heap: 117981184 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:54:46.820515+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 106086400 unmapped: 11894784 heap: 117981184 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:54:47.820694+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 106086400 unmapped: 11894784 heap: 117981184 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:54:48.820900+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 106086400 unmapped: 11894784 heap: 117981184 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:54:49.821038+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1136979 data_alloc: 234881024 data_used: 12165120
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 106086400 unmapped: 11894784 heap: 117981184 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa7b1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:54:50.821151+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 106086400 unmapped: 11894784 heap: 117981184 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:54:51.821321+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634bf032800
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bf032800 session 0x5634bf443860
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634bf038400
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bf038400 session 0x5634bf443a40
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634bf727000
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bf727000 session 0x5634bf442d20
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634bfa26c00
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bfa26c00 session 0x5634bd462f00
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634bfa26c00
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 34.204719543s of 34.300907135s, submitted: 31
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bfa26c00 session 0x5634bd462b40
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 107069440 unmapped: 23027712 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634be0fb800
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634be0fb800 session 0x5634bcfce5a0
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634bf032800
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bf032800 session 0x5634bcfcf4a0
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634bf038400
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bf038400 session 0x5634bcfce780
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634bf727000
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bf727000 session 0x5634c06d7e00
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:54:52.821509+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 106790912 unmapped: 23306240 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:54:53.821651+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 106790912 unmapped: 23306240 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:54:54.821874+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634bf727000
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bf727000 session 0x5634bf7c0d20
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225465 data_alloc: 234881024 data_used: 12165120
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 106790912 unmapped: 23306240 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634be0fb800
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634be0fb800 session 0x5634bf7c1680
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:54:55.822023+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634bf032800
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bf032800 session 0x5634bf7c0000
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 106790912 unmapped: 23306240 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634bf038400
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f9d28000/0x0/0x4ffc00000, data 0x187eefc/0x1944000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bf038400 session 0x5634bf7c03c0
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:54:56.822228+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 106807296 unmapped: 23289856 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634bfa26c00
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634c0688c00
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:54:57.822423+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 106414080 unmapped: 23683072 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:54:58.822635+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 111280128 unmapped: 18817024 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:54:59.822769+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1289986 data_alloc: 234881024 data_used: 21778432
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 111288320 unmapped: 18808832 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:55:00.822957+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 111288320 unmapped: 18808832 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:55:01.823115+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f9d28000/0x0/0x4ffc00000, data 0x187eefc/0x1944000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 111288320 unmapped: 18808832 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:55:02.823236+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 111288320 unmapped: 18808832 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:55:03.823366+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 111288320 unmapped: 18808832 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:55:04.823471+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1289986 data_alloc: 234881024 data_used: 21778432
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 111288320 unmapped: 18808832 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:55:05.823614+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 111296512 unmapped: 18800640 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:55:06.823881+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 111296512 unmapped: 18800640 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:55:07.824025+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f9d28000/0x0/0x4ffc00000, data 0x187eefc/0x1944000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 111296512 unmapped: 18800640 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:55:08.824202+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 16.452789307s of 16.602790833s, submitted: 56
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 116801536 unmapped: 13295616 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:55:09.824360+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1373192 data_alloc: 234881024 data_used: 22908928
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 119529472 unmapped: 10567680 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:55:10.824533+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f949c000/0x0/0x4ffc00000, data 0x2101efc/0x21c7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 119529472 unmapped: 10567680 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:55:11.824701+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 119529472 unmapped: 10567680 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:55:12.824907+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 119529472 unmapped: 10567680 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:55:13.825035+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f949c000/0x0/0x4ffc00000, data 0x2101efc/0x21c7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 119529472 unmapped: 10567680 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f949c000/0x0/0x4ffc00000, data 0x2101efc/0x21c7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:55:14.825178+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1373648 data_alloc: 234881024 data_used: 22921216
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 119537664 unmapped: 10559488 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:55:15.825326+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 117694464 unmapped: 12402688 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:55:16.825471+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f9484000/0x0/0x4ffc00000, data 0x2122efc/0x21e8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 117694464 unmapped: 12402688 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:55:17.825598+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 117694464 unmapped: 12402688 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:55:18.825775+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 117694464 unmapped: 12402688 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f9484000/0x0/0x4ffc00000, data 0x2122efc/0x21e8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:55:19.825935+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1367704 data_alloc: 234881024 data_used: 22933504
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 117694464 unmapped: 12402688 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:55:20.826104+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 117694464 unmapped: 12402688 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:55:21.826230+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 117702656 unmapped: 12394496 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:55:22.826349+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 117702656 unmapped: 12394496 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:55:23.826507+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 14.744583130s of 15.027527809s, submitted: 96
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f9484000/0x0/0x4ffc00000, data 0x2122efc/0x21e8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 117702656 unmapped: 12394496 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:55:24.826658+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1367784 data_alloc: 234881024 data_used: 22933504
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 117702656 unmapped: 12394496 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:55:25.826820+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 117702656 unmapped: 12394496 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f947e000/0x0/0x4ffc00000, data 0x2128efc/0x21ee000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:55:26.826985+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 117702656 unmapped: 12394496 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:55:27.827119+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 117702656 unmapped: 12394496 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:55:28.827330+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 117710848 unmapped: 12386304 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:55:29.827474+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f947e000/0x0/0x4ffc00000, data 0x2128efc/0x21ee000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1367784 data_alloc: 234881024 data_used: 22933504
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 117710848 unmapped: 12386304 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:55:30.827623+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 117710848 unmapped: 12386304 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:55:31.827752+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 117727232 unmapped: 12369920 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:55:32.827896+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 117727232 unmapped: 12369920 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f947b000/0x0/0x4ffc00000, data 0x212befc/0x21f1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:55:33.828035+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 117727232 unmapped: 12369920 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:55:34.828189+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1369392 data_alloc: 234881024 data_used: 23019520
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 117727232 unmapped: 12369920 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:55:35.828564+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 117727232 unmapped: 12369920 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f947b000/0x0/0x4ffc00000, data 0x212befc/0x21f1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:55:36.828690+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.042746544s of 13.056042671s, submitted: 4
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 117751808 unmapped: 12345344 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:55:37.828823+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 117751808 unmapped: 12345344 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:55:38.828969+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bfa26c00 session 0x5634bf1da1e0
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634c0688c00 session 0x5634bd4cef00
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634bfa26c00
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bfa26c00 session 0x5634bf4c7680
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 109109248 unmapped: 20987904 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:55:39.829100+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1149888 data_alloc: 234881024 data_used: 12165120
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 109109248 unmapped: 20987904 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:55:40.829266+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa7b0000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 109109248 unmapped: 20987904 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:55:41.829486+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 109109248 unmapped: 20987904 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:55:42.829720+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 109109248 unmapped: 20987904 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:55:43.829866+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 109109248 unmapped: 20987904 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:55:44.830057+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa7b0000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1149888 data_alloc: 234881024 data_used: 12165120
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 109109248 unmapped: 20987904 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:55:45.830201+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 109109248 unmapped: 20987904 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:55:46.830328+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 109109248 unmapped: 20987904 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:55:47.830468+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 109109248 unmapped: 20987904 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:55:48.830693+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 109109248 unmapped: 20987904 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:55:49.830850+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa7b0000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1149888 data_alloc: 234881024 data_used: 12165120
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 109109248 unmapped: 20987904 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:55:50.831034+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 109109248 unmapped: 20987904 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:55:51.831164+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 109109248 unmapped: 20987904 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:55:52.831359+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 109109248 unmapped: 20987904 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:55:53.831548+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa7b0000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 109109248 unmapped: 20987904 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:55:54.831703+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1149888 data_alloc: 234881024 data_used: 12165120
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 109109248 unmapped: 20987904 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:55:55.831854+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 109109248 unmapped: 20987904 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:55:56.832018+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634bf032800
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 20.141063690s of 20.330921173s, submitted: 61
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bf032800 session 0x5634c06b2b40
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634bf038400
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bf038400 session 0x5634c06b23c0
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634bf727000
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bf727000 session 0x5634c06b2d20
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634bf032800
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bf032800 session 0x5634c06b3680
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634bfa26c00
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bfa26c00 session 0x5634bcfcfe00
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 108716032 unmapped: 21381120 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:55:57.832177+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 108716032 unmapped: 21381120 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:55:58.832452+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 108716032 unmapped: 21381120 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:55:59.832611+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa72f000/0x0/0x4ffc00000, data 0xe79e8a/0xf3d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1157691 data_alloc: 234881024 data_used: 12165120
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 108716032 unmapped: 21381120 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:56:00.832884+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634c0688c00
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634c0688c00 session 0x5634bf7be5a0
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 108716032 unmapped: 21381120 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:56:01.833045+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa72f000/0x0/0x4ffc00000, data 0xe79e8a/0xf3d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634c0689800
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634c0689c00
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 108724224 unmapped: 21372928 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:56:02.833222+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1800.1 total, 600.0 interval
                                           Cumulative writes: 8425 writes, 31K keys, 8425 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                           Cumulative WAL: 8425 writes, 2022 syncs, 4.17 writes per sync, written: 0.02 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1516 writes, 4428 keys, 1516 commit groups, 1.0 writes per commit group, ingest: 4.17 MB, 0.01 MB/s
                                           Interval WAL: 1516 writes, 667 syncs, 2.27 writes per sync, written: 0.00 GB, 0.01 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 108732416 unmapped: 21364736 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:56:03.833370+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 108732416 unmapped: 21364736 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:56:04.833624+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1161623 data_alloc: 234881024 data_used: 12693504
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 108732416 unmapped: 21364736 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:56:05.833809+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa72f000/0x0/0x4ffc00000, data 0xe79e8a/0xf3d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 108732416 unmapped: 21364736 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:56:06.834003+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 108732416 unmapped: 21364736 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:56:07.834181+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa72f000/0x0/0x4ffc00000, data 0xe79e8a/0xf3d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 108732416 unmapped: 21364736 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:56:08.834482+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 108732416 unmapped: 21364736 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:56:09.834697+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa72f000/0x0/0x4ffc00000, data 0xe79e8a/0xf3d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1161623 data_alloc: 234881024 data_used: 12693504
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 108732416 unmapped: 21364736 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:56:10.834881+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa72f000/0x0/0x4ffc00000, data 0xe79e8a/0xf3d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 108732416 unmapped: 21364736 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:56:11.835059+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 108732416 unmapped: 21364736 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:56:12.835461+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 108732416 unmapped: 21364736 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:56:13.835722+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa72f000/0x0/0x4ffc00000, data 0xe79e8a/0xf3d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 17.318338394s of 17.361791611s, submitted: 13
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:56:14.835901+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 109363200 unmapped: 20733952 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1213389 data_alloc: 234881024 data_used: 12693504
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:56:15.836033+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 108945408 unmapped: 21151744 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:56:16.836180+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 109002752 unmapped: 21094400 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:56:17.836336+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 109338624 unmapped: 20758528 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:56:18.836530+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 109346816 unmapped: 20750336 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:56:19.836671+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 109346816 unmapped: 20750336 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa051000/0x0/0x4ffc00000, data 0x1557e8a/0x161b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1220401 data_alloc: 234881024 data_used: 12890112
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:56:20.836826+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 109330432 unmapped: 20766720 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:56:21.836985+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 109330432 unmapped: 20766720 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:56:22.837199+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 109338624 unmapped: 20758528 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa04e000/0x0/0x4ffc00000, data 0x155ae8a/0x161e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:56:23.837335+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 109273088 unmapped: 20824064 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:56:24.837502+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 109273088 unmapped: 20824064 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1218617 data_alloc: 234881024 data_used: 12890112
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:56:25.837651+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 109273088 unmapped: 20824064 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:56:26.837813+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 109273088 unmapped: 20824064 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa04e000/0x0/0x4ffc00000, data 0x155ae8a/0x161e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:56:27.837965+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 109273088 unmapped: 20824064 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:56:28.838191+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 109273088 unmapped: 20824064 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 14.672043800s of 14.786133766s, submitted: 55
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa04e000/0x0/0x4ffc00000, data 0x155ae8a/0x161e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:56:29.838333+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 109273088 unmapped: 20824064 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1218857 data_alloc: 234881024 data_used: 12890112
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:56:30.838468+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 109281280 unmapped: 20815872 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:56:31.839035+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 109281280 unmapped: 20815872 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:56:32.839989+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 109281280 unmapped: 20815872 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634beabcc00
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634beabcc00 session 0x5634bf08b860
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634c23a3400
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634c23a3400 session 0x5634bf08a780
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634beabcc00
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634beabcc00 session 0x5634bf08ab40
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634bfa26c00
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bfa26c00 session 0x5634bf08bc20
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634bf032800
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bf032800 session 0x5634bf08ba40
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634c0689000
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634c0689000 session 0x5634bf53b4a0
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634c0688c00
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634c0688c00 session 0x5634bf53ab40
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634beabcc00
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:56:33.840153+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 109297664 unmapped: 20799488 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634beabcc00 session 0x5634bf53a1e0
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634bf032800
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bf032800 session 0x5634bf53ad20
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:56:34.840953+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 109297664 unmapped: 20799488 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f9bc9000/0x0/0x4ffc00000, data 0x19dee9a/0x1aa3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1253835 data_alloc: 234881024 data_used: 12890112
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:56:35.841168+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 109297664 unmapped: 20799488 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f9bc9000/0x0/0x4ffc00000, data 0x19dee9a/0x1aa3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:56:36.841467+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 109297664 unmapped: 20799488 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:56:37.841629+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 109297664 unmapped: 20799488 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:56:38.841849+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 109297664 unmapped: 20799488 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f9bc9000/0x0/0x4ffc00000, data 0x19dee9a/0x1aa3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:56:39.841994+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 109297664 unmapped: 20799488 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634bfa26c00
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bfa26c00 session 0x5634bf53af00
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1253835 data_alloc: 234881024 data_used: 12890112
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634c0689000
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634c0689000 session 0x5634bf53bc20
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:56:40.842115+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 109297664 unmapped: 20799488 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634c23a3c00
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634c23a3c00 session 0x5634bf53ba40
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634beabcc00
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.216509819s of 12.251233101s, submitted: 6
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634beabcc00 session 0x5634c06b0f00
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:56:41.842323+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 109617152 unmapped: 20480000 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634bf032800
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634bfa26c00
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:56:42.843022+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 109625344 unmapped: 20471808 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:56:43.843161+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 111108096 unmapped: 18989056 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:56:44.843515+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f9ba4000/0x0/0x4ffc00000, data 0x1a02eaa/0x1ac8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 111697920 unmapped: 18399232 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1291410 data_alloc: 234881024 data_used: 17616896
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:56:45.843652+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 111697920 unmapped: 18399232 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f9ba4000/0x0/0x4ffc00000, data 0x1a02eaa/0x1ac8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:56:46.843819+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 111697920 unmapped: 18399232 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f9ba4000/0x0/0x4ffc00000, data 0x1a02eaa/0x1ac8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:56:47.843935+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 111697920 unmapped: 18399232 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:56:48.844328+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 111697920 unmapped: 18399232 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:56:49.844504+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 111697920 unmapped: 18399232 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f9ba3000/0x0/0x4ffc00000, data 0x1a02eaa/0x1ac8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:56:50.844690+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1291578 data_alloc: 234881024 data_used: 17616896
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 111697920 unmapped: 18399232 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:56:51.844822+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 111697920 unmapped: 18399232 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:56:52.845102+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 111697920 unmapped: 18399232 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:56:53.845229+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 111697920 unmapped: 18399232 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.640722275s of 12.708586693s, submitted: 10
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f9ba3000/0x0/0x4ffc00000, data 0x1a02eaa/0x1ac8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:56:54.845352+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 111271936 unmapped: 18825216 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:56:55.845463+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1298216 data_alloc: 234881024 data_used: 17735680
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 111198208 unmapped: 18898944 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:56:56.845649+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 111198208 unmapped: 18898944 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f9b5e000/0x0/0x4ffc00000, data 0x1a48eaa/0x1b0e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:56:57.845772+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 111198208 unmapped: 18898944 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:56:58.846030+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 111198208 unmapped: 18898944 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:56:59.846160+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 111198208 unmapped: 18898944 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:57:00.846283+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1300012 data_alloc: 234881024 data_used: 17735680
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 111198208 unmapped: 18898944 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f9b5e000/0x0/0x4ffc00000, data 0x1a48eaa/0x1b0e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:57:01.846551+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 111198208 unmapped: 18898944 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bf032800 session 0x5634bfe221e0
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bfa26c00 session 0x5634bdc7c000
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:57:02.846675+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634c0689000
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 109428736 unmapped: 20668416 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634c0689000 session 0x5634c029ad20
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:57:03.846816+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 109436928 unmapped: 20660224 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa04d000/0x0/0x4ffc00000, data 0x155be8a/0x161f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:57:04.847049+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 109436928 unmapped: 20660224 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa04d000/0x0/0x4ffc00000, data 0x155be8a/0x161f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:57:05.847363+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1224813 data_alloc: 234881024 data_used: 12890112
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 109436928 unmapped: 20660224 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:57:06.847609+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.472726822s of 12.585700035s, submitted: 33
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634c0689800 session 0x5634c06b0b40
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634c0689c00 session 0x5634bf7c1680
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 109436928 unmapped: 20660224 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634c0689800
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634c0689800 session 0x5634bd20b2c0
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:57:07.847810+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa04d000/0x0/0x4ffc00000, data 0x155be8a/0x161f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 108650496 unmapped: 21446656 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:57:08.848042+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 108650496 unmapped: 21446656 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:57:09.848234+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 108650496 unmapped: 21446656 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:57:10.848438+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1162905 data_alloc: 234881024 data_used: 12165120
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 107560960 unmapped: 22536192 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:57:11.848601+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 107560960 unmapped: 22536192 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:57:12.848729+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 107560960 unmapped: 22536192 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:57:13.848944+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa7b1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 107560960 unmapped: 22536192 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:57:14.849227+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 107560960 unmapped: 22536192 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:57:15.849474+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1162905 data_alloc: 234881024 data_used: 12165120
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 107560960 unmapped: 22536192 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:57:16.850009+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 107560960 unmapped: 22536192 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:57:17.850238+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 107560960 unmapped: 22536192 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:57:18.850447+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 107560960 unmapped: 22536192 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:57:19.850719+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa7b1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 107560960 unmapped: 22536192 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:57:20.850934+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1162905 data_alloc: 234881024 data_used: 12165120
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 107560960 unmapped: 22536192 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 14.771172523s of 14.867496490s, submitted: 30
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:57:21.851078+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 107601920 unmapped: 22495232 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:57:22.851248+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 108675072 unmapped: 21422080 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:57:23.851483+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 108904448 unmapped: 21192704 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:57:24.851699+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 109068288 unmapped: 21028864 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa7b1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:57:25.851914+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1162905 data_alloc: 234881024 data_used: 12165120
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 109076480 unmapped: 21020672 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:57:26.852088+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 109076480 unmapped: 21020672 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:57:27.852204+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 109076480 unmapped: 21020672 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634beabcc00
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634beabcc00 session 0x5634bf08b2c0
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634bf032800
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bf032800 session 0x5634bf7c05a0
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634bfa26c00
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bfa26c00 session 0x5634bd463e00
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634bfa26c00
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bfa26c00 session 0x5634c06b3680
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634beabcc00
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634beabcc00 session 0x5634bef0d680
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:57:28.852375+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 109731840 unmapped: 24567808 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:57:29.852577+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa14c000/0x0/0x4ffc00000, data 0x145ce8a/0x1520000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 109740032 unmapped: 24559616 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:57:30.852724+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1212735 data_alloc: 234881024 data_used: 12165120
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 109740032 unmapped: 24559616 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:57:31.852889+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 109740032 unmapped: 24559616 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:57:32.853044+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 109740032 unmapped: 24559616 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa14c000/0x0/0x4ffc00000, data 0x145ce8a/0x1520000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:57:33.853232+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634bf032800
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 109764608 unmapped: 24535040 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bf032800 session 0x5634bcfe34a0
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:57:34.853416+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634c0689800
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634c0689c00
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 109780992 unmapped: 24518656 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:57:35.853677+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1212735 data_alloc: 234881024 data_used: 12165120
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 110157824 unmapped: 24141824 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:57:36.853808+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 112607232 unmapped: 21692416 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:57:37.853937+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa14c000/0x0/0x4ffc00000, data 0x145ce8a/0x1520000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 112640000 unmapped: 21659648 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:57:38.854107+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 112640000 unmapped: 21659648 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa14c000/0x0/0x4ffc00000, data 0x145ce8a/0x1520000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:57:39.854226+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 112672768 unmapped: 21626880 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:57:40.854370+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1257575 data_alloc: 234881024 data_used: 18759680
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 112672768 unmapped: 21626880 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:57:41.854459+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 112672768 unmapped: 21626880 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:57:42.854601+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 112672768 unmapped: 21626880 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:57:43.854719+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa14c000/0x0/0x4ffc00000, data 0x145ce8a/0x1520000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 112672768 unmapped: 21626880 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:57:44.855065+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 112672768 unmapped: 21626880 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa14c000/0x0/0x4ffc00000, data 0x145ce8a/0x1520000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:57:45.855181+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa14c000/0x0/0x4ffc00000, data 0x145ce8a/0x1520000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1257575 data_alloc: 234881024 data_used: 18759680
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 112680960 unmapped: 21618688 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa14c000/0x0/0x4ffc00000, data 0x145ce8a/0x1520000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:57:46.855427+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 24.294692993s of 25.236562729s, submitted: 260
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 119414784 unmapped: 14884864 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:57:47.855616+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121348096 unmapped: 12951552 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:57:48.855785+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121348096 unmapped: 12951552 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:57:49.855978+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f9abe000/0x0/0x4ffc00000, data 0x1adce8a/0x1ba0000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121380864 unmapped: 12918784 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:57:50.856133+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1329075 data_alloc: 234881024 data_used: 20541440
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121380864 unmapped: 12918784 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f9abe000/0x0/0x4ffc00000, data 0x1adce8a/0x1ba0000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:57:51.856276+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121421824 unmapped: 12877824 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:57:52.856432+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121421824 unmapped: 12877824 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:57:53.856669+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121421824 unmapped: 12877824 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:57:54.856806+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f9abe000/0x0/0x4ffc00000, data 0x1adce8a/0x1ba0000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121454592 unmapped: 12845056 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:57:55.856954+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1329075 data_alloc: 234881024 data_used: 20541440
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121454592 unmapped: 12845056 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:57:56.857090+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121454592 unmapped: 12845056 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:57:57.857254+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121462784 unmapped: 12836864 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:57:58.857444+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f9abe000/0x0/0x4ffc00000, data 0x1adce8a/0x1ba0000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121462784 unmapped: 12836864 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:57:59.857569+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bd23a800 session 0x5634bcfe05a0
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634c0689000
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121462784 unmapped: 12836864 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:58:00.857703+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1329379 data_alloc: 234881024 data_used: 20549632
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121462784 unmapped: 12836864 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:58:01.857906+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121470976 unmapped: 12828672 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:58:02.858097+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121470976 unmapped: 12828672 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f9abe000/0x0/0x4ffc00000, data 0x1adce8a/0x1ba0000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:58:03.858262+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f9abe000/0x0/0x4ffc00000, data 0x1adce8a/0x1ba0000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121479168 unmapped: 12820480 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:58:04.858470+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f9abe000/0x0/0x4ffc00000, data 0x1adce8a/0x1ba0000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121479168 unmapped: 12820480 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:58:05.858607+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1329379 data_alloc: 234881024 data_used: 20549632
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121479168 unmapped: 12820480 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:58:06.858780+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121479168 unmapped: 12820480 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:58:07.858974+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121479168 unmapped: 12820480 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:58:08.859166+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f9abe000/0x0/0x4ffc00000, data 0x1adce8a/0x1ba0000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121479168 unmapped: 12820480 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:58:09.859344+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121495552 unmapped: 12804096 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:58:10.859484+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f9abe000/0x0/0x4ffc00000, data 0x1adce8a/0x1ba0000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1330139 data_alloc: 234881024 data_used: 20570112
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121495552 unmapped: 12804096 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:58:11.859622+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121495552 unmapped: 12804096 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:58:12.859788+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121495552 unmapped: 12804096 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634c23a3800
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 26.572525024s of 26.743309021s, submitted: 82
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f9abe000/0x0/0x4ffc00000, data 0x1adce8a/0x1ba0000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:58:13.859932+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634c23a3800 session 0x5634c06e6960
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634c23a3000
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634c23a3000 session 0x5634c06b05a0
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634beabcc00
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634beabcc00 session 0x5634bf08a1e0
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634bf032800
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bf032800 session 0x5634bcfcf860
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634bfa26c00
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bfa26c00 session 0x5634bfe23c20
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 119259136 unmapped: 15040512 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:58:14.860061+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 119259136 unmapped: 15040512 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:58:15.860184+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1350053 data_alloc: 234881024 data_used: 20570112
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 119259136 unmapped: 15040512 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:58:16.860507+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f972c000/0x0/0x4ffc00000, data 0x1e7ce8a/0x1f40000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 119259136 unmapped: 15040512 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:58:17.860704+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 119267328 unmapped: 15032320 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:58:18.860877+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 119267328 unmapped: 15032320 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:58:19.861022+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f972a000/0x0/0x4ffc00000, data 0x1e7de8a/0x1f41000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 119267328 unmapped: 15032320 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634c23a3800
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634c23a3800 session 0x5634bfd5e780
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:58:20.861183+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1350693 data_alloc: 234881024 data_used: 20570112
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634c23a3400
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634bf038400
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 119267328 unmapped: 15032320 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:58:21.861336+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 119422976 unmapped: 14876672 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:58:22.861520+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122093568 unmapped: 12206080 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:58:23.861627+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122159104 unmapped: 12140544 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:58:24.861817+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122159104 unmapped: 12140544 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:58:25.861993+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f972a000/0x0/0x4ffc00000, data 0x1e7de8a/0x1f41000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1376837 data_alloc: 234881024 data_used: 24346624
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122159104 unmapped: 12140544 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:58:26.862146+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122159104 unmapped: 12140544 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:58:27.862825+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122159104 unmapped: 12140544 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:58:28.862990+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122159104 unmapped: 12140544 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:58:29.863136+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122159104 unmapped: 12140544 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:58:30.863234+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1376837 data_alloc: 234881024 data_used: 24346624
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f972a000/0x0/0x4ffc00000, data 0x1e7de8a/0x1f41000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122159104 unmapped: 12140544 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:58:31.863385+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122159104 unmapped: 12140544 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 18.929925919s of 18.974597931s, submitted: 10
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:58:32.863606+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 125714432 unmapped: 8585216 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f9393000/0x0/0x4ffc00000, data 0x220de8a/0x22d1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:58:33.863778+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 125214720 unmapped: 9084928 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:58:34.863870+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 125214720 unmapped: 9084928 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:58:35.864070+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1412031 data_alloc: 234881024 data_used: 24842240
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 125214720 unmapped: 9084928 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:58:36.864214+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f9390000/0x0/0x4ffc00000, data 0x2218e8a/0x22dc000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 125247488 unmapped: 9052160 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:58:37.864355+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 125247488 unmapped: 9052160 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:58:38.864613+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 125247488 unmapped: 9052160 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:58:39.864765+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 125247488 unmapped: 9052160 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:58:40.864903+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1412047 data_alloc: 234881024 data_used: 24842240
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 125255680 unmapped: 9043968 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:58:41.865090+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f9390000/0x0/0x4ffc00000, data 0x2218e8a/0x22dc000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 125255680 unmapped: 9043968 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:58:42.865260+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 125255680 unmapped: 9043968 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:58:43.865457+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 125255680 unmapped: 9043968 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:58:44.865594+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 125255680 unmapped: 9043968 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:58:45.865740+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1412047 data_alloc: 234881024 data_used: 24842240
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 125263872 unmapped: 9035776 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f9390000/0x0/0x4ffc00000, data 0x2218e8a/0x22dc000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:58:46.865892+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 125263872 unmapped: 9035776 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:58:47.866111+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 125263872 unmapped: 9035776 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:58:48.866479+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 16.664098740s of 16.812852859s, submitted: 44
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 125419520 unmapped: 8880128 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:58:49.866626+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 125419520 unmapped: 8880128 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:58:50.866787+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1413159 data_alloc: 234881024 data_used: 24825856
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 125427712 unmapped: 8871936 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:58:51.866982+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f9390000/0x0/0x4ffc00000, data 0x2218e8a/0x22dc000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 125427712 unmapped: 8871936 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:58:52.867108+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 125427712 unmapped: 8871936 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f9390000/0x0/0x4ffc00000, data 0x2218e8a/0x22dc000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:58:53.867297+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 125427712 unmapped: 8871936 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:58:54.867465+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 125427712 unmapped: 8871936 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:58:55.867604+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1413159 data_alloc: 234881024 data_used: 24825856
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 125427712 unmapped: 8871936 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:58:56.867734+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 125435904 unmapped: 8863744 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f9390000/0x0/0x4ffc00000, data 0x2218e8a/0x22dc000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:58:57.867930+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 125444096 unmapped: 8855552 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:58:58.868127+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 125444096 unmapped: 8855552 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:58:59.868338+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 125444096 unmapped: 8855552 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:59:00.868470+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1413159 data_alloc: 234881024 data_used: 24825856
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 125444096 unmapped: 8855552 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:59:01.868627+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 125452288 unmapped: 8847360 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:59:02.868775+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f9390000/0x0/0x4ffc00000, data 0x2218e8a/0x22dc000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 125452288 unmapped: 8847360 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:59:03.868921+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 125452288 unmapped: 8847360 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f9390000/0x0/0x4ffc00000, data 0x2218e8a/0x22dc000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:59:04.869121+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 15.351060867s of 15.361025810s, submitted: 14
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 125509632 unmapped: 8790016 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:59:05.869313+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1411143 data_alloc: 234881024 data_used: 24825856
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 125509632 unmapped: 8790016 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:59:06.869500+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f9390000/0x0/0x4ffc00000, data 0x2218e8a/0x22dc000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 125517824 unmapped: 8781824 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:59:07.869700+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 125517824 unmapped: 8781824 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:59:08.869916+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 125517824 unmapped: 8781824 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:59:09.870099+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 125517824 unmapped: 8781824 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:59:10.870301+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1411311 data_alloc: 234881024 data_used: 24825856
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 125517824 unmapped: 8781824 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f9390000/0x0/0x4ffc00000, data 0x2218e8a/0x22dc000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:59:11.870480+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f9390000/0x0/0x4ffc00000, data 0x2218e8a/0x22dc000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 125517824 unmapped: 8781824 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:59:12.870626+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 125526016 unmapped: 8773632 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:59:13.870789+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 125526016 unmapped: 8773632 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:59:14.870923+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 125526016 unmapped: 8773632 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:59:15.871059+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: mgrc ms_handle_reset ms_handle_reset con 0x5634bddcfc00
Nov 24 10:05:56 compute-1 ceph-osd[77497]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/3769522832
Nov 24 10:05:56 compute-1 ceph-osd[77497]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/3769522832,v1:192.168.122.100:6801/3769522832]
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: get_auth_request con 0x5634c0688c00 auth_method 0
Nov 24 10:05:56 compute-1 ceph-osd[77497]: mgrc handle_mgr_configure stats_period=5
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1411311 data_alloc: 234881024 data_used: 24825856
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 125607936 unmapped: 8691712 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:59:16.871225+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bd1e9000 session 0x5634bf4570e0
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634bf538400
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bf538800 session 0x5634bfa2bc20
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634bd1e9000
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f9390000/0x0/0x4ffc00000, data 0x2218e8a/0x22dc000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 125607936 unmapped: 8691712 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:59:17.871390+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 125607936 unmapped: 8691712 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:59:18.871554+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 125607936 unmapped: 8691712 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f9390000/0x0/0x4ffc00000, data 0x2218e8a/0x22dc000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:59:19.871724+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 125607936 unmapped: 8691712 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:59:20.871873+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1411311 data_alloc: 234881024 data_used: 24825856
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f9390000/0x0/0x4ffc00000, data 0x2218e8a/0x22dc000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 125616128 unmapped: 8683520 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:59:21.872068+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 125616128 unmapped: 8683520 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:59:22.872227+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 125616128 unmapped: 8683520 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:59:23.872366+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 125624320 unmapped: 8675328 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:59:24.873109+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634c23a3400 session 0x5634c06e6d20
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bf038400 session 0x5634bf225680
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 125624320 unmapped: 8675328 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:59:25.873632+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634bf032800
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 21.096628189s of 21.114942551s, submitted: 5
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1327619 data_alloc: 234881024 data_used: 20619264
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122904576 unmapped: 11395072 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f9390000/0x0/0x4ffc00000, data 0x2218e8a/0x22dc000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:59:26.873825+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bf032800 session 0x5634bf82ed20
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122904576 unmapped: 11395072 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:59:27.873977+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122904576 unmapped: 11395072 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:59:28.874258+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f9acb000/0x0/0x4ffc00000, data 0x1adde8a/0x1ba1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122904576 unmapped: 11395072 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:59:29.874426+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122904576 unmapped: 11395072 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:59:30.874579+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1327619 data_alloc: 234881024 data_used: 20619264
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122904576 unmapped: 11395072 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:59:31.874821+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122904576 unmapped: 11395072 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:59:32.874983+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122904576 unmapped: 11395072 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:59:33.875139+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634c0689800 session 0x5634bcfe32c0
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634c0689c00 session 0x5634be1ad680
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634bf032800
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bf032800 session 0x5634bd20b680
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 117669888 unmapped: 16629760 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:59:34.875335+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f9acb000/0x0/0x4ffc00000, data 0x1adde8a/0x1ba1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 117669888 unmapped: 16629760 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:59:35.875483+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1176425 data_alloc: 234881024 data_used: 12165120
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa7b1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 117669888 unmapped: 16629760 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:59:36.875644+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 117669888 unmapped: 16629760 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:59:37.875814+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 117669888 unmapped: 16629760 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:59:38.876033+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 117669888 unmapped: 16629760 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:59:39.876181+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 117669888 unmapped: 16629760 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:59:40.876335+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1176425 data_alloc: 234881024 data_used: 12165120
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 117669888 unmapped: 16629760 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:59:41.876604+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa7b1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 117669888 unmapped: 16629760 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:59:42.876815+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 117669888 unmapped: 16629760 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:59:43.876982+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 117669888 unmapped: 16629760 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:59:44.877143+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 117669888 unmapped: 16629760 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:59:45.877327+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa7b1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1176425 data_alloc: 234881024 data_used: 12165120
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa7b1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 117669888 unmapped: 16629760 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:59:46.877543+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa7b1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 117669888 unmapped: 16629760 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:59:47.877783+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 117669888 unmapped: 16629760 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:59:48.877973+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:59:49.878133+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 117669888 unmapped: 16629760 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:59:50.878315+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 117669888 unmapped: 16629760 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1176425 data_alloc: 234881024 data_used: 12165120
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:59:51.878466+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 117669888 unmapped: 16629760 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:59:52.878613+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 117669888 unmapped: 16629760 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa7b1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:59:53.878791+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 117669888 unmapped: 16629760 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:59:54.878949+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 117669888 unmapped: 16629760 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:59:55.879087+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 117669888 unmapped: 16629760 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1176425 data_alloc: 234881024 data_used: 12165120
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:59:56.879284+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 117669888 unmapped: 16629760 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa7b1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:59:57.879494+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 117669888 unmapped: 16629760 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:59:58.879675+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 117669888 unmapped: 16629760 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa7b1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:59:59.879841+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634bf038400
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 117669888 unmapped: 16629760 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 33.783638000s of 33.895526886s, submitted: 33
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bf038400 session 0x5634c06b2780
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634c0689800
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634c0689800 session 0x5634bf7c4b40
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634c23a3400
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634c23a3400 session 0x5634bf7c41e0
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634bfa26c00
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bfa26c00 session 0x5634bf7c52c0
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634bfa26c00
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bfa26c00 session 0x5634c02e0b40
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:00:00.879960+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 116334592 unmapped: 17965056 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1220693 data_alloc: 234881024 data_used: 12165120
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa218000/0x0/0x4ffc00000, data 0x1390e8a/0x1454000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:00:01.880106+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 116334592 unmapped: 17965056 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa218000/0x0/0x4ffc00000, data 0x1390e8a/0x1454000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:00:02.880271+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 116334592 unmapped: 17965056 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:00:03.880453+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 116334592 unmapped: 17965056 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:00:04.880787+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 116334592 unmapped: 17965056 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634bf032800
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bf032800 session 0x5634c02e01e0
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634bf038400
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bf038400 session 0x5634bf7be960
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:00:05.880924+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 116350976 unmapped: 17948672 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1220693 data_alloc: 234881024 data_used: 12165120
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634c0689800
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634c0689800 session 0x5634bf7bfc20
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634c23a3400
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:00:06.881061+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634c23a3400 session 0x5634bf7bf4a0
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 116350976 unmapped: 17948672 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634c23a3400
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634bf032800
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:00:07.881228+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 116350976 unmapped: 17948672 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa218000/0x0/0x4ffc00000, data 0x1390e8a/0x1454000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:00:08.881437+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 116940800 unmapped: 17358848 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:00:09.881595+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 116940800 unmapped: 17358848 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:00:10.881762+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 116940800 unmapped: 17358848 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa218000/0x0/0x4ffc00000, data 0x1390e8a/0x1454000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1260866 data_alloc: 234881024 data_used: 18022400
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:00:11.881986+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 116940800 unmapped: 17358848 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634c23a3400 session 0x5634bf82eb40
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.973909378s of 12.041707993s, submitted: 15
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bf032800 session 0x5634bf82fc20
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634bf038400
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:00:12.882126+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bf038400 session 0x5634bf7c52c0
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 114720768 unmapped: 19578880 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa7b1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:00:13.882264+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 114720768 unmapped: 19578880 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:00:14.882432+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 114720768 unmapped: 19578880 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa7b1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa7b1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:00:15.882586+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 114720768 unmapped: 19578880 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa7b1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1181186 data_alloc: 234881024 data_used: 12165120
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:00:16.882760+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 114720768 unmapped: 19578880 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa7b1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:00:17.882892+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 114720768 unmapped: 19578880 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa7b1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:00:18.883015+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 114720768 unmapped: 19578880 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:00:19.883145+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 114720768 unmapped: 19578880 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa7b1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:00:20.883279+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 114720768 unmapped: 19578880 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1181186 data_alloc: 234881024 data_used: 12165120
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:00:21.883384+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 114720768 unmapped: 19578880 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:00:22.883510+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 114720768 unmapped: 19578880 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:00:23.883601+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 114720768 unmapped: 19578880 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:00:24.883712+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 114720768 unmapped: 19578880 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa7b1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:00:25.883815+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 114720768 unmapped: 19578880 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1181186 data_alloc: 234881024 data_used: 12165120
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:00:26.883946+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 114720768 unmapped: 19578880 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:00:27.884111+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 114720768 unmapped: 19578880 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:00:28.884344+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 114720768 unmapped: 19578880 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634bfa26c00
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bfa26c00 session 0x5634bf53a000
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634c0689800
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634c0689800 session 0x5634bcfcf860
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634bf032800
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bf032800 session 0x5634bf08a1e0
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634bf038400
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bf038400 session 0x5634c06b05a0
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634bfa26c00
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 17.199029922s of 17.423206329s, submitted: 27
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bfa26c00 session 0x5634bfcb32c0
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634c23a3400
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634c23a3400 session 0x5634bcfe14a0
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634c23a3800
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634c23a3800 session 0x5634c06e72c0
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634c23a3800
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634c23a3800 session 0x5634c06e6f00
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634bf032800
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bf032800 session 0x5634c029a5a0
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:00:29.884484+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 114753536 unmapped: 23748608 heap: 138502144 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:00:30.884558+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 114753536 unmapped: 23748608 heap: 138502144 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa1de000/0x0/0x4ffc00000, data 0x13c9e9a/0x148e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1226040 data_alloc: 234881024 data_used: 12165120
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:00:31.884635+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 114753536 unmapped: 23748608 heap: 138502144 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:00:32.884796+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 114753536 unmapped: 23748608 heap: 138502144 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634bf038400
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bf038400 session 0x5634bf18de00
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:00:33.885019+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634bfa26c00
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bfa26c00 session 0x5634bdc7c780
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 114769920 unmapped: 23732224 heap: 138502144 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634c23a3400
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634c23a3400 session 0x5634bf444000
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:00:34.885169+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634bf032800
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bf032800 session 0x5634c02e1860
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 114778112 unmapped: 23724032 heap: 138502144 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa1de000/0x0/0x4ffc00000, data 0x13c9e9a/0x148e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:00:35.885337+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 114778112 unmapped: 23724032 heap: 138502144 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634bf038400
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634bfa26c00
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1227854 data_alloc: 234881024 data_used: 12165120
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:00:36.885558+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 114786304 unmapped: 23715840 heap: 138502144 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa1dd000/0x0/0x4ffc00000, data 0x13c9eaa/0x148f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:00:37.885737+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 116465664 unmapped: 22036480 heap: 138502144 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:00:38.886015+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 116465664 unmapped: 22036480 heap: 138502144 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:00:39.886231+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 116465664 unmapped: 22036480 heap: 138502144 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bf038400 session 0x5634bf82f0e0
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bfa26c00 session 0x5634bf53bc20
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634bf726800
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.103581429s of 11.147413254s, submitted: 8
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bf726800 session 0x5634bf7c52c0
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:00:40.886452+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 113246208 unmapped: 25255936 heap: 138502144 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa7b1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1186069 data_alloc: 234881024 data_used: 12165120
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:00:41.886591+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 113246208 unmapped: 25255936 heap: 138502144 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:00:42.886726+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 113246208 unmapped: 25255936 heap: 138502144 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:00:43.886879+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 113246208 unmapped: 25255936 heap: 138502144 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:00:44.887006+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 113246208 unmapped: 25255936 heap: 138502144 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa7b1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:00:45.887178+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 113246208 unmapped: 25255936 heap: 138502144 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1186069 data_alloc: 234881024 data_used: 12165120
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:00:46.887324+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 113246208 unmapped: 25255936 heap: 138502144 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa7b1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:00:47.887506+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 113246208 unmapped: 25255936 heap: 138502144 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:00:48.887739+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 113246208 unmapped: 25255936 heap: 138502144 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:00:49.887865+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 113246208 unmapped: 25255936 heap: 138502144 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:00:50.887993+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 113246208 unmapped: 25255936 heap: 138502144 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1186069 data_alloc: 234881024 data_used: 12165120
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:00:51.888178+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 113246208 unmapped: 25255936 heap: 138502144 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:00:52.888305+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 113246208 unmapped: 25255936 heap: 138502144 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa7b1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:00:53.888467+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 113246208 unmapped: 25255936 heap: 138502144 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:00:54.888611+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 113246208 unmapped: 25255936 heap: 138502144 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:00:55.888745+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 113246208 unmapped: 25255936 heap: 138502144 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1186069 data_alloc: 234881024 data_used: 12165120
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:00:56.888894+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 113246208 unmapped: 25255936 heap: 138502144 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:00:57.889044+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 113246208 unmapped: 25255936 heap: 138502144 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa7b1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:00:58.889223+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 113246208 unmapped: 25255936 heap: 138502144 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:00:59.889364+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 113246208 unmapped: 25255936 heap: 138502144 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa7b1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:01:00.889502+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 113246208 unmapped: 25255936 heap: 138502144 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1186069 data_alloc: 234881024 data_used: 12165120
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:01:01.889642+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 113246208 unmapped: 25255936 heap: 138502144 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:01:02.889797+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 113246208 unmapped: 25255936 heap: 138502144 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa7b1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:01:03.890017+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 113246208 unmapped: 25255936 heap: 138502144 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:01:04.890170+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 113246208 unmapped: 25255936 heap: 138502144 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:01:05.890312+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 113246208 unmapped: 25255936 heap: 138502144 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:01:06.890493+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1186069 data_alloc: 234881024 data_used: 12165120
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 113246208 unmapped: 25255936 heap: 138502144 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634c025e400
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634c025e400 session 0x5634bf7bf4a0
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634c025e400
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634c025e400 session 0x5634bfa2a1e0
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634bf032800
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bf032800 session 0x5634bfa2bc20
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:01:07.890597+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634bf038400
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bf038400 session 0x5634bfa2a3c0
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634bf726800
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 113246208 unmapped: 25255936 heap: 138502144 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 27.712280273s of 27.752235413s, submitted: 13
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:01:08.890788+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 113459200 unmapped: 29245440 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa296000/0x0/0x4ffc00000, data 0x1311e9a/0x13d6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bf726800 session 0x5634bfa2a000
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634bfa26c00
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bfa26c00 session 0x5634bf2252c0
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634bf032800
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bf032800 session 0x5634bf224960
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634bf038400
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bf038400 session 0x5634bf08a1e0
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634bf726800
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bf726800 session 0x5634bf7bef00
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:01:09.890921+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 113139712 unmapped: 29564928 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:01:10.891065+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 113139712 unmapped: 29564928 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:01:11.891190+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1228500 data_alloc: 234881024 data_used: 12165120
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 113139712 unmapped: 29564928 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa296000/0x0/0x4ffc00000, data 0x1311e9a/0x13d6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:01:12.891311+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 113139712 unmapped: 29564928 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:01:13.891436+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 113139712 unmapped: 29564928 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:01:14.891588+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 113139712 unmapped: 29564928 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa296000/0x0/0x4ffc00000, data 0x1311e9a/0x13d6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:01:15.891740+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 113139712 unmapped: 29564928 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:01:16.891876+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1228500 data_alloc: 234881024 data_used: 12165120
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 113139712 unmapped: 29564928 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:01:17.892061+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 113139712 unmapped: 29564928 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634c025e000
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:01:18.892287+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.442607880s of 10.648617744s, submitted: 19
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634c025e000 session 0x5634bdc7c780
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa296000/0x0/0x4ffc00000, data 0x1311e9a/0x13d6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 113442816 unmapped: 29261824 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634bfc0a800
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634bfc0b800
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:01:19.892577+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 113500160 unmapped: 29204480 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:01:20.892783+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 114450432 unmapped: 28254208 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:01:21.893213+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1259524 data_alloc: 234881024 data_used: 16396288
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 114810880 unmapped: 27893760 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa272000/0x0/0x4ffc00000, data 0x1335e9a/0x13fa000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:01:22.893475+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 114810880 unmapped: 27893760 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:01:23.893851+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 114810880 unmapped: 27893760 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:01:24.894131+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 114810880 unmapped: 27893760 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa272000/0x0/0x4ffc00000, data 0x1335e9a/0x13fa000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:01:25.894429+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 114810880 unmapped: 27893760 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:01:26.894601+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1259524 data_alloc: 234881024 data_used: 16396288
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 114810880 unmapped: 27893760 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:01:27.894925+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 114810880 unmapped: 27893760 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa272000/0x0/0x4ffc00000, data 0x1335e9a/0x13fa000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:01:28.895244+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 114810880 unmapped: 27893760 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:01:29.895505+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa272000/0x0/0x4ffc00000, data 0x1335e9a/0x13fa000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 114819072 unmapped: 27885568 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:01:30.895777+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 114819072 unmapped: 27885568 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:01:31.895924+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.894562721s of 12.903012276s, submitted: 2
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1261912 data_alloc: 234881024 data_used: 16449536
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 115113984 unmapped: 27590656 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:01:32.896144+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 115122176 unmapped: 27582464 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:01:33.896358+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 116736000 unmapped: 25968640 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:01:34.896583+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 116736000 unmapped: 25968640 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:01:35.896750+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa218000/0x0/0x4ffc00000, data 0x1380e9a/0x1445000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 116736000 unmapped: 25968640 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa218000/0x0/0x4ffc00000, data 0x1380e9a/0x1445000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:01:36.896886+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1274846 data_alloc: 234881024 data_used: 16560128
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 116736000 unmapped: 25968640 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:01:37.897009+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 116736000 unmapped: 25968640 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:01:38.897187+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 116736000 unmapped: 25968640 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:01:39.897470+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa218000/0x0/0x4ffc00000, data 0x1380e9a/0x1445000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 116736000 unmapped: 25968640 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:01:40.897910+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 116736000 unmapped: 25968640 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:01:41.898163+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1274846 data_alloc: 234881024 data_used: 16560128
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 116736000 unmapped: 25968640 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:01:42.898432+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 116736000 unmapped: 25968640 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:01:43.898630+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 116736000 unmapped: 25968640 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:01:44.898934+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa218000/0x0/0x4ffc00000, data 0x1380e9a/0x1445000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 116736000 unmapped: 25968640 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:01:45.899121+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 116736000 unmapped: 25968640 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 14.734338760s of 14.816822052s, submitted: 27
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bfc0a800 session 0x5634c02e01e0
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bfc0b800 session 0x5634bcfe1a40
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:01:46.899457+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634bfc0a800
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1270054 data_alloc: 234881024 data_used: 16560128
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bfc0a800 session 0x5634bf08be00
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 114679808 unmapped: 28024832 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:01:47.899649+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 114679808 unmapped: 28024832 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:01:48.899864+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa7b1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 114679808 unmapped: 28024832 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:01:49.900009+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 114679808 unmapped: 28024832 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:01:50.900228+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 114679808 unmapped: 28024832 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:01:51.900455+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1193664 data_alloc: 234881024 data_used: 12165120
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 114679808 unmapped: 28024832 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:01:52.900674+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 114679808 unmapped: 28024832 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:01:53.900861+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa7b1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 114679808 unmapped: 28024832 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:01:54.901032+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 114679808 unmapped: 28024832 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:01:55.901161+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa7b1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 114679808 unmapped: 28024832 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:01:56.901320+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1193664 data_alloc: 234881024 data_used: 12165120
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 114679808 unmapped: 28024832 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:01:57.901442+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 114679808 unmapped: 28024832 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:01:58.901614+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 114679808 unmapped: 28024832 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:01:59.902015+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 114679808 unmapped: 28024832 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:02:00.902197+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 114679808 unmapped: 28024832 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa7b1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:02:01.902360+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1193664 data_alloc: 234881024 data_used: 12165120
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 114679808 unmapped: 28024832 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:02:02.902520+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 114679808 unmapped: 28024832 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:02:03.902636+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 114679808 unmapped: 28024832 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:02:04.902800+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 114679808 unmapped: 28024832 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:02:05.902913+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 114679808 unmapped: 28024832 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:02:06.903078+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa7b1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1193664 data_alloc: 234881024 data_used: 12165120
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 114679808 unmapped: 28024832 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:02:07.903280+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634bf032800
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 21.250144958s of 21.353006363s, submitted: 31
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bf032800 session 0x5634bcfe2b40
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634bf038400
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bf038400 session 0x5634bf1da5a0
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634bf726800
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bf726800 session 0x5634bfcb30e0
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634bf032800
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bf032800 session 0x5634bf82eb40
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634bf038400
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bf038400 session 0x5634c029af00
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 115089408 unmapped: 27615232 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:02:08.903484+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 115089408 unmapped: 27615232 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa2aa000/0x0/0x4ffc00000, data 0x12fee8a/0x13c2000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:02:09.903661+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 115089408 unmapped: 27615232 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:02:10.903894+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 115089408 unmapped: 27615232 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:02:11.904020+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1237405 data_alloc: 234881024 data_used: 12165120
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 115089408 unmapped: 27615232 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:02:12.904267+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 115089408 unmapped: 27615232 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:02:13.904491+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 115089408 unmapped: 27615232 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634bfc0a800
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:02:14.904658+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bfc0a800 session 0x5634bfa2ab40
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa2aa000/0x0/0x4ffc00000, data 0x12fee8a/0x13c2000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634bfc0b800
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634c025e000
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 115171328 unmapped: 27533312 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:02:15.904831+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa286000/0x0/0x4ffc00000, data 0x1322e8a/0x13e6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 115040256 unmapped: 27664384 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:02:16.905021+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1269042 data_alloc: 234881024 data_used: 16138240
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 115777536 unmapped: 26927104 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:02:17.905216+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 115777536 unmapped: 26927104 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:02:18.905512+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa286000/0x0/0x4ffc00000, data 0x1322e8a/0x13e6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 115777536 unmapped: 26927104 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa286000/0x0/0x4ffc00000, data 0x1322e8a/0x13e6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:02:19.905686+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 115777536 unmapped: 26927104 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:02:20.905842+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 115785728 unmapped: 26918912 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:02:21.906026+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1269042 data_alloc: 234881024 data_used: 16138240
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 115785728 unmapped: 26918912 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:02:22.906167+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 115785728 unmapped: 26918912 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634c08ea000
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634c08ea000 session 0x5634bf445a40
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:02:23.906294+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634c08ea400
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634c08ea400 session 0x5634bf7bfe00
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634bf032800
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bf032800 session 0x5634bd4614a0
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634bf038400
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bf038400 session 0x5634bf029e00
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634bfc0a800
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 15.969479561s of 16.080394745s, submitted: 32
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bfc0a800 session 0x5634bfe225a0
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634c08ea000
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634c08ea000 session 0x5634c06cd680
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa286000/0x0/0x4ffc00000, data 0x1322e8a/0x13e6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634c08ea800
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634c08ea800 session 0x5634bf08a960
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634bf032800
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bf032800 session 0x5634c02e14a0
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634bf038400
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bf038400 session 0x5634c06e65a0
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 115941376 unmapped: 26763264 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:02:24.906435+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 115941376 unmapped: 26763264 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:02:25.906604+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 115941376 unmapped: 26763264 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:02:26.906809+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1303056 data_alloc: 234881024 data_used: 16138240
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 115941376 unmapped: 26763264 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:02:27.907035+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121266176 unmapped: 21438464 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:02:28.907899+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f9af9000/0x0/0x4ffc00000, data 0x1aa6e9a/0x1b6b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 120143872 unmapped: 22560768 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:02:29.908084+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 120143872 unmapped: 22560768 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:02:30.908289+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 120143872 unmapped: 22560768 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634bfc0a800
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bfc0a800 session 0x5634bfcb25a0
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:02:31.908478+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1336043 data_alloc: 234881024 data_used: 17305600
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 120160256 unmapped: 22544384 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:02:32.908642+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634c08ea000
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634c08eac00
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 120168448 unmapped: 22536192 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:02:33.908782+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f9af7000/0x0/0x4ffc00000, data 0x1ab0e9a/0x1b75000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121495552 unmapped: 21209088 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:02:34.908911+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.558697701s of 10.774977684s, submitted: 67
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121954304 unmapped: 20750336 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:02:35.909039+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121954304 unmapped: 20750336 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:02:36.909229+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1365375 data_alloc: 234881024 data_used: 21561344
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121954304 unmapped: 20750336 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f9af7000/0x0/0x4ffc00000, data 0x1ab0e9a/0x1b75000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:02:37.909384+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121954304 unmapped: 20750336 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:02:38.909603+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121954304 unmapped: 20750336 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:02:39.909735+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121954304 unmapped: 20750336 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:02:40.909876+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121954304 unmapped: 20750336 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:02:41.910013+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1365375 data_alloc: 234881024 data_used: 21561344
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121954304 unmapped: 20750336 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:02:42.910151+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121954304 unmapped: 20750336 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f9af7000/0x0/0x4ffc00000, data 0x1ab0e9a/0x1b75000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:02:43.910296+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121954304 unmapped: 20750336 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:02:44.910490+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.470090866s of 10.473713875s, submitted: 1
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122912768 unmapped: 19791872 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:02:45.910636+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 123043840 unmapped: 19660800 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:02:46.910801+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1374765 data_alloc: 234881024 data_used: 21581824
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 124116992 unmapped: 18587648 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:02:47.911461+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 124116992 unmapped: 18587648 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:02:48.911648+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 124116992 unmapped: 18587648 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f9a0f000/0x0/0x4ffc00000, data 0x1b90e9a/0x1c55000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:02:49.911780+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 124149760 unmapped: 18554880 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:02:50.911948+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 124149760 unmapped: 18554880 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:02:51.912097+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f9a0f000/0x0/0x4ffc00000, data 0x1b90e9a/0x1c55000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1382269 data_alloc: 234881024 data_used: 21577728
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 124149760 unmapped: 18554880 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:02:52.912234+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f9a0f000/0x0/0x4ffc00000, data 0x1b90e9a/0x1c55000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 124149760 unmapped: 18554880 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:02:53.912384+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 124149760 unmapped: 18554880 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:02:54.912557+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 124149760 unmapped: 18554880 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:02:55.912682+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f9a0f000/0x0/0x4ffc00000, data 0x1b90e9a/0x1c55000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 124182528 unmapped: 18522112 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:02:56.912805+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634c08ea000 session 0x5634bcbf9e00
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.591684341s of 11.705703735s, submitted: 37
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634c08eac00 session 0x5634c06b03c0
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1377601 data_alloc: 234881024 data_used: 21577728
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634c08eac00
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122281984 unmapped: 20422656 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634c08eac00 session 0x5634bf7be1e0
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:02:57.912927+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122314752 unmapped: 20389888 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:02:58.914917+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f9f1c000/0x0/0x4ffc00000, data 0x168ce8a/0x1750000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122314752 unmapped: 20389888 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:02:59.917494+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122314752 unmapped: 20389888 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:03:00.917705+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bfc0b800 session 0x5634bf4450e0
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634c025e000 session 0x5634c06e72c0
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634bf032800
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 119160832 unmapped: 23543808 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bf032800 session 0x5634bf2245a0
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:03:01.919595+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1207973 data_alloc: 234881024 data_used: 12165120
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 119144448 unmapped: 23560192 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:03:02.920917+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 119144448 unmapped: 23560192 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:03:03.922353+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 119144448 unmapped: 23560192 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:03:04.923563+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 119144448 unmapped: 23560192 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:03:05.924390+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 119144448 unmapped: 23560192 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:03:06.924840+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1207973 data_alloc: 234881024 data_used: 12165120
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 119144448 unmapped: 23560192 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:03:07.925048+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 119144448 unmapped: 23560192 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:03:08.925877+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 119144448 unmapped: 23560192 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:03:09.926483+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 119144448 unmapped: 23560192 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:03:10.926699+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 119144448 unmapped: 23560192 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:03:11.926892+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1207973 data_alloc: 234881024 data_used: 12165120
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 119144448 unmapped: 23560192 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:03:12.927235+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 119144448 unmapped: 23560192 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:03:13.927499+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 119144448 unmapped: 23560192 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:03:14.927768+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 119144448 unmapped: 23560192 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:03:15.927943+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 119144448 unmapped: 23560192 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:03:16.928165+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1207973 data_alloc: 234881024 data_used: 12165120
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 119144448 unmapped: 23560192 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:03:17.928353+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 119144448 unmapped: 23560192 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:03:18.928827+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 119144448 unmapped: 23560192 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:03:19.929125+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 119144448 unmapped: 23560192 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:03:20.929561+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 119144448 unmapped: 23560192 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:03:21.929858+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634bf038400
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bf038400 session 0x5634bcfce780
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634bf032800
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bf032800 session 0x5634bd463e00
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634bfc0b800
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bfc0b800 session 0x5634bd4632c0
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1207973 data_alloc: 234881024 data_used: 12165120
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634c025e000
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634c025e000 session 0x5634bf443c20
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634c08eac00
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 25.324642181s of 25.657859802s, submitted: 76
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [0,0,0,0,0,2])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 124928000 unmapped: 17776640 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:03:22.930030+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634c08eac00 session 0x5634be1ada40
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634bfc0a800
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bfc0a800 session 0x5634bf7c10e0
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634bfc0a800
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bfc0a800 session 0x5634bf53a3c0
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634bf032800
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bf032800 session 0x5634bf53b4a0
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634bfc0b800
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bfc0b800 session 0x5634bd031680
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 118726656 unmapped: 23977984 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:03:23.930289+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 118726656 unmapped: 23977984 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:03:24.930546+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 118726656 unmapped: 23977984 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:03:25.930753+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 118726656 unmapped: 23977984 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:03:26.930951+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1285564 data_alloc: 234881024 data_used: 12169216
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 118726656 unmapped: 23977984 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:03:27.931219+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f9a11000/0x0/0x4ffc00000, data 0x1786e9a/0x184b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:03:28.931377+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 118726656 unmapped: 23977984 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634c025e000
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634c025e000 session 0x5634bd20b680
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634c08eac00
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634c08eac00 session 0x5634bd20bc20
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:03:29.931629+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 118726656 unmapped: 23977984 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f9a11000/0x0/0x4ffc00000, data 0x1786e9a/0x184b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:03:30.931898+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 118726656 unmapped: 23977984 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634c08eac00
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634c08eac00 session 0x5634bd20b860
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634bf032800
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bf032800 session 0x5634c029b2c0
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:03:31.932035+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 118710272 unmapped: 23994368 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634bfc0a800
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634bfc0b800
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1287325 data_alloc: 234881024 data_used: 12169216
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:03:32.932230+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 118726656 unmapped: 23977984 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:03:33.932369+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122437632 unmapped: 20267008 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:03:34.932541+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122437632 unmapped: 20267008 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:03:35.932743+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122437632 unmapped: 20267008 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f9a11000/0x0/0x4ffc00000, data 0x1786e9a/0x184b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:03:36.932973+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122437632 unmapped: 20267008 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1353141 data_alloc: 234881024 data_used: 21815296
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:03:37.933145+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122437632 unmapped: 20267008 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:03:38.933447+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122437632 unmapped: 20267008 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f9a11000/0x0/0x4ffc00000, data 0x1786e9a/0x184b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:03:39.933692+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122437632 unmapped: 20267008 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f9a11000/0x0/0x4ffc00000, data 0x1786e9a/0x184b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:03:40.933956+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122437632 unmapped: 20267008 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:03:41.934357+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122437632 unmapped: 20267008 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1353141 data_alloc: 234881024 data_used: 21815296
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:03:42.934696+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122437632 unmapped: 20267008 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 20.633270264s of 20.794521332s, submitted: 35
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:03:43.934851+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 129015808 unmapped: 13688832 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:03:44.934949+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 128851968 unmapped: 13852672 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f8f68000/0x0/0x4ffc00000, data 0x2227e9a/0x22ec000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:03:45.935092+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 128991232 unmapped: 13713408 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:03:46.935346+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 128991232 unmapped: 13713408 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f8ebf000/0x0/0x4ffc00000, data 0x22d8e9a/0x239d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1457277 data_alloc: 234881024 data_used: 22888448
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:03:47.935487+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 129024000 unmapped: 13680640 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:03:48.935671+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 129024000 unmapped: 13680640 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:03:49.935780+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 129032192 unmapped: 13672448 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:03:50.935998+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 129105920 unmapped: 13598720 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f8e9b000/0x0/0x4ffc00000, data 0x22fce9a/0x23c1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:03:51.936167+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 129105920 unmapped: 13598720 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1454133 data_alloc: 234881024 data_used: 22888448
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:03:52.936298+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 129105920 unmapped: 13598720 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:03:53.936464+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 129114112 unmapped: 13590528 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:03:54.936593+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 129114112 unmapped: 13590528 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.613185883s of 11.909707069s, submitted: 129
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f8e9b000/0x0/0x4ffc00000, data 0x22fce9a/0x23c1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:03:55.936751+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 129187840 unmapped: 13516800 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:03:56.936868+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 129187840 unmapped: 13516800 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f8e94000/0x0/0x4ffc00000, data 0x2303e9a/0x23c8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1453885 data_alloc: 234881024 data_used: 22888448
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:03:57.936981+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 129187840 unmapped: 13516800 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:03:58.937215+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 129187840 unmapped: 13516800 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f8e94000/0x0/0x4ffc00000, data 0x2303e9a/0x23c8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:03:59.937473+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 129187840 unmapped: 13516800 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bfc0a800 session 0x5634bf82f0e0
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bfc0b800 session 0x5634bfcb3860
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:04:00.965445+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 129187840 unmapped: 13516800 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634c025e000
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634c025e000 session 0x5634bd20b2c0
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:04:01.966190+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122085376 unmapped: 20619264 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225505 data_alloc: 234881024 data_used: 12165120
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:04:02.966380+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122085376 unmapped: 20619264 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:04:03.966521+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122085376 unmapped: 20619264 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:04:04.966659+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122085376 unmapped: 20619264 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:04:05.966832+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122085376 unmapped: 20619264 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:04:06.966967+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122085376 unmapped: 20619264 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225505 data_alloc: 234881024 data_used: 12165120
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:04:07.967254+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122085376 unmapped: 20619264 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:04:08.967384+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122085376 unmapped: 20619264 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:04:09.967547+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122085376 unmapped: 20619264 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:04:10.967696+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122085376 unmapped: 20619264 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:04:11.967815+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122085376 unmapped: 20619264 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225505 data_alloc: 234881024 data_used: 12165120
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:04:12.967953+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122085376 unmapped: 20619264 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:04:13.968097+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122085376 unmapped: 20619264 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:04:14.968270+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122085376 unmapped: 20619264 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:04:15.969081+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122085376 unmapped: 20619264 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:04:16.969265+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122085376 unmapped: 20619264 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225505 data_alloc: 234881024 data_used: 12165120
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:04:17.969415+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122085376 unmapped: 20619264 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:04:18.969569+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122085376 unmapped: 20619264 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:04:19.969708+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122085376 unmapped: 20619264 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:04:20.969843+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122085376 unmapped: 20619264 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:04:21.970017+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122085376 unmapped: 20619264 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225505 data_alloc: 234881024 data_used: 12165120
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:04:22.970145+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122085376 unmapped: 20619264 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:04:23.970327+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122085376 unmapped: 20619264 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:04:24.970527+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122085376 unmapped: 20619264 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:04:25.971157+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122085376 unmapped: 20619264 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:04:26.971336+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122085376 unmapped: 20619264 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225505 data_alloc: 234881024 data_used: 12165120
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:04:27.971464+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122085376 unmapped: 20619264 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:04:28.971632+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122085376 unmapped: 20619264 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:04:29.971773+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122093568 unmapped: 20611072 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:04:30.971994+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122093568 unmapped: 20611072 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:04:31.972205+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122093568 unmapped: 20611072 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225505 data_alloc: 234881024 data_used: 12165120
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:04:32.972379+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122093568 unmapped: 20611072 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:04:33.972575+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122093568 unmapped: 20611072 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:04:34.972732+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122093568 unmapped: 20611072 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:04:35.973006+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122093568 unmapped: 20611072 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:04:36.973172+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122093568 unmapped: 20611072 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225505 data_alloc: 234881024 data_used: 12165120
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:04:37.973301+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122101760 unmapped: 20602880 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:04:38.974007+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122101760 unmapped: 20602880 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:04:39.974156+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122101760 unmapped: 20602880 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:04:40.974385+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122101760 unmapped: 20602880 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:04:41.974583+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122109952 unmapped: 20594688 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225505 data_alloc: 234881024 data_used: 12165120
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:04:42.974693+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122109952 unmapped: 20594688 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:04:43.974803+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122109952 unmapped: 20594688 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:04:44.974925+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122109952 unmapped: 20594688 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:04:45.975089+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122109952 unmapped: 20594688 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:04:46.975332+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122109952 unmapped: 20594688 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225505 data_alloc: 234881024 data_used: 12165120
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:04:47.975459+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122109952 unmapped: 20594688 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:04:48.975627+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122109952 unmapped: 20594688 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:04:49.975823+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122109952 unmapped: 20594688 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:04:50.975980+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122118144 unmapped: 20586496 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:04:51.976127+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122118144 unmapped: 20586496 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225505 data_alloc: 234881024 data_used: 12165120
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:04:52.976252+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122118144 unmapped: 20586496 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:04:53.976378+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122118144 unmapped: 20586496 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:04:54.976504+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122118144 unmapped: 20586496 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:04:55.976667+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122118144 unmapped: 20586496 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:04:56.976863+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122118144 unmapped: 20586496 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225505 data_alloc: 234881024 data_used: 12165120
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:04:57.977009+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122118144 unmapped: 20586496 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:04:58.977160+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122118144 unmapped: 20586496 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:04:59.977297+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122118144 unmapped: 20586496 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:05:00.977481+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122118144 unmapped: 20586496 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:05:01.977627+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122134528 unmapped: 20570112 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225505 data_alloc: 234881024 data_used: 12165120
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:05:02.977737+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122134528 unmapped: 20570112 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:05:03.977844+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122134528 unmapped: 20570112 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:05:04.978047+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122134528 unmapped: 20570112 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:05:05.978217+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122134528 unmapped: 20570112 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:05:06.978334+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122134528 unmapped: 20570112 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225505 data_alloc: 234881024 data_used: 12165120
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:05:07.978477+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122134528 unmapped: 20570112 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:05:08.979464+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122134528 unmapped: 20570112 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:05:09.979584+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122134528 unmapped: 20570112 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:05:10.980043+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122134528 unmapped: 20570112 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:05:11.980190+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122134528 unmapped: 20570112 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225505 data_alloc: 234881024 data_used: 12165120
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:05:12.980302+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122134528 unmapped: 20570112 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:05:13.980578+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122142720 unmapped: 20561920 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:05:14.980962+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122142720 unmapped: 20561920 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:05:15.981216+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122142720 unmapped: 20561920 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:05:16.981363+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122142720 unmapped: 20561920 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225505 data_alloc: 234881024 data_used: 12165120
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:05:17.981500+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122142720 unmapped: 20561920 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:05:18.981788+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122142720 unmapped: 20561920 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:05:19.981909+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122142720 unmapped: 20561920 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:05:20.982027+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122142720 unmapped: 20561920 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:05:21.982139+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122150912 unmapped: 20553728 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:56 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:56 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225505 data_alloc: 234881024 data_used: 12165120
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:05:22.982264+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122134528 unmapped: 20570112 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: do_command 'config diff' '{prefix=config diff}'
Nov 24 10:05:56 compute-1 ceph-osd[77497]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:05:23.982367+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: do_command 'config show' '{prefix=config show}'
Nov 24 10:05:56 compute-1 ceph-osd[77497]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Nov 24 10:05:56 compute-1 ceph-osd[77497]: do_command 'counter dump' '{prefix=counter dump}'
Nov 24 10:05:56 compute-1 ceph-osd[77497]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Nov 24 10:05:56 compute-1 ceph-osd[77497]: do_command 'counter schema' '{prefix=counter schema}'
Nov 24 10:05:56 compute-1 ceph-osd[77497]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
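The do_command entries above are admin-socket requests, the same ones 'ceph daemon <name> <command>' issues against the daemon's .asok file ('config diff', 'config show', 'counter dump', 'counter schema'). They can be reproduced from the host; a sketch, assuming the ceph CLI is on PATH and the osd.1 admin socket is local to this node:

    # Issue the same admin-socket commands seen in the do_command entries.
    import json
    import subprocess

    def osd_asok(*cmd):
        # Equivalent to: ceph daemon osd.1 <cmd...>
        out = subprocess.run(["ceph", "daemon", "osd.1", *cmd],
                             capture_output=True, text=True, check=True)
        return json.loads(out.stdout)

    diff = osd_asok("config", "diff")        # do_command 'config diff'
    schema = osd_asok("counter", "schema")   # do_command 'counter schema'
    print(list(diff)[:3], list(schema)[:3])  # a few top-level keys of each
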
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122167296 unmapped: 20537344 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:05:24.982827+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121839616 unmapped: 20865024 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:05:56 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:05:25.982990+0000)
Nov 24 10:05:56 compute-1 ceph-osd[77497]: do_command 'log dump' '{prefix=log dump}'
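Note the timestamps in the long ceph-osd burst above: every journal line is stamped 10:05:56, yet the embedded _check_auth_rotating expiries advance exactly one second per monclient tick, from 10:04:54 to 10:05:25. That pattern is consistent with the daemon's in-memory debug log being flushed in one go, which is what the 'log dump' admin command recorded just above does. Verifying the cadence from the embedded timestamps (a sketch over three of the expiry strings shown):

    # The journald stamps are flush-time; the real cadence lives in the
    # embedded expiry timestamps of the _check_auth_rotating lines.
    from datetime import datetime
    stamps = ["2025-11-24T10:04:54.976504+0000",
              "2025-11-24T10:04:55.976667+0000",
              "2025-11-24T10:05:25.982990+0000"]
    ts = [datetime.strptime(s, "%Y-%m-%dT%H:%M:%S.%f%z") for s in stamps]
    print((ts[1] - ts[0]).total_seconds())   # ~1.0 s per monclient tick
    print((ts[2] - ts[0]).total_seconds())   # ~31 s of buffered entries
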
Nov 24 10:05:56 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr module ls"} v 0)
Nov 24 10:05:56 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2847382315' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Nov 24 10:05:56 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:05:56 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:05:56 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:05:56.957 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
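The radosgw beast lines follow a fixed access-log shape: request pointer, client address, user, bracketed timestamp, quoted request line, status, byte count, then latency. The anonymous HEAD / probes arriving every couple of seconds from 192.168.122.100 and .102 look like load-balancer health checks rather than real S3 traffic. A rough parser fitted to the lines shown here (the regex is an assumption matched to these samples, not an official format spec):

    import re
    BEAST = re.compile(
        r'beast: \S+: (?P<addr>\S+) - (?P<user>\S+) \[(?P<ts>[^\]]+)\] '
        r'"(?P<req>[^"]+)" (?P<status>\d+) (?P<bytes>\d+)'
        r'.*latency=(?P<lat>[\d.]+)s')
    line = ('beast: 0x7fa9789055d0: 192.168.122.100 - anonymous '
            '[24/Nov/2025:10:05:56.957 +0000] "HEAD / HTTP/1.0" 200 0 - - - '
            'latency=0.000000000s')
    m = BEAST.search(line)
    print(m.group("addr"), m.group("req"), m.group("status"), m.group("lat"))
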
Nov 24 10:05:57 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr services"} v 0)
Nov 24 10:05:57 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2915704996' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 24 10:05:57 compute-1 ceph-mon[80009]: from='client.25438 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:05:57 compute-1 ceph-mon[80009]: from='client.26825 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:05:57 compute-1 ceph-mon[80009]: from='client.17232 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:05:57 compute-1 ceph-mon[80009]: from='client.? 192.168.122.101:0/2180658346' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Nov 24 10:05:57 compute-1 ceph-mon[80009]: from='client.? 192.168.122.102:0/2333924499' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Nov 24 10:05:57 compute-1 ceph-mon[80009]: from='client.25453 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:05:57 compute-1 ceph-mon[80009]: from='client.26837 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:05:57 compute-1 ceph-mon[80009]: from='client.? 192.168.122.100:0/2716126495' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Nov 24 10:05:57 compute-1 ceph-mon[80009]: from='client.? 192.168.122.101:0/419866527' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Nov 24 10:05:57 compute-1 ceph-mon[80009]: from='client.? 192.168.122.102:0/2559918353' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Nov 24 10:05:57 compute-1 ceph-mon[80009]: from='client.26849 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:05:57 compute-1 ceph-mon[80009]: from='client.? ' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Nov 24 10:05:57 compute-1 ceph-mon[80009]: from='client.? 192.168.122.102:0/45030159' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Nov 24 10:05:57 compute-1 ceph-mon[80009]: pgmap v1176: 353 pgs: 353 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
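pgmap lines like v1176 above are the monitor's periodic cluster digest: all 353 PGs active+clean, 41 MiB of data on 60 GiB of raw capacity. Pulling the fields apart (a sketch fitted to this line's layout):

    import re
    line = ("pgmap v1176: 353 pgs: 353 active+clean; 41 MiB data, "
            "307 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s")
    m = re.match(r"pgmap v(\d+): (\d+) pgs: (\d+) ([\w+]+); ([^;]+); (.*)",
                 line)
    version, pgs, count, states, usage, rates = m.groups()
    print(pgs, states, "|", usage)  # 353 active+clean | 41 MiB data, ...
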
Nov 24 10:05:57 compute-1 ceph-mon[80009]: from='client.? 192.168.122.100:0/1423102505' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Nov 24 10:05:57 compute-1 ceph-mon[80009]: from='client.? 192.168.122.100:0/220911383' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Nov 24 10:05:57 compute-1 ceph-mon[80009]: from='client.? 192.168.122.101:0/2847382315' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Nov 24 10:05:57 compute-1 ceph-mon[80009]: from='client.? 192.168.122.102:0/317736950' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Nov 24 10:05:57 compute-1 ceph-mon[80009]: from='client.? 192.168.122.102:0/533231242' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Nov 24 10:05:57 compute-1 ceph-mon[80009]: from='client.? 192.168.122.100:0/926336242' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Nov 24 10:05:57 compute-1 ceph-mon[80009]: from='client.? 192.168.122.100:0/3854978652' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Nov 24 10:05:57 compute-1 ceph-mon[80009]: from='client.? 192.168.122.101:0/2915704996' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 24 10:05:57 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:05:57 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:05:57 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:05:57.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:05:57 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr versions"} v 0)
Nov 24 10:05:57 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2950687480' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Nov 24 10:05:58 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon stat"} v 0)
Nov 24 10:05:58 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2888619743' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Nov 24 10:05:58 compute-1 ceph-mon[80009]: from='client.26867 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:05:58 compute-1 ceph-mon[80009]: from='client.? 192.168.122.102:0/3256066392' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 24 10:05:58 compute-1 ceph-mon[80009]: from='client.26879 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:05:58 compute-1 ceph-mon[80009]: from='client.25504 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:05:58 compute-1 ceph-mon[80009]: from='client.? 192.168.122.100:0/1363121793' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 24 10:05:58 compute-1 ceph-mon[80009]: from='client.17325 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:05:58 compute-1 ceph-mon[80009]: from='client.? 192.168.122.101:0/2950687480' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Nov 24 10:05:58 compute-1 ceph-mon[80009]: from='client.? 192.168.122.102:0/239825461' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Nov 24 10:05:58 compute-1 ceph-mon[80009]: from='client.? 192.168.122.100:0/1019638436' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Nov 24 10:05:58 compute-1 ceph-mon[80009]: from='client.? 192.168.122.100:0/3163920426' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Nov 24 10:05:58 compute-1 ceph-mon[80009]: from='client.? 192.168.122.102:0/3209223688' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Nov 24 10:05:58 compute-1 ceph-mon[80009]: from='client.? 192.168.122.101:0/2888619743' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Nov 24 10:05:58 compute-1 ceph-mon[80009]: from='client.? 192.168.122.102:0/3916645189' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Nov 24 10:05:58 compute-1 crontab[246638]: (root) LIST (root)
Nov 24 10:05:58 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:05:58 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:05:58 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:05:58.959 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:05:59 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "node ls"} v 0)
Nov 24 10:05:59 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1381658976' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Nov 24 10:05:59 compute-1 ceph-mon[80009]: from='client.26894 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:05:59 compute-1 ceph-mon[80009]: from='client.26915 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 10:05:59 compute-1 ceph-mon[80009]: from='client.? 192.168.122.100:0/3328825171' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Nov 24 10:05:59 compute-1 ceph-mon[80009]: from='client.? 192.168.122.100:0/1191403775' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Nov 24 10:05:59 compute-1 ceph-mon[80009]: from='client.? 192.168.122.102:0/3784852311' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Nov 24 10:05:59 compute-1 ceph-mon[80009]: pgmap v1177: 353 pgs: 353 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:05:59 compute-1 ceph-mon[80009]: from='client.? 192.168.122.100:0/1380113186' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Nov 24 10:05:59 compute-1 ceph-mon[80009]: from='client.? 192.168.122.102:0/3134986875' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Nov 24 10:05:59 compute-1 ceph-mon[80009]: from='client.? 192.168.122.101:0/1381658976' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Nov 24 10:05:59 compute-1 ceph-mon[80009]: from='client.? 192.168.122.100:0/2024092312' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Nov 24 10:05:59 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:05:59 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd crush class ls"} v 0)
Nov 24 10:05:59 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2387178002' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Nov 24 10:05:59 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:05:59 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:05:59 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:05:59.556 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:05:59 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd crush dump"} v 0)
Nov 24 10:05:59 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2839996989' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Nov 24 10:05:59 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "log last", "channel": "cephadm", "format": "json-pretty"} v 0)
Nov 24 10:05:59 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/632773887' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Nov 24 10:06:00 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr dump", "format": "json-pretty"} v 0)
Nov 24 10:06:00 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3737416269' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Nov 24 10:06:00 compute-1 ceph-mon[80009]: from='client.26939 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 10:06:00 compute-1 ceph-mon[80009]: from='client.25543 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:06:00 compute-1 ceph-mon[80009]: from='client.17385 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:06:00 compute-1 ceph-mon[80009]: from='client.26948 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 10:06:00 compute-1 ceph-mon[80009]: from='client.25558 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:06:00 compute-1 ceph-mon[80009]: from='client.17406 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:06:00 compute-1 ceph-mon[80009]: from='client.? 192.168.122.102:0/953032884' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Nov 24 10:06:00 compute-1 ceph-mon[80009]: from='client.26969 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 10:06:00 compute-1 ceph-mon[80009]: from='client.25576 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:06:00 compute-1 ceph-mon[80009]: from='client.17421 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:06:00 compute-1 ceph-mon[80009]: from='client.? 192.168.122.101:0/2387178002' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Nov 24 10:06:00 compute-1 ceph-mon[80009]: from='client.? 192.168.122.100:0/2592132238' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Nov 24 10:06:00 compute-1 ceph-mon[80009]: from='client.? 192.168.122.102:0/1267946720' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Nov 24 10:06:00 compute-1 ceph-mon[80009]: from='client.? 192.168.122.101:0/2839996989' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Nov 24 10:06:00 compute-1 ceph-mon[80009]: from='client.? 192.168.122.101:0/632773887' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Nov 24 10:06:00 compute-1 ceph-mon[80009]: from='client.? 192.168.122.100:0/4093387687' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 24 10:06:00 compute-1 ceph-mon[80009]: from='client.? 192.168.122.102:0/2372817951' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 24 10:06:00 compute-1 ceph-mon[80009]: from='client.? 192.168.122.101:0/3402713778' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Nov 24 10:06:00 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 10:06:00 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 10:06:00 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd crush show-tunables"} v 0)
Nov 24 10:06:00 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2953229075' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Nov 24 10:06:00 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr metadata", "format": "json-pretty"} v 0)
Nov 24 10:06:00 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1925814334' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Nov 24 10:06:00 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:06:00 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:06:00 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:06:00.961 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:06:01 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd crush tree", "show_shadow": true} v 0)
Nov 24 10:06:01 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1699553728' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Nov 24 10:06:01 compute-1 nova_compute[230010]: 2025-11-24 10:06:01.236 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:06:01 compute-1 nova_compute[230010]: 2025-11-24 10:06:01.301 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:06:01 compute-1 podman[247011]: 2025-11-24 10:06:01.35169983 +0000 UTC m=+0.082361245 container health_status 6794d9d2f16f865643977633ea2bfec0506af6c4f15262c278b71a4c0754ea0b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118)
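The podman record above is a scheduled container health probe: health_status=healthy, with the check defined in config_data as 'healthcheck': {'test': '/openstack/healthcheck'}. The same probe can be run on demand; exit code 0 means healthy (a sketch, assuming podman and the ovn_metadata_agent container named in the line above):

    # Trigger the container's configured healthcheck once, outside the timer.
    import subprocess
    rc = subprocess.run(["podman", "healthcheck", "run",
                         "ovn_metadata_agent"]).returncode
    print("healthy" if rc == 0 else f"unhealthy (rc={rc})")
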
Nov 24 10:06:01 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr module ls", "format": "json-pretty"} v 0)
Nov 24 10:06:01 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2545531661' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Nov 24 10:06:01 compute-1 ceph-mon[80009]: from='client.25585 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:06:01 compute-1 ceph-mon[80009]: from='client.17442 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:06:01 compute-1 ceph-mon[80009]: from='client.25600 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:06:01 compute-1 ceph-mon[80009]: from='client.17463 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:06:01 compute-1 ceph-mon[80009]: from='client.? 192.168.122.101:0/3737416269' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Nov 24 10:06:01 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 10:06:01 compute-1 ceph-mon[80009]: from='client.? 192.168.122.100:0/102652043' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Nov 24 10:06:01 compute-1 ceph-mon[80009]: from='client.? 192.168.122.102:0/4273539614' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Nov 24 10:06:01 compute-1 ceph-mon[80009]: from='client.? 192.168.122.101:0/2953229075' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Nov 24 10:06:01 compute-1 ceph-mon[80009]: from='client.? 192.168.122.101:0/1925814334' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Nov 24 10:06:01 compute-1 ceph-mon[80009]: from='client.? 192.168.122.100:0/3561731369' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Nov 24 10:06:01 compute-1 ceph-mon[80009]: from='client.? 192.168.122.102:0/231459924' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Nov 24 10:06:01 compute-1 ceph-mon[80009]: from='client.? 192.168.122.101:0/1699553728' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Nov 24 10:06:01 compute-1 ceph-mon[80009]: from='client.? 192.168.122.101:0/2545531661' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Nov 24 10:06:01 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:06:01 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:06:01 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:06:01.559 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:06:01 compute-1 systemd[1]: virtsecretd.service: Deactivated successfully.
Nov 24 10:06:01 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd erasure-code-profile ls"} v 0)
Nov 24 10:06:01 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3919931828' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Nov 24 10:06:01 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr services", "format": "json-pretty"} v 0)
Nov 24 10:06:01 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4204316792' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Nov 24 10:06:01 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Nov 24 10:06:01 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1906280520' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 24 10:06:01 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Nov 24 10:06:01 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1906280520' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
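The two entity='client.openstack' commands above ('df' and 'osd pool get-quota' on the volumes pool) are the JSON mon commands an OpenStack client sends when polling pool capacity. They can be reproduced with the python-rados binding; a sketch, where the conffile path and client name are assumptions chosen to match the entity in the audit line:

    import json
    import rados

    # Connect as the same entity seen in the audit log above.
    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf",
                          name="client.openstack")
    cluster.connect()
    try:
        for cmd in ({"prefix": "df", "format": "json"},
                    {"prefix": "osd pool get-quota",
                     "pool": "volumes", "format": "json"}):
            ret, out, errs = cluster.mon_command(json.dumps(cmd), b"")
            print(cmd["prefix"], "->", ret, len(out), "bytes")
    finally:
        cluster.shutdown()
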
Nov 24 10:06:02 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd metadata"} v 0)
Nov 24 10:06:02 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3987749054' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Nov 24 10:06:02 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr stat", "format": "json-pretty"} v 0)
Nov 24 10:06:02 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3328823182' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Nov 24 10:06:02 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd utilization"} v 0)
Nov 24 10:06:02 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4061091675' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Nov 24 10:06:02 compute-1 systemd[1]: Starting Hostname Service...
Nov 24 10:06:02 compute-1 ceph-mon[80009]: pgmap v1178: 353 pgs: 353 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:06:02 compute-1 ceph-mon[80009]: from='client.25627 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:06:02 compute-1 ceph-mon[80009]: from='client.17478 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:06:02 compute-1 ceph-mon[80009]: from='client.25639 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:06:02 compute-1 ceph-mon[80009]: from='client.17502 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:06:02 compute-1 ceph-mon[80009]: from='client.25657 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 10:06:02 compute-1 ceph-mon[80009]: from='client.17529 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 10:06:02 compute-1 ceph-mon[80009]: from='client.? 192.168.122.101:0/3919931828' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Nov 24 10:06:02 compute-1 ceph-mon[80009]: from='client.? 192.168.122.101:0/4204316792' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Nov 24 10:06:02 compute-1 ceph-mon[80009]: from='client.? 192.168.122.10:0/1906280520' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 24 10:06:02 compute-1 ceph-mon[80009]: from='client.? 192.168.122.10:0/1906280520' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 24 10:06:02 compute-1 ceph-mon[80009]: from='client.? 192.168.122.101:0/3987749054' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Nov 24 10:06:02 compute-1 ceph-mon[80009]: from='client.? 192.168.122.100:0/2638530521' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Nov 24 10:06:02 compute-1 ceph-mon[80009]: from='client.? 192.168.122.102:0/1758111733' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Nov 24 10:06:02 compute-1 ceph-mon[80009]: from='client.? 192.168.122.101:0/3328823182' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Nov 24 10:06:02 compute-1 ceph-mon[80009]: from='client.? 192.168.122.101:0/4061091675' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Nov 24 10:06:02 compute-1 ceph-mon[80009]: from='client.? 192.168.122.100:0/3506976665' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Nov 24 10:06:02 compute-1 ceph-mon[80009]: from='client.? 192.168.122.102:0/2769527818' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Nov 24 10:06:02 compute-1 systemd[1]: Started Hostname Service.
Nov 24 10:06:02 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr versions", "format": "json-pretty"} v 0)
Nov 24 10:06:02 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/811620630' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Nov 24 10:06:02 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:06:02 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:06:02 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:06:02.964 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:06:03 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:06:03 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:06:03 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:06:03.562 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:06:03 compute-1 ceph-mon[80009]: from='client.25675 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 10:06:03 compute-1 ceph-mon[80009]: from='client.17544 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 10:06:03 compute-1 ceph-mon[80009]: from='client.25702 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 10:06:03 compute-1 ceph-mon[80009]: from='client.17574 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 10:06:03 compute-1 ceph-mon[80009]: pgmap v1179: 353 pgs: 353 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:06:03 compute-1 ceph-mon[80009]: from='client.? 192.168.122.101:0/811620630' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Nov 24 10:06:03 compute-1 ceph-mon[80009]: from='client.? 192.168.122.100:0/1472469966' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Nov 24 10:06:03 compute-1 ceph-mon[80009]: from='client.? 192.168.122.102:0/4187827911' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Nov 24 10:06:03 compute-1 ceph-mon[80009]: from='client.? 192.168.122.102:0/2337431009' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Nov 24 10:06:03 compute-1 ceph-mon[80009]: from='client.? 192.168.122.100:0/2529875961' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Nov 24 10:06:03 compute-1 ceph-mon[80009]: from='client.? 192.168.122.100:0/4255283993' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Nov 24 10:06:03 compute-1 ceph-mon[80009]: from='client.? 192.168.122.102:0/1514098715' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Nov 24 10:06:03 compute-1 ceph-mon[80009]: from='client.? 192.168.122.102:0/1367571744' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Nov 24 10:06:04 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "quorum_status"} v 0)
Nov 24 10:06:04 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4110595336' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Nov 24 10:06:04 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:06:04 compute-1 ceph-mon[80009]: from='client.25714 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 10:06:04 compute-1 ceph-mon[80009]: from='client.17598 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 10:06:04 compute-1 ceph-mon[80009]: from='client.27125 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:06:04 compute-1 ceph-mon[80009]: from='client.27134 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 10:06:04 compute-1 ceph-mon[80009]: from='client.27143 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:06:04 compute-1 ceph-mon[80009]: from='client.27149 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 10:06:04 compute-1 ceph-mon[80009]: from='client.? 192.168.122.100:0/1216611343' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Nov 24 10:06:04 compute-1 ceph-mon[80009]: from='client.? 192.168.122.100:0/1445369186' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Nov 24 10:06:04 compute-1 ceph-mon[80009]: from='client.? 192.168.122.102:0/1449981449' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Nov 24 10:06:04 compute-1 ceph-mon[80009]: from='client.? 192.168.122.100:0/1880844870' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Nov 24 10:06:04 compute-1 ceph-mon[80009]: from='client.27170 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 10:06:04 compute-1 ceph-mon[80009]: from='client.? 192.168.122.102:0/108309168' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Nov 24 10:06:04 compute-1 ceph-mon[80009]: from='client.? 192.168.122.100:0/3029797191' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Nov 24 10:06:04 compute-1 ceph-mon[80009]: from='client.? 192.168.122.102:0/3220301944' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Nov 24 10:06:04 compute-1 ceph-mon[80009]: from='client.? 192.168.122.101:0/4110595336' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Nov 24 10:06:04 compute-1 ceph-mon[80009]: from='client.? 192.168.122.100:0/1577620300' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Nov 24 10:06:04 compute-1 ceph-mon[80009]: from='client.27185 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 10:06:04 compute-1 ceph-mon[80009]: from='client.? 192.168.122.100:0/2083374271' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Nov 24 10:06:04 compute-1 ceph-mon[80009]: from='client.? 192.168.122.102:0/2072392436' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Nov 24 10:06:04 compute-1 ceph-mon[80009]: pgmap v1180: 353 pgs: 353 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:06:04 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "versions"} v 0)
Nov 24 10:06:04 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2200727252' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Nov 24 10:06:04 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:06:04 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:06:04 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:06:04.966 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:06:05 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "health", "detail": "detail", "format": "json-pretty"} v 0)
Nov 24 10:06:05 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/812305037' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Nov 24 10:06:05 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:06:05 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:06:05 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:06:05.565 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:06:05 compute-1 ceph-mon[80009]: from='client.? 192.168.122.102:0/1998033834' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Nov 24 10:06:05 compute-1 ceph-mon[80009]: from='client.? 192.168.122.100:0/3410044772' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Nov 24 10:06:05 compute-1 ceph-mon[80009]: from='client.? 192.168.122.101:0/2200727252' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Nov 24 10:06:05 compute-1 ceph-mon[80009]: from='client.27200 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 10:06:05 compute-1 ceph-mon[80009]: from='client.? 192.168.122.100:0/2929018559' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Nov 24 10:06:05 compute-1 ceph-mon[80009]: from='client.? 192.168.122.102:0/3321027459' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Nov 24 10:06:05 compute-1 ceph-mon[80009]: from='client.? 192.168.122.102:0/2931140467' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Nov 24 10:06:05 compute-1 ceph-mon[80009]: from='client.? 192.168.122.100:0/1668517519' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Nov 24 10:06:05 compute-1 ceph-mon[80009]: from='client.27218 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 10:06:05 compute-1 ceph-mon[80009]: from='client.? 192.168.122.101:0/812305037' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Nov 24 10:06:05 compute-1 ceph-mon[80009]: from='client.? 192.168.122.100:0/3166199330' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Nov 24 10:06:05 compute-1 ceph-mon[80009]: from='client.? 192.168.122.102:0/1681524618' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Nov 24 10:06:05 compute-1 ceph-mon[80009]: from='client.? 192.168.122.102:0/348803258' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Nov 24 10:06:05 compute-1 ceph-mon[80009]: from='client.17757 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:06:05 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd tree", "format": "json-pretty"} v 0)
Nov 24 10:06:05 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1332274527' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Nov 24 10:06:06 compute-1 nova_compute[230010]: 2025-11-24 10:06:06.238 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:06:06 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Nov 24 10:06:06 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Nov 24 10:06:06 compute-1 nova_compute[230010]: 2025-11-24 10:06:06.304 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:06:06 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config dump"} v 0)
Nov 24 10:06:06 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2999518195' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Nov 24 10:06:06 compute-1 ceph-mon[80009]: from='client.27233 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 10:06:06 compute-1 ceph-mon[80009]: from='client.? 192.168.122.101:0/1332274527' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Nov 24 10:06:06 compute-1 ceph-mon[80009]: from='client.? 192.168.122.100:0/426060483' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Nov 24 10:06:06 compute-1 ceph-mon[80009]: from='client.? 192.168.122.102:0/2992184342' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Nov 24 10:06:06 compute-1 ceph-mon[80009]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Nov 24 10:06:06 compute-1 ceph-mon[80009]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Nov 24 10:06:06 compute-1 ceph-mon[80009]: from='client.25819 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:06:06 compute-1 ceph-mon[80009]: from='client.17778 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:06:06 compute-1 ceph-mon[80009]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Nov 24 10:06:06 compute-1 ceph-mon[80009]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Nov 24 10:06:06 compute-1 ceph-mon[80009]: from='client.17793 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 10:06:06 compute-1 ceph-mon[80009]: from='client.25828 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 10:06:06 compute-1 ceph-mon[80009]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Nov 24 10:06:06 compute-1 ceph-mon[80009]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Nov 24 10:06:06 compute-1 ceph-mon[80009]: from='client.25840 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:06:06 compute-1 ceph-mon[80009]: from='client.17817 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 10:06:06 compute-1 ceph-mon[80009]: from='client.25855 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 10:06:06 compute-1 ceph-mon[80009]: pgmap v1181: 353 pgs: 353 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:06:06 compute-1 ceph-mon[80009]: from='client.? 192.168.122.101:0/2999518195' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Nov 24 10:06:06 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:06:06 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:06:06 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:06:06.969 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:06:07 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:06:07 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:06:07 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:06:07.568 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:06:07 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "detail": "detail"} v 0)
Nov 24 10:06:07 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1880514736' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Nov 24 10:06:07 compute-1 ceph-mon[80009]: from='client.? 192.168.122.100:0/1003062013' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Nov 24 10:06:07 compute-1 ceph-mon[80009]: from='client.17832 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 10:06:07 compute-1 ceph-mon[80009]: from='client.25870 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 10:06:07 compute-1 ceph-mon[80009]: from='client.27308 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:06:07 compute-1 ceph-mon[80009]: from='client.? 192.168.122.100:0/1380896036' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Nov 24 10:06:07 compute-1 ceph-mon[80009]: from='client.? 192.168.122.102:0/218252077' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Nov 24 10:06:07 compute-1 ceph-mon[80009]: from='client.27323 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 10:06:07 compute-1 ceph-mon[80009]: from='client.25888 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 10:06:07 compute-1 ceph-mon[80009]: from='client.? 192.168.122.101:0/1880514736' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Nov 24 10:06:07 compute-1 ceph-mon[80009]: from='client.? 192.168.122.100:0/335999762' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Nov 24 10:06:08 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df"} v 0)
Nov 24 10:06:08 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2982781899' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Nov 24 10:06:08 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Nov 24 10:06:08 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Nov 24 10:06:08 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "fs dump"} v 0)
Nov 24 10:06:08 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1811800393' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
Nov 24 10:06:08 compute-1 ceph-mon[80009]: from='client.17874 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 10:06:08 compute-1 ceph-mon[80009]: from='client.? 192.168.122.102:0/2545381563' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Nov 24 10:06:08 compute-1 ceph-mon[80009]: from='client.25900 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 10:06:08 compute-1 ceph-mon[80009]: from='client.? 192.168.122.100:0/293589835' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Nov 24 10:06:08 compute-1 ceph-mon[80009]: from='client.? 192.168.122.101:0/2982781899' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Nov 24 10:06:08 compute-1 ceph-mon[80009]: from='client.17892 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 10:06:08 compute-1 ceph-mon[80009]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Nov 24 10:06:08 compute-1 ceph-mon[80009]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Nov 24 10:06:08 compute-1 ceph-mon[80009]: from='client.25912 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 10:06:08 compute-1 ceph-mon[80009]: from='client.? 192.168.122.102:0/3641594844' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Nov 24 10:06:08 compute-1 ceph-mon[80009]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Nov 24 10:06:08 compute-1 ceph-mon[80009]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Nov 24 10:06:08 compute-1 ceph-mon[80009]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Nov 24 10:06:08 compute-1 ceph-mon[80009]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Nov 24 10:06:08 compute-1 ceph-mon[80009]: from='client.? 192.168.122.101:0/1811800393' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
Nov 24 10:06:08 compute-1 ceph-mon[80009]: from='client.17913 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 10:06:08 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:06:08 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:06:08 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:06:08.971 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:06:08 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "fs ls"} v 0)
Nov 24 10:06:08 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2080032816' entity='client.admin' cmd=[{"prefix": "fs ls"}]: dispatch
Nov 24 10:06:09 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Nov 24 10:06:09 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Nov 24 10:06:09 compute-1 sudo[248046]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 10:06:09 compute-1 sudo[248046]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 10:06:09 compute-1 sudo[248046]: pam_unix(sudo:session): session closed for user root
Nov 24 10:06:09 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:06:09 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:06:09 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:06:09 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:06:09.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:06:09 compute-1 ceph-mon[80009]: pgmap v1182: 353 pgs: 353 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:06:09 compute-1 ceph-mon[80009]: from='client.25930 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 10:06:09 compute-1 ceph-mon[80009]: from='client.? 192.168.122.102:0/2699018512' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Nov 24 10:06:09 compute-1 ceph-mon[80009]: from='client.? 192.168.122.100:0/2892553981' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Nov 24 10:06:09 compute-1 ceph-mon[80009]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Nov 24 10:06:09 compute-1 ceph-mon[80009]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Nov 24 10:06:09 compute-1 ceph-mon[80009]: from='client.? 192.168.122.101:0/2080032816' entity='client.admin' cmd=[{"prefix": "fs ls"}]: dispatch
Nov 24 10:06:09 compute-1 ceph-mon[80009]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Nov 24 10:06:09 compute-1 ceph-mon[80009]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Nov 24 10:06:09 compute-1 ceph-mon[80009]: from='client.17949 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:06:09 compute-1 ceph-mon[80009]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Nov 24 10:06:09 compute-1 ceph-mon[80009]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Nov 24 10:06:09 compute-1 ceph-mon[80009]: from='client.27389 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:06:09 compute-1 ceph-mon[80009]: from='client.? 192.168.122.102:0/501692901' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Nov 24 10:06:09 compute-1 ceph-mon[80009]: from='client.? 192.168.122.100:0/2699081811' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Nov 24 10:06:09 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mds stat"} v 0)
Nov 24 10:06:09 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2135132105' entity='client.admin' cmd=[{"prefix": "mds stat"}]: dispatch
Nov 24 10:06:10 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump"} v 0)
Nov 24 10:06:10 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3311372601' entity='client.admin' cmd=[{"prefix": "mon dump"}]: dispatch
Nov 24 10:06:10 compute-1 ceph-mon[80009]: from='client.? 192.168.122.101:0/2135132105' entity='client.admin' cmd=[{"prefix": "mds stat"}]: dispatch
Nov 24 10:06:10 compute-1 ceph-mon[80009]: from='client.25978 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:06:10 compute-1 ceph-mon[80009]: from='client.? 192.168.122.100:0/3185763502' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Nov 24 10:06:10 compute-1 ceph-mon[80009]: from='client.? 192.168.122.101:0/3311372601' entity='client.admin' cmd=[{"prefix": "mon dump"}]: dispatch
Nov 24 10:06:10 compute-1 ceph-mon[80009]: from='client.? 192.168.122.102:0/1883798475' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Nov 24 10:06:10 compute-1 ceph-mon[80009]: from='client.? 192.168.122.100:0/138290104' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
Nov 24 10:06:10 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:06:10 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:06:10 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:06:10.973 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:06:11 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls"} v 0)
Nov 24 10:06:11 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1209386023' entity='client.admin' cmd=[{"prefix": "osd blocklist ls"}]: dispatch
Nov 24 10:06:11 compute-1 nova_compute[230010]: 2025-11-24 10:06:11.240 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:06:11 compute-1 nova_compute[230010]: 2025-11-24 10:06:11.305 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:06:11 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:06:11 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:06:11 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:06:11.572 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:06:11 compute-1 ceph-mon[80009]: pgmap v1183: 353 pgs: 353 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:06:11 compute-1 ceph-mon[80009]: from='client.27428 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:06:11 compute-1 ceph-mon[80009]: from='client.? 192.168.122.102:0/1685863189' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Nov 24 10:06:11 compute-1 ceph-mon[80009]: from='client.? 192.168.122.100:0/2910977602' entity='client.admin' cmd=[{"prefix": "fs ls"}]: dispatch
Nov 24 10:06:11 compute-1 ceph-mon[80009]: from='client.? 192.168.122.101:0/1209386023' entity='client.admin' cmd=[{"prefix": "osd blocklist ls"}]: dispatch
Nov 24 10:06:11 compute-1 ceph-mon[80009]: from='client.? 192.168.122.102:0/3221716505' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
Nov 24 10:06:11 compute-1 ceph-mon[80009]: from='client.18015 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:06:11 compute-1 ceph-mon[80009]: from='client.27452 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:06:12 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd dump"} v 0)
Nov 24 10:06:12 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2394133039' entity='client.admin' cmd=[{"prefix": "osd dump"}]: dispatch
Nov 24 10:06:12 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd numa-status"} v 0)
Nov 24 10:06:12 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3082189847' entity='client.admin' cmd=[{"prefix": "osd numa-status"}]: dispatch
Nov 24 10:06:12 compute-1 ceph-mon[80009]: from='client.? 192.168.122.102:0/1577556685' entity='client.admin' cmd=[{"prefix": "fs ls"}]: dispatch
Nov 24 10:06:12 compute-1 ceph-mon[80009]: from='client.27458 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:06:12 compute-1 ceph-mon[80009]: from='client.? 192.168.122.100:0/3027317944' entity='client.admin' cmd=[{"prefix": "mds stat"}]: dispatch
Nov 24 10:06:12 compute-1 ceph-mon[80009]: from='client.26008 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:06:12 compute-1 ceph-mon[80009]: from='client.? 192.168.122.101:0/2394133039' entity='client.admin' cmd=[{"prefix": "osd dump"}]: dispatch
Nov 24 10:06:12 compute-1 ceph-mon[80009]: from='client.? 192.168.122.100:0/2354909685' entity='client.admin' cmd=[{"prefix": "mon dump"}]: dispatch
Nov 24 10:06:12 compute-1 ceph-mon[80009]: from='client.? 192.168.122.102:0/3554526462' entity='client.admin' cmd=[{"prefix": "mds stat"}]: dispatch
Nov 24 10:06:12 compute-1 ceph-mon[80009]: from='client.? 192.168.122.101:0/3082189847' entity='client.admin' cmd=[{"prefix": "osd numa-status"}]: dispatch
Nov 24 10:06:12 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:06:12 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:06:12 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:06:12.976 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:06:13 compute-1 ovs-appctl[248965]: ovs|00001|daemon_unix|WARN|/var/run/openvswitch/ovs-monitor-ipsec.pid: open: No such file or directory
Nov 24 10:06:13 compute-1 ovs-appctl[248973]: ovs|00001|daemon_unix|WARN|/var/run/openvswitch/ovs-monitor-ipsec.pid: open: No such file or directory
Nov 24 10:06:13 compute-1 ovs-appctl[248979]: ovs|00001|daemon_unix|WARN|/var/run/openvswitch/ovs-monitor-ipsec.pid: open: No such file or directory
Nov 24 10:06:13 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:06:13 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:06:13 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:06:13.576 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:06:13 compute-1 sudo[249114]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 10:06:13 compute-1 sudo[249114]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 10:06:13 compute-1 sudo[249114]: pam_unix(sudo:session): session closed for user root
Nov 24 10:06:13 compute-1 sudo[249179]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Nov 24 10:06:13 compute-1 sudo[249179]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 10:06:13 compute-1 ceph-mon[80009]: pgmap v1184: 353 pgs: 353 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:06:13 compute-1 ceph-mon[80009]: from='client.18057 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:06:13 compute-1 ceph-mon[80009]: from='client.? 192.168.122.100:0/1530786313' entity='client.admin' cmd=[{"prefix": "osd blocklist ls"}]: dispatch
Nov 24 10:06:13 compute-1 ceph-mon[80009]: from='client.? 192.168.122.102:0/2372932552' entity='client.admin' cmd=[{"prefix": "mon dump"}]: dispatch
Nov 24 10:06:13 compute-1 ceph-mon[80009]: from='client.27488 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:06:13 compute-1 ceph-mon[80009]: from='client.18078 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:06:14 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd pool ls", "detail": "detail"} v 0)
Nov 24 10:06:14 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3408717226' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail"}]: dispatch
Nov 24 10:06:14 compute-1 sudo[249179]: pam_unix(sudo:session): session closed for user root
Nov 24 10:06:14 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 24 10:06:14 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 10:06:14 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Nov 24 10:06:14 compute-1 ceph-mon[80009]: log_channel(audit) log [INF] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 10:06:14 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Nov 24 10:06:14 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Nov 24 10:06:14 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd stat"} v 0)
Nov 24 10:06:14 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3492086322' entity='client.admin' cmd=[{"prefix": "osd stat"}]: dispatch
Nov 24 10:06:14 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Nov 24 10:06:14 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 10:06:14 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Nov 24 10:06:14 compute-1 ceph-mon[80009]: log_channel(audit) log [INF] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 10:06:14 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 24 10:06:14 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 10:06:14 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:06:14 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:06:14 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:06:14 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:06:14.978 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:06:15 compute-1 ceph-mon[80009]: from='client.27494 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:06:15 compute-1 ceph-mon[80009]: from='client.26035 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:06:15 compute-1 ceph-mon[80009]: from='client.18090 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:06:15 compute-1 ceph-mon[80009]: from='client.? 192.168.122.102:0/1514606181' entity='client.admin' cmd=[{"prefix": "osd blocklist ls"}]: dispatch
Nov 24 10:06:15 compute-1 ceph-mon[80009]: from='client.? 192.168.122.101:0/3408717226' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail"}]: dispatch
Nov 24 10:06:15 compute-1 ceph-mon[80009]: from='client.? 192.168.122.100:0/3088268736' entity='client.admin' cmd=[{"prefix": "osd dump"}]: dispatch
Nov 24 10:06:15 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 10:06:15 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 10:06:15 compute-1 ceph-mon[80009]: pgmap v1185: 353 pgs: 353 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Nov 24 10:06:15 compute-1 ceph-mon[80009]: from='client.26050 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:06:15 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 10:06:15 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 10:06:15 compute-1 ceph-mon[80009]: from='client.? 192.168.122.101:0/3492086322' entity='client.admin' cmd=[{"prefix": "osd stat"}]: dispatch
Nov 24 10:06:15 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 10:06:15 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 10:06:15 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 10:06:15 compute-1 ceph-mon[80009]: from='client.? 192.168.122.100:0/566409870' entity='client.admin' cmd=[{"prefix": "osd numa-status"}]: dispatch
Nov 24 10:06:15 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 10:06:15 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 10:06:15 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:06:15 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:06:15 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:06:15.579 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:06:15 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "status"} v 0)
Nov 24 10:06:15 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2984097938' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Nov 24 10:06:16 compute-1 ceph-mon[80009]: from='client.26059 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:06:16 compute-1 ceph-mon[80009]: from='client.27524 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:06:16 compute-1 ceph-mon[80009]: from='client.18129 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:06:16 compute-1 ceph-mon[80009]: from='client.? 192.168.122.102:0/2667697992' entity='client.admin' cmd=[{"prefix": "osd dump"}]: dispatch
Nov 24 10:06:16 compute-1 ceph-mon[80009]: from='client.27530 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:06:16 compute-1 ceph-mon[80009]: from='client.18141 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:06:16 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 10:06:16 compute-1 ceph-mon[80009]: from='client.? 192.168.122.102:0/1865868444' entity='client.admin' cmd=[{"prefix": "osd numa-status"}]: dispatch
Nov 24 10:06:16 compute-1 ceph-mon[80009]: from='client.? 192.168.122.100:0/3846290914' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail"}]: dispatch
Nov 24 10:06:16 compute-1 ceph-mon[80009]: from='client.? 192.168.122.101:0/2984097938' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Nov 24 10:06:16 compute-1 nova_compute[230010]: 2025-11-24 10:06:16.243 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:06:16 compute-1 nova_compute[230010]: 2025-11-24 10:06:16.307 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:06:16 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config dump", "format": "json-pretty"} v 0)
Nov 24 10:06:16 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/827779880' entity='client.admin' cmd=[{"prefix": "config dump", "format": "json-pretty"}]: dispatch
Nov 24 10:06:16 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:06:16 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:06:16 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:06:16.980 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:06:17 compute-1 ceph-mon[80009]: from='client.26089 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:06:17 compute-1 ceph-mon[80009]: from='client.? 192.168.122.100:0/3991889880' entity='client.admin' cmd=[{"prefix": "osd stat"}]: dispatch
Nov 24 10:06:17 compute-1 ceph-mon[80009]: from='client.? 192.168.122.101:0/156699043' entity='client.admin' cmd=[{"prefix": "time-sync-status"}]: dispatch
Nov 24 10:06:17 compute-1 ceph-mon[80009]: from='client.26098 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:06:17 compute-1 ceph-mon[80009]: pgmap v1186: 353 pgs: 353 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:06:17 compute-1 ceph-mon[80009]: from='client.18171 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:06:17 compute-1 ceph-mon[80009]: from='client.? 192.168.122.101:0/827779880' entity='client.admin' cmd=[{"prefix": "config dump", "format": "json-pretty"}]: dispatch
Nov 24 10:06:17 compute-1 ceph-mon[80009]: from='client.? 192.168.122.102:0/2621451589' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail"}]: dispatch
Nov 24 10:06:17 compute-1 ceph-mon[80009]: from='client.? 192.168.122.102:0/1903195685' entity='client.admin' cmd=[{"prefix": "osd stat"}]: dispatch
Nov 24 10:06:17 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:06:17 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:06:17 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:06:17.582 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:06:17 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "detail": "detail", "format": "json-pretty"} v 0)
Nov 24 10:06:17 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1759652024' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail", "format": "json-pretty"}]: dispatch
Nov 24 10:06:18 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json-pretty"} v 0)
Nov 24 10:06:18 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3181046644' entity='client.admin' cmd=[{"prefix": "df", "format": "json-pretty"}]: dispatch
Nov 24 10:06:18 compute-1 ceph-mon[80009]: from='client.18177 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:06:18 compute-1 ceph-mon[80009]: from='client.27572 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 10:06:18 compute-1 ceph-mon[80009]: from='client.? 192.168.122.100:0/575993812' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Nov 24 10:06:18 compute-1 ceph-mon[80009]: from='client.18192 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:06:18 compute-1 ceph-mon[80009]: from='client.? 192.168.122.101:0/1759652024' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail", "format": "json-pretty"}]: dispatch
Nov 24 10:06:18 compute-1 ceph-mon[80009]: from='client.? 192.168.122.100:0/242219174' entity='client.admin' cmd=[{"prefix": "time-sync-status"}]: dispatch
Nov 24 10:06:18 compute-1 ceph-mon[80009]: from='client.? 192.168.122.101:0/3181046644' entity='client.admin' cmd=[{"prefix": "df", "format": "json-pretty"}]: dispatch
Nov 24 10:06:18 compute-1 podman[250600]: 2025-11-24 10:06:18.348734495 +0000 UTC m=+0.078160302 container health_status 16798e42088c21440960ac0ba4f86339f9edab5c078277a1dcf4fb01c220329c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=multipathd, org.label-schema.schema-version=1.0, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 24 10:06:18 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "fs dump", "format": "json-pretty"} v 0)
Nov 24 10:06:18 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1404093567' entity='client.admin' cmd=[{"prefix": "fs dump", "format": "json-pretty"}]: dispatch
Nov 24 10:06:18 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "fs ls", "format": "json-pretty"} v 0)
Nov 24 10:06:18 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3837430605' entity='client.admin' cmd=[{"prefix": "fs ls", "format": "json-pretty"}]: dispatch
Nov 24 10:06:18 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:06:18 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:06:18 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:06:18.983 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:06:19 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 24 10:06:19 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 24 10:06:19 compute-1 ceph-mon[80009]: from='client.26125 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:06:19 compute-1 ceph-mon[80009]: from='client.? 192.168.122.102:0/1696960636' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Nov 24 10:06:19 compute-1 ceph-mon[80009]: from='client.? 192.168.122.100:0/10305290' entity='client.admin' cmd=[{"prefix": "config dump", "format": "json-pretty"}]: dispatch
Nov 24 10:06:19 compute-1 ceph-mon[80009]: pgmap v1187: 353 pgs: 353 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Nov 24 10:06:19 compute-1 ceph-mon[80009]: from='client.? 192.168.122.101:0/1404093567' entity='client.admin' cmd=[{"prefix": "fs dump", "format": "json-pretty"}]: dispatch
Nov 24 10:06:19 compute-1 ceph-mon[80009]: from='client.? 192.168.122.102:0/2189890635' entity='client.admin' cmd=[{"prefix": "time-sync-status"}]: dispatch
Nov 24 10:06:19 compute-1 ceph-mon[80009]: from='client.? 192.168.122.101:0/3837430605' entity='client.admin' cmd=[{"prefix": "fs ls", "format": "json-pretty"}]: dispatch
Nov 24 10:06:19 compute-1 ceph-mon[80009]: from='client.? 192.168.122.102:0/1243027862' entity='client.admin' cmd=[{"prefix": "config dump", "format": "json-pretty"}]: dispatch
Nov 24 10:06:19 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 10:06:19 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 10:06:19 compute-1 sudo[250743]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 24 10:06:19 compute-1 sudo[250743]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 10:06:19 compute-1 sudo[250743]: pam_unix(sudo:session): session closed for user root
Nov 24 10:06:19 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:06:19 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:06:19 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:06:19 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:06:19.585 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:06:19 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mds stat", "format": "json-pretty"} v 0)
Nov 24 10:06:19 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1692125416' entity='client.admin' cmd=[{"prefix": "mds stat", "format": "json-pretty"}]: dispatch
Nov 24 10:06:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 10:06:20.069 142336 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 10:06:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 10:06:20.070 142336 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 10:06:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 10:06:20.070 142336 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 10:06:20 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json-pretty"} v 0)
Nov 24 10:06:20 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2133798779' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json-pretty"}]: dispatch
Nov 24 10:06:20 compute-1 ceph-mon[80009]: from='client.18216 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 10:06:20 compute-1 ceph-mon[80009]: from='client.27620 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 10:06:20 compute-1 ceph-mon[80009]: from='client.? 192.168.122.100:0/2432724872' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail", "format": "json-pretty"}]: dispatch
Nov 24 10:06:20 compute-1 ceph-mon[80009]: from='client.26161 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 10:06:20 compute-1 ceph-mon[80009]: from='client.? 192.168.122.100:0/334201095' entity='client.admin' cmd=[{"prefix": "df", "format": "json-pretty"}]: dispatch
Nov 24 10:06:20 compute-1 ceph-mon[80009]: from='client.? 192.168.122.101:0/1692125416' entity='client.admin' cmd=[{"prefix": "mds stat", "format": "json-pretty"}]: dispatch
Nov 24 10:06:20 compute-1 ceph-mon[80009]: from='client.? 192.168.122.102:0/3264426233' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail", "format": "json-pretty"}]: dispatch
Nov 24 10:06:20 compute-1 ceph-mon[80009]: from='client.? 192.168.122.100:0/2910796446' entity='client.admin' cmd=[{"prefix": "fs dump", "format": "json-pretty"}]: dispatch
Nov 24 10:06:20 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:06:20 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:06:20 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:06:20.986 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:06:21 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json-pretty"} v 0)
Nov 24 10:06:21 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1885530590' entity='client.admin' cmd=[{"prefix": "osd blocklist ls", "format": "json-pretty"}]: dispatch
Nov 24 10:06:21 compute-1 nova_compute[230010]: 2025-11-24 10:06:21.249 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:06:21 compute-1 nova_compute[230010]: 2025-11-24 10:06:21.309 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:06:21 compute-1 ceph-mon[80009]: from='client.? 192.168.122.101:0/2133798779' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json-pretty"}]: dispatch
Nov 24 10:06:21 compute-1 ceph-mon[80009]: pgmap v1188: 353 pgs: 353 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Nov 24 10:06:21 compute-1 ceph-mon[80009]: from='client.? 192.168.122.102:0/178353154' entity='client.admin' cmd=[{"prefix": "df", "format": "json-pretty"}]: dispatch
Nov 24 10:06:21 compute-1 ceph-mon[80009]: from='client.? 192.168.122.100:0/646906727' entity='client.admin' cmd=[{"prefix": "fs ls", "format": "json-pretty"}]: dispatch
Nov 24 10:06:21 compute-1 ceph-mon[80009]: from='client.? 192.168.122.102:0/3278052733' entity='client.admin' cmd=[{"prefix": "fs dump", "format": "json-pretty"}]: dispatch
Nov 24 10:06:21 compute-1 ceph-mon[80009]: from='client.? 192.168.122.101:0/1885530590' entity='client.admin' cmd=[{"prefix": "osd blocklist ls", "format": "json-pretty"}]: dispatch
Nov 24 10:06:21 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:06:21 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:06:21 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:06:21.589 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:06:21 compute-1 nova_compute[230010]: 2025-11-24 10:06:21.764 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 10:06:22 compute-1 ceph-mon[80009]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #64. Immutable memtables: 0.
Nov 24 10:06:22 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-10:06:22.161431) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 24 10:06:22 compute-1 ceph-mon[80009]: rocksdb: [db/flush_job.cc:856] [default] [JOB 37] Flushing memtable with next log file: 64
Nov 24 10:06:22 compute-1 ceph-mon[80009]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763978782161545, "job": 37, "event": "flush_started", "num_memtables": 1, "num_entries": 2514, "num_deletes": 507, "total_data_size": 5264967, "memory_usage": 5337648, "flush_reason": "Manual Compaction"}
Nov 24 10:06:22 compute-1 ceph-mon[80009]: rocksdb: [db/flush_job.cc:885] [default] [JOB 37] Level-0 flush table #65: started
Nov 24 10:06:22 compute-1 ceph-mon[80009]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763978782185332, "cf_name": "default", "job": 37, "event": "table_file_creation", "file_number": 65, "file_size": 3407008, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 33657, "largest_seqno": 36166, "table_properties": {"data_size": 3396428, "index_size": 5986, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 3525, "raw_key_size": 28259, "raw_average_key_size": 20, "raw_value_size": 3371900, "raw_average_value_size": 2413, "num_data_blocks": 257, "num_entries": 1397, "num_filter_entries": 1397, "num_deletions": 507, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763978623, "oldest_key_time": 1763978623, "file_creation_time": 1763978782, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "299c38d0-06ca-4074-b462-97cee3c14bc3", "db_session_id": "IKBI0BILOO7CZC90TSBP", "orig_file_number": 65, "seqno_to_time_mapping": "N/A"}}
Nov 24 10:06:22 compute-1 ceph-mon[80009]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 37] Flush lasted 23988 microseconds, and 10501 cpu microseconds.
Nov 24 10:06:22 compute-1 ceph-mon[80009]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 24 10:06:22 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-10:06:22.185433) [db/flush_job.cc:967] [default] [JOB 37] Level-0 flush table #65: 3407008 bytes OK
Nov 24 10:06:22 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-10:06:22.185463) [db/memtable_list.cc:519] [default] Level-0 commit table #65 started
Nov 24 10:06:22 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-10:06:22.186776) [db/memtable_list.cc:722] [default] Level-0 commit table #65: memtable #1 done
Nov 24 10:06:22 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-10:06:22.186789) EVENT_LOG_v1 {"time_micros": 1763978782186785, "job": 37, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 24 10:06:22 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-10:06:22.186807) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 24 10:06:22 compute-1 ceph-mon[80009]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 37] Try to delete WAL files size 5251950, prev total WAL file size 5251950, number of live WAL files 2.
Nov 24 10:06:22 compute-1 ceph-mon[80009]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000061.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 10:06:22 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-10:06:22.188050) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6B7600323533' seq:72057594037927935, type:22 .. '6B7600353034' seq:0, type:0; will stop at (end)
Nov 24 10:06:22 compute-1 ceph-mon[80009]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 38] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 24 10:06:22 compute-1 ceph-mon[80009]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 37 Base level 0, inputs: [65(3327KB)], [63(13MB)]
Nov 24 10:06:22 compute-1 ceph-mon[80009]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763978782188122, "job": 38, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [65], "files_L6": [63], "score": -1, "input_data_size": 17516738, "oldest_snapshot_seqno": -1}
Nov 24 10:06:22 compute-1 ceph-mon[80009]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 38] Generated table #66: 6627 keys, 16035146 bytes, temperature: kUnknown
Nov 24 10:06:22 compute-1 ceph-mon[80009]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763978782271618, "cf_name": "default", "job": 38, "event": "table_file_creation", "file_number": 66, "file_size": 16035146, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 15989410, "index_size": 28088, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16581, "raw_key_size": 172891, "raw_average_key_size": 26, "raw_value_size": 15868704, "raw_average_value_size": 2394, "num_data_blocks": 1114, "num_entries": 6627, "num_filter_entries": 6627, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763976422, "oldest_key_time": 0, "file_creation_time": 1763978782, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "299c38d0-06ca-4074-b462-97cee3c14bc3", "db_session_id": "IKBI0BILOO7CZC90TSBP", "orig_file_number": 66, "seqno_to_time_mapping": "N/A"}}
Nov 24 10:06:22 compute-1 ceph-mon[80009]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 24 10:06:22 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-10:06:22.271838) [db/compaction/compaction_job.cc:1663] [default] [JOB 38] Compacted 1@0 + 1@6 files to L6 => 16035146 bytes
Nov 24 10:06:22 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-10:06:22.273554) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 209.6 rd, 191.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.2, 13.5 +0.0 blob) out(15.3 +0.0 blob), read-write-amplify(9.8) write-amplify(4.7) OK, records in: 7656, records dropped: 1029 output_compression: NoCompression
Nov 24 10:06:22 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-10:06:22.273591) EVENT_LOG_v1 {"time_micros": 1763978782273577, "job": 38, "event": "compaction_finished", "compaction_time_micros": 83561, "compaction_time_cpu_micros": 31701, "output_level": 6, "num_output_files": 1, "total_output_size": 16035146, "num_input_records": 7656, "num_output_records": 6627, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 24 10:06:22 compute-1 ceph-mon[80009]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000065.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 10:06:22 compute-1 ceph-mon[80009]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763978782274288, "job": 38, "event": "table_file_deletion", "file_number": 65}
Nov 24 10:06:22 compute-1 ceph-mon[80009]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000063.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 10:06:22 compute-1 ceph-mon[80009]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763978782277064, "job": 38, "event": "table_file_deletion", "file_number": 63}
Nov 24 10:06:22 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-10:06:22.187937) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 10:06:22 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-10:06:22.277135) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 10:06:22 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-10:06:22.277144) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 10:06:22 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-10:06:22.277147) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 10:06:22 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-10:06:22.277149) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 10:06:22 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-10:06:22.277151) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 10:06:22 compute-1 ceph-mon[80009]: from='client.27650 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 10:06:22 compute-1 ceph-mon[80009]: from='client.18276 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 10:06:22 compute-1 ceph-mon[80009]: from='client.? 192.168.122.102:0/2219254769' entity='client.admin' cmd=[{"prefix": "fs ls", "format": "json-pretty"}]: dispatch
Nov 24 10:06:22 compute-1 ceph-mon[80009]: from='client.? 192.168.122.100:0/3007393360' entity='client.admin' cmd=[{"prefix": "mds stat", "format": "json-pretty"}]: dispatch
Nov 24 10:06:22 compute-1 ceph-mon[80009]: from='client.? 192.168.122.100:0/2249032923' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json-pretty"}]: dispatch
Nov 24 10:06:22 compute-1 ceph-mon[80009]: from='client.? 192.168.122.102:0/3105119572' entity='client.admin' cmd=[{"prefix": "mds stat", "format": "json-pretty"}]: dispatch
Nov 24 10:06:22 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd dump", "format": "json-pretty"} v 0)
Nov 24 10:06:22 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2599335431' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json-pretty"}]: dispatch
Nov 24 10:06:22 compute-1 nova_compute[230010]: 2025-11-24 10:06:22.765 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 10:06:22 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd numa-status", "format": "json-pretty"} v 0)
Nov 24 10:06:22 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2897248161' entity='client.admin' cmd=[{"prefix": "osd numa-status", "format": "json-pretty"}]: dispatch
Nov 24 10:06:22 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:06:22 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:06:22 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:06:22.988 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:06:23 compute-1 ceph-mon[80009]: from='client.27668 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 10:06:23 compute-1 ceph-mon[80009]: from='client.26197 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 10:06:23 compute-1 ceph-mon[80009]: from='client.27677 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 10:06:23 compute-1 ceph-mon[80009]: from='client.18306 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 10:06:23 compute-1 ceph-mon[80009]: pgmap v1189: 353 pgs: 353 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:06:23 compute-1 ceph-mon[80009]: from='client.? 192.168.122.101:0/2599335431' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json-pretty"}]: dispatch
Nov 24 10:06:23 compute-1 ceph-mon[80009]: from='client.? 192.168.122.100:0/3856428163' entity='client.admin' cmd=[{"prefix": "osd blocklist ls", "format": "json-pretty"}]: dispatch
Nov 24 10:06:23 compute-1 ceph-mon[80009]: from='client.? 192.168.122.102:0/430424438' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json-pretty"}]: dispatch
Nov 24 10:06:23 compute-1 ceph-mon[80009]: from='client.? 192.168.122.101:0/2897248161' entity='client.admin' cmd=[{"prefix": "osd numa-status", "format": "json-pretty"}]: dispatch
Nov 24 10:06:23 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:06:23 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:06:23 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:06:23.592 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:06:23 compute-1 nova_compute[230010]: 2025-11-24 10:06:23.765 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 10:06:23 compute-1 nova_compute[230010]: 2025-11-24 10:06:23.766 230014 DEBUG nova.compute.manager [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 24 10:06:24 compute-1 podman[251107]: 2025-11-24 10:06:24.504368804 +0000 UTC m=+0.099568200 container health_status c4a7fece2a8abbd773f184380ebf0443df5298e1c3e7a580495db5c46749acd2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251118)
Nov 24 10:06:24 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd stat", "format": "json-pretty"} v 0)
Nov 24 10:06:24 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2550223427' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json-pretty"}]: dispatch
Nov 24 10:06:24 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:06:24 compute-1 ceph-mon[80009]: from='client.18330 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 10:06:24 compute-1 ceph-mon[80009]: from='client.26224 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 10:06:24 compute-1 ceph-mon[80009]: from='client.27710 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 10:06:24 compute-1 ceph-mon[80009]: from='client.18339 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 10:06:24 compute-1 ceph-mon[80009]: from='client.? 192.168.122.102:0/3001595460' entity='client.admin' cmd=[{"prefix": "osd blocklist ls", "format": "json-pretty"}]: dispatch
Nov 24 10:06:24 compute-1 ceph-mon[80009]: from='client.? 192.168.122.100:0/2774141351' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json-pretty"}]: dispatch
Nov 24 10:06:24 compute-1 ceph-mon[80009]: from='client.? 192.168.122.100:0/2543482892' entity='client.admin' cmd=[{"prefix": "osd numa-status", "format": "json-pretty"}]: dispatch
Nov 24 10:06:24 compute-1 ceph-mon[80009]: from='client.? 192.168.122.101:0/3308493931' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail", "format": "json-pretty"}]: dispatch
Nov 24 10:06:24 compute-1 ceph-mon[80009]: from='client.? 192.168.122.100:0/1469091592' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 10:06:24 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:06:24 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:06:24 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:06:24.991 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:06:25 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:06:25 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:06:25 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:06:25.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:06:25 compute-1 ceph-mon[80009]: from='client.18351 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 10:06:25 compute-1 ceph-mon[80009]: from='client.26239 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 10:06:25 compute-1 ceph-mon[80009]: from='client.26254 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 10:06:25 compute-1 ceph-mon[80009]: pgmap v1190: 353 pgs: 353 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Nov 24 10:06:25 compute-1 ceph-mon[80009]: from='client.18375 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 10:06:25 compute-1 ceph-mon[80009]: from='client.? 192.168.122.101:0/2550223427' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json-pretty"}]: dispatch
Nov 24 10:06:25 compute-1 ceph-mon[80009]: from='client.? 192.168.122.102:0/1429568378' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json-pretty"}]: dispatch
Nov 24 10:06:25 compute-1 ceph-mon[80009]: from='client.18387 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 10:06:25 compute-1 ceph-mon[80009]: from='client.27740 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 10:06:25 compute-1 ceph-mon[80009]: from='client.? 192.168.122.102:0/2974131416' entity='client.admin' cmd=[{"prefix": "osd numa-status", "format": "json-pretty"}]: dispatch
Nov 24 10:06:25 compute-1 ceph-mon[80009]: from='client.? 192.168.122.100:0/3309388613' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 10:06:25 compute-1 ceph-mon[80009]: from='client.? 192.168.122.100:0/1370692082' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail", "format": "json-pretty"}]: dispatch
Nov 24 10:06:25 compute-1 ceph-mon[80009]: from='client.27749 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 10:06:25 compute-1 ceph-mon[80009]: from='client.26275 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 10:06:25 compute-1 ceph-mon[80009]: from='client.? 192.168.122.100:0/3466748518' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json-pretty"}]: dispatch
Nov 24 10:06:25 compute-1 nova_compute[230010]: 2025-11-24 10:06:25.759 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 10:06:26 compute-1 virtqemud[229578]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Nov 24 10:06:26 compute-1 nova_compute[230010]: 2025-11-24 10:06:26.252 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:06:26 compute-1 nova_compute[230010]: 2025-11-24 10:06:26.311 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:06:26 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "time-sync-status", "format": "json-pretty"} v 0)
Nov 24 10:06:26 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1330152833' entity='client.admin' cmd=[{"prefix": "time-sync-status", "format": "json-pretty"}]: dispatch
Nov 24 10:06:26 compute-1 systemd[1]: Starting Time & Date Service...
Nov 24 10:06:26 compute-1 systemd[1]: Started Time & Date Service.
Nov 24 10:06:26 compute-1 ceph-mon[80009]: from='client.26284 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 10:06:26 compute-1 ceph-mon[80009]: from='client.? 192.168.122.101:0/1640188076' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 24 10:06:26 compute-1 ceph-mon[80009]: from='client.18435 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 10:06:26 compute-1 ceph-mon[80009]: from='client.? 192.168.122.102:0/1815450278' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail", "format": "json-pretty"}]: dispatch
Nov 24 10:06:26 compute-1 ceph-mon[80009]: from='client.? 192.168.122.101:0/1330152833' entity='client.admin' cmd=[{"prefix": "time-sync-status", "format": "json-pretty"}]: dispatch
Nov 24 10:06:26 compute-1 ceph-mon[80009]: from='client.18444 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 10:06:26 compute-1 ceph-mon[80009]: pgmap v1191: 353 pgs: 353 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:06:26 compute-1 ceph-mon[80009]: from='client.? 192.168.122.102:0/3042987099' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json-pretty"}]: dispatch
Nov 24 10:06:26 compute-1 nova_compute[230010]: 2025-11-24 10:06:26.764 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 10:06:26 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:06:26 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 10:06:26 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:06:26.993 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 10:06:27 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:06:27 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:06:27 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:06:27.598 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:06:27 compute-1 ceph-mon[80009]: from='client.? 192.168.122.100:0/3664216049' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 24 10:06:27 compute-1 ceph-mon[80009]: from='client.26308 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 10:06:27 compute-1 ceph-mon[80009]: from='client.? 192.168.122.100:0/430674398' entity='client.admin' cmd=[{"prefix": "time-sync-status", "format": "json-pretty"}]: dispatch
Nov 24 10:06:27 compute-1 ceph-mon[80009]: from='client.26314 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 10:06:27 compute-1 nova_compute[230010]: 2025-11-24 10:06:27.764 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 10:06:28 compute-1 nova_compute[230010]: 2025-11-24 10:06:28.002 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 10:06:28 compute-1 nova_compute[230010]: 2025-11-24 10:06:28.002 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 10:06:28 compute-1 nova_compute[230010]: 2025-11-24 10:06:28.003 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 10:06:28 compute-1 nova_compute[230010]: 2025-11-24 10:06:28.003 230014 DEBUG nova.compute.resource_tracker [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 24 10:06:28 compute-1 nova_compute[230010]: 2025-11-24 10:06:28.003 230014 DEBUG oslo_concurrency.processutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 10:06:28 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 24 10:06:28 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2719288102' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 10:06:28 compute-1 nova_compute[230010]: 2025-11-24 10:06:28.496 230014 DEBUG oslo_concurrency.processutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.492s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 10:06:28 compute-1 nova_compute[230010]: 2025-11-24 10:06:28.654 230014 WARNING nova.virt.libvirt.driver [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 24 10:06:28 compute-1 nova_compute[230010]: 2025-11-24 10:06:28.655 230014 DEBUG nova.compute.resource_tracker [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4669MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 24 10:06:28 compute-1 nova_compute[230010]: 2025-11-24 10:06:28.655 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 10:06:28 compute-1 nova_compute[230010]: 2025-11-24 10:06:28.656 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 10:06:28 compute-1 nova_compute[230010]: 2025-11-24 10:06:28.789 230014 DEBUG nova.compute.resource_tracker [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 24 10:06:28 compute-1 nova_compute[230010]: 2025-11-24 10:06:28.790 230014 DEBUG nova.compute.resource_tracker [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 24 10:06:28 compute-1 nova_compute[230010]: 2025-11-24 10:06:28.812 230014 DEBUG oslo_concurrency.processutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 10:06:28 compute-1 ceph-mon[80009]: from='client.? 192.168.122.102:0/2984933213' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 24 10:06:28 compute-1 ceph-mon[80009]: from='client.? 192.168.122.102:0/2968096678' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 10:06:28 compute-1 ceph-mon[80009]: from='client.? 192.168.122.102:0/257950614' entity='client.admin' cmd=[{"prefix": "time-sync-status", "format": "json-pretty"}]: dispatch
Nov 24 10:06:28 compute-1 ceph-mon[80009]: pgmap v1192: 353 pgs: 353 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:06:28 compute-1 ceph-mon[80009]: from='client.? 192.168.122.101:0/2719288102' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 10:06:28 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:06:28 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:06:28 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:06:28.997 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:06:29 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 24 10:06:29 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/416144228' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 10:06:29 compute-1 nova_compute[230010]: 2025-11-24 10:06:29.265 230014 DEBUG oslo_concurrency.processutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.452s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 10:06:29 compute-1 nova_compute[230010]: 2025-11-24 10:06:29.273 230014 DEBUG nova.compute.provider_tree [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Inventory has not changed in ProviderTree for provider: 1b7b0f22-dba8-42a8-9de3-763c9152946e update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 24 10:06:29 compute-1 nova_compute[230010]: 2025-11-24 10:06:29.293 230014 DEBUG nova.scheduler.client.report [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Inventory has not changed for provider 1b7b0f22-dba8-42a8-9de3-763c9152946e based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 24 10:06:29 compute-1 nova_compute[230010]: 2025-11-24 10:06:29.295 230014 DEBUG nova.compute.resource_tracker [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 24 10:06:29 compute-1 nova_compute[230010]: 2025-11-24 10:06:29.295 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.639s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 10:06:29 compute-1 sudo[251707]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 10:06:29 compute-1 sudo[251707]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 10:06:29 compute-1 sudo[251707]: pam_unix(sudo:session): session closed for user root
Nov 24 10:06:29 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:06:29 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:06:29 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:06:29 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:06:29.601 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:06:30 compute-1 ceph-mon[80009]: from='client.? 192.168.122.102:0/2721553192' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 10:06:30 compute-1 ceph-mon[80009]: from='client.? 192.168.122.101:0/416144228' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 10:06:30 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 10:06:30 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 10:06:31 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:06:31 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:06:31 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:06:30.999 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:06:31 compute-1 ceph-mon[80009]: pgmap v1193: 353 pgs: 353 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:06:31 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 10:06:31 compute-1 nova_compute[230010]: 2025-11-24 10:06:31.254 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:06:31 compute-1 nova_compute[230010]: 2025-11-24 10:06:31.296 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 10:06:31 compute-1 nova_compute[230010]: 2025-11-24 10:06:31.296 230014 DEBUG nova.compute.manager [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 24 10:06:31 compute-1 nova_compute[230010]: 2025-11-24 10:06:31.296 230014 DEBUG nova.compute.manager [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 24 10:06:31 compute-1 nova_compute[230010]: 2025-11-24 10:06:31.314 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:06:31 compute-1 nova_compute[230010]: 2025-11-24 10:06:31.340 230014 DEBUG nova.compute.manager [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 24 10:06:31 compute-1 nova_compute[230010]: 2025-11-24 10:06:31.340 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 10:06:31 compute-1 nova_compute[230010]: 2025-11-24 10:06:31.340 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 10:06:31 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:06:31 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:06:31 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:06:31.604 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:06:32 compute-1 podman[251734]: 2025-11-24 10:06:32.323336739 +0000 UTC m=+0.062878924 container health_status 6794d9d2f16f865643977633ea2bfec0506af6c4f15262c278b71a4c0754ea0b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0)
Nov 24 10:06:33 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:06:33 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:06:33 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:06:33.003 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:06:33 compute-1 ceph-osd[77497]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 24 10:06:33 compute-1 ceph-osd[77497]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 2400.1 total, 600.0 interval
                                           Cumulative writes: 10K writes, 38K keys, 10K commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.01 MB/s
                                           Cumulative WAL: 10K writes, 2914 syncs, 3.59 writes per sync, written: 0.03 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 2050 writes, 6533 keys, 2050 commit groups, 1.0 writes per commit group, ingest: 7.27 MB, 0.01 MB/s
                                           Interval WAL: 2050 writes, 892 syncs, 2.30 writes per sync, written: 0.01 GB, 0.01 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 24 10:06:33 compute-1 ceph-mon[80009]: pgmap v1194: 353 pgs: 353 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:06:33 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:06:33 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:06:33 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:06:33.607 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:06:34 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:06:35 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:06:35 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:06:35 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:06:35.006 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:06:35 compute-1 ceph-mon[80009]: pgmap v1195: 353 pgs: 353 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:06:35 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:06:35 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:06:35 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:06:35.611 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:06:36 compute-1 nova_compute[230010]: 2025-11-24 10:06:36.258 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:06:36 compute-1 nova_compute[230010]: 2025-11-24 10:06:36.318 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:06:37 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:06:37 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:06:37 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:06:37.008 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:06:37 compute-1 ceph-mon[80009]: pgmap v1196: 353 pgs: 353 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:06:37 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:06:37 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:06:37 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:06:37.614 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:06:39 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:06:39 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:06:39 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:06:39.012 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:06:39 compute-1 ceph-mon[80009]: pgmap v1197: 353 pgs: 353 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:06:39 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:06:39 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:06:39 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:06:39 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:06:39.617 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:06:41 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:06:41 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:06:41 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:06:41.015 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:06:41 compute-1 nova_compute[230010]: 2025-11-24 10:06:41.263 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:06:41 compute-1 nova_compute[230010]: 2025-11-24 10:06:41.320 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:06:41 compute-1 ceph-mon[80009]: pgmap v1198: 353 pgs: 353 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:06:41 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:06:41 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:06:41 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:06:41.619 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:06:43 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:06:43 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:06:43 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:06:43.018 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:06:43 compute-1 ceph-mon[80009]: pgmap v1199: 353 pgs: 353 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:06:43 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:06:43 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:06:43 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:06:43.622 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:06:44 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:06:45 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:06:45 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:06:45 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:06:45.021 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:06:45 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 10:06:45 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 10:06:45 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:06:45 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:06:45 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:06:45.625 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:06:45 compute-1 ceph-mon[80009]: pgmap v1200: 353 pgs: 353 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:06:45 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 10:06:46 compute-1 nova_compute[230010]: 2025-11-24 10:06:46.266 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:06:46 compute-1 nova_compute[230010]: 2025-11-24 10:06:46.322 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:06:46 compute-1 ceph-mon[80009]: pgmap v1201: 353 pgs: 353 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:06:47 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:06:47 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:06:47 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:06:47.025 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:06:47 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:06:47 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:06:47 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:06:47.628 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:06:49 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:06:49 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:06:49 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:06:49.028 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:06:49 compute-1 podman[251761]: 2025-11-24 10:06:49.347634914 +0000 UTC m=+0.087530522 container health_status 16798e42088c21440960ac0ba4f86339f9edab5c078277a1dcf4fb01c220329c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 10:06:49 compute-1 ceph-mon[80009]: pgmap v1202: 353 pgs: 353 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:06:49 compute-1 sudo[251782]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 10:06:49 compute-1 sudo[251782]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 10:06:49 compute-1 sudo[251782]: pam_unix(sudo:session): session closed for user root
Nov 24 10:06:49 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:06:49 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:06:49 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:06:49 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:06:49.631 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:06:51 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:06:51 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:06:51 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:06:51.031 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:06:51 compute-1 nova_compute[230010]: 2025-11-24 10:06:51.269 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:06:51 compute-1 nova_compute[230010]: 2025-11-24 10:06:51.325 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:06:51 compute-1 ceph-mon[80009]: pgmap v1203: 353 pgs: 353 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:06:51 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:06:51 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:06:51 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:06:51.633 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:06:53 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:06:53 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:06:53 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:06:53.033 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:06:53 compute-1 ceph-mon[80009]: pgmap v1204: 353 pgs: 353 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:06:53 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:06:53 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:06:53 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:06:53.636 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:06:54 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:06:55 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:06:55 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:06:55 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:06:55.036 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:06:55 compute-1 podman[251810]: 2025-11-24 10:06:55.353650529 +0000 UTC m=+0.095537371 container health_status c4a7fece2a8abbd773f184380ebf0443df5298e1c3e7a580495db5c46749acd2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2)
Nov 24 10:06:55 compute-1 ceph-mon[80009]: pgmap v1205: 353 pgs: 353 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:06:55 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:06:55 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:06:55 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:06:55.639 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:06:56 compute-1 nova_compute[230010]: 2025-11-24 10:06:56.272 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:06:56 compute-1 nova_compute[230010]: 2025-11-24 10:06:56.330 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:06:56 compute-1 systemd[1]: systemd-timedated.service: Deactivated successfully.
Nov 24 10:06:56 compute-1 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Nov 24 10:06:57 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:06:57 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:06:57 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:06:57.039 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:06:57 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:06:57 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.006000144s ======
Nov 24 10:06:57 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:06:57.967 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.006000144s
Nov 24 10:06:57 compute-1 ceph-mon[80009]: pgmap v1206: 353 pgs: 353 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:06:59 compute-1 ceph-mon[80009]: pgmap v1207: 353 pgs: 353 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:06:59 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:06:59 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:06:59 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:06:59.042 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:06:59 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:06:59 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:06:59 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:06:59 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:06:59.977 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:07:00 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 10:07:00 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 10:07:00 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
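[annotation] The mgr.compute-0.mauvni manager polls the monitors with "osd blocklist ls" (it recurs at 10:07:15, i.e. on a roughly 15-second cadence in this capture). The same query issued from the CLI, using only the command prefix and format flag shown in the audit line, might look like the sketch below (assumes admin credentials on the host; an empty blocklist returns an empty JSON array):

    # The query the mgr dispatches above, issued standalone; command
    # and format flag are taken verbatim from the audit log line.
    import json
    import subprocess

    out = subprocess.check_output(
        ["ceph", "osd", "blocklist", "ls", "--format", "json"]
    )
    print(json.loads(out))  # [] when nothing is blocklisted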
Nov 24 10:07:01 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:07:01 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:07:01 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:07:01.044 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:07:01 compute-1 nova_compute[230010]: 2025-11-24 10:07:01.277 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:07:01 compute-1 nova_compute[230010]: 2025-11-24 10:07:01.334 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:07:01 compute-1 ceph-mon[80009]: pgmap v1208: 353 pgs: 353 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:07:01 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:07:01 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:07:01 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:07:01.980 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:07:02 compute-1 podman[251846]: 2025-11-24 10:07:02.516470257 +0000 UTC m=+0.059804288 container health_status 6794d9d2f16f865643977633ea2bfec0506af6c4f15262c278b71a4c0754ea0b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Nov 24 10:07:02 compute-1 ceph-mon[80009]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #67. Immutable memtables: 0.
Nov 24 10:07:02 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-10:07:02.605474) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 24 10:07:02 compute-1 ceph-mon[80009]: rocksdb: [db/flush_job.cc:856] [default] [JOB 39] Flushing memtable with next log file: 67
Nov 24 10:07:02 compute-1 ceph-mon[80009]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763978822605516, "job": 39, "event": "flush_started", "num_memtables": 1, "num_entries": 687, "num_deletes": 251, "total_data_size": 1205587, "memory_usage": 1233264, "flush_reason": "Manual Compaction"}
Nov 24 10:07:02 compute-1 ceph-mon[80009]: rocksdb: [db/flush_job.cc:885] [default] [JOB 39] Level-0 flush table #68: started
Nov 24 10:07:02 compute-1 ceph-mon[80009]: from='client.? 192.168.122.10:0/1642057191' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 24 10:07:02 compute-1 ceph-mon[80009]: from='client.? 192.168.122.10:0/1642057191' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 24 10:07:02 compute-1 ceph-mon[80009]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763978822612579, "cf_name": "default", "job": 39, "event": "table_file_creation", "file_number": 68, "file_size": 793238, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 36171, "largest_seqno": 36853, "table_properties": {"data_size": 789716, "index_size": 1366, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1093, "raw_key_size": 8663, "raw_average_key_size": 19, "raw_value_size": 782517, "raw_average_value_size": 1803, "num_data_blocks": 60, "num_entries": 434, "num_filter_entries": 434, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763978782, "oldest_key_time": 1763978782, "file_creation_time": 1763978822, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "299c38d0-06ca-4074-b462-97cee3c14bc3", "db_session_id": "IKBI0BILOO7CZC90TSBP", "orig_file_number": 68, "seqno_to_time_mapping": "N/A"}}
Nov 24 10:07:02 compute-1 ceph-mon[80009]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 39] Flush lasted 7136 microseconds, and 3278 cpu microseconds.
Nov 24 10:07:02 compute-1 ceph-mon[80009]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 24 10:07:02 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-10:07:02.612613) [db/flush_job.cc:967] [default] [JOB 39] Level-0 flush table #68: 793238 bytes OK
Nov 24 10:07:02 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-10:07:02.612635) [db/memtable_list.cc:519] [default] Level-0 commit table #68 started
Nov 24 10:07:02 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-10:07:02.615938) [db/memtable_list.cc:722] [default] Level-0 commit table #68: memtable #1 done
Nov 24 10:07:02 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-10:07:02.615953) EVENT_LOG_v1 {"time_micros": 1763978822615948, "job": 39, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 24 10:07:02 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-10:07:02.615968) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 24 10:07:02 compute-1 ceph-mon[80009]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 39] Try to delete WAL files size 1201703, prev total WAL file size 1201703, number of live WAL files 2.
Nov 24 10:07:02 compute-1 ceph-mon[80009]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000064.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 10:07:02 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-10:07:02.616733) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032373631' seq:72057594037927935, type:22 .. '7061786F730033303133' seq:0, type:0; will stop at (end)
Nov 24 10:07:02 compute-1 ceph-mon[80009]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 40] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 24 10:07:02 compute-1 ceph-mon[80009]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 39 Base level 0, inputs: [68(774KB)], [66(15MB)]
Nov 24 10:07:02 compute-1 ceph-mon[80009]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763978822616773, "job": 40, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [68], "files_L6": [66], "score": -1, "input_data_size": 16828384, "oldest_snapshot_seqno": -1}
Nov 24 10:07:02 compute-1 ceph-mon[80009]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 40] Generated table #69: 6550 keys, 14696989 bytes, temperature: kUnknown
Nov 24 10:07:02 compute-1 ceph-mon[80009]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763978822706684, "cf_name": "default", "job": 40, "event": "table_file_creation", "file_number": 69, "file_size": 14696989, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 14652718, "index_size": 26815, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16389, "raw_key_size": 172276, "raw_average_key_size": 26, "raw_value_size": 14534312, "raw_average_value_size": 2218, "num_data_blocks": 1057, "num_entries": 6550, "num_filter_entries": 6550, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763976422, "oldest_key_time": 0, "file_creation_time": 1763978822, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "299c38d0-06ca-4074-b462-97cee3c14bc3", "db_session_id": "IKBI0BILOO7CZC90TSBP", "orig_file_number": 69, "seqno_to_time_mapping": "N/A"}}
Nov 24 10:07:02 compute-1 ceph-mon[80009]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 24 10:07:02 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-10:07:02.706958) [db/compaction/compaction_job.cc:1663] [default] [JOB 40] Compacted 1@0 + 1@6 files to L6 => 14696989 bytes
Nov 24 10:07:02 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-10:07:02.710215) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 187.0 rd, 163.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.8, 15.3 +0.0 blob) out(14.0 +0.0 blob), read-write-amplify(39.7) write-amplify(18.5) OK, records in: 7061, records dropped: 511 output_compression: NoCompression
Nov 24 10:07:02 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-10:07:02.710235) EVENT_LOG_v1 {"time_micros": 1763978822710226, "job": 40, "event": "compaction_finished", "compaction_time_micros": 90007, "compaction_time_cpu_micros": 44669, "output_level": 6, "num_output_files": 1, "total_output_size": 14696989, "num_input_records": 7061, "num_output_records": 6550, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 24 10:07:02 compute-1 ceph-mon[80009]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000068.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 10:07:02 compute-1 ceph-mon[80009]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763978822710688, "job": 40, "event": "table_file_deletion", "file_number": 68}
Nov 24 10:07:02 compute-1 ceph-mon[80009]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000066.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 10:07:02 compute-1 ceph-mon[80009]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763978822714085, "job": 40, "event": "table_file_deletion", "file_number": 66}
Nov 24 10:07:02 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-10:07:02.616637) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 10:07:02 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-10:07:02.714256) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 10:07:02 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-10:07:02.714261) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 10:07:02 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-10:07:02.714262) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 10:07:02 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-10:07:02.714264) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 10:07:02 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-10:07:02.714265) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 10:07:02 compute-1 ceph-mon[80009]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 24 10:07:02 compute-1 ceph-mon[80009]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 2400.0 total, 600.0 interval
                                           Cumulative writes: 6892 writes, 36K keys, 6892 commit groups, 1.0 writes per commit group, ingest: 0.09 GB, 0.04 MB/s
                                           Cumulative WAL: 6892 writes, 6892 syncs, 1.00 writes per sync, written: 0.09 GB, 0.04 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1550 writes, 8363 keys, 1550 commit groups, 1.0 writes per commit group, ingest: 18.03 MB, 0.03 MB/s
                                           Interval WAL: 1550 writes, 1550 syncs, 1.00 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0    143.1      0.39              0.14        20    0.019       0      0       0.0       0.0
                                             L6      1/0   14.02 MB   0.0      0.3     0.1      0.2       0.2      0.0       0.0   4.5    165.5    142.0      1.76              0.60        19    0.093    109K    10K       0.0       0.0
                                            Sum      1/0   14.02 MB   0.0      0.3     0.1      0.2       0.3      0.1       0.0   5.5    135.7    142.2      2.14              0.73        39    0.055    109K    10K       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   7.2    163.4    164.0      0.50              0.20        10    0.050     34K   3588       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.3     0.1      0.2       0.2      0.0       0.0   0.0    165.5    142.0      1.76              0.60        19    0.093    109K    10K       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0    143.8      0.38              0.14        19    0.020       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.9      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 2400.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.054, interval 0.011
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.30 GB write, 0.13 MB/s write, 0.28 GB read, 0.12 MB/s read, 2.1 seconds
                                           Interval compaction: 0.08 GB write, 0.14 MB/s write, 0.08 GB read, 0.14 MB/s read, 0.5 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a5fe7f5350#2 capacity: 304.00 MB usage: 26.72 MB table_size: 0 occupancy: 18446744073709551615 collections: 5 last_copies: 0 last_secs: 0.000202 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1606,25.86 MB,8.50756%) FilterBlock(39,320.42 KB,0.102932%) IndexBlock(39,553.73 KB,0.17788%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
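[annotation] The JOB 40 summary above reports write-amplify(18.5) and read-write-amplify(39.7); both follow directly from the byte counts the same job logs. A quick arithmetic check, with every number taken verbatim from the flush/compaction events above:

    # Verify the amplification figures JOB 40 logs, from its own bytes.
    l0_input = 793_238        # table #68: flushed L0 file (compaction input)
    total_input = 16_828_384  # "input_data_size" in compaction_started (L0 + L6)
    l6_output = 14_696_989    # table #69: compacted L6 file (output)

    write_amp = l6_output / l0_input               # bytes written per L0 byte
    rw_amp = (total_input + l6_output) / l0_input  # bytes read+written per L0 byte

    print(f"write-amplify ~ {write_amp:.1f}")    # ~18.5, matching the log
    print(f"read-write-amplify ~ {rw_amp:.1f}")  # ~39.7, matching the log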
Nov 24 10:07:03 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:07:03 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:07:03 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:07:03.048 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:07:03 compute-1 ceph-mon[80009]: pgmap v1209: 353 pgs: 353 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:07:03 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:07:03 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:07:03 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:07:03.983 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:07:04 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:07:04 compute-1 ceph-mon[80009]: pgmap v1210: 353 pgs: 353 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:07:05 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:07:05 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:07:05 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:07:05.051 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:07:05 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:07:05 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 10:07:05 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:07:05.986 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 10:07:06 compute-1 nova_compute[230010]: 2025-11-24 10:07:06.280 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:07:06 compute-1 nova_compute[230010]: 2025-11-24 10:07:06.335 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:07:07 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:07:07 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:07:07 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:07:07.053 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:07:07 compute-1 ceph-mon[80009]: pgmap v1211: 353 pgs: 353 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:07:07 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:07:07 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:07:07 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:07:07.989 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:07:09 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:07:09 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:07:09 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:07:09.057 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:07:09 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:07:09 compute-1 ceph-mon[80009]: pgmap v1212: 353 pgs: 353 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:07:09 compute-1 sudo[251869]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 10:07:09 compute-1 sudo[251869]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 10:07:09 compute-1 sudo[251869]: pam_unix(sudo:session): session closed for user root
Nov 24 10:07:09 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:07:09 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:07:09 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:07:09.993 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:07:11 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:07:11 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:07:11 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:07:11.060 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:07:11 compute-1 nova_compute[230010]: 2025-11-24 10:07:11.284 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:07:11 compute-1 nova_compute[230010]: 2025-11-24 10:07:11.336 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:07:11 compute-1 ceph-mon[80009]: pgmap v1213: 353 pgs: 353 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:07:12 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:07:12 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:07:12 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:07:12.037 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:07:12 compute-1 ceph-mon[80009]: pgmap v1214: 353 pgs: 353 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:07:13 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:07:13 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:07:13 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:07:13.064 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:07:14 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:07:14 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:07:14 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:07:14.041 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:07:14 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:07:15 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:07:15 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:07:15 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:07:15.069 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:07:15 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 10:07:15 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 10:07:15 compute-1 ceph-mon[80009]: pgmap v1215: 353 pgs: 353 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:07:15 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 10:07:16 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:07:16 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:07:16 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:07:16.044 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:07:16 compute-1 nova_compute[230010]: 2025-11-24 10:07:16.290 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:07:16 compute-1 nova_compute[230010]: 2025-11-24 10:07:16.341 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:07:17 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:07:17 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:07:17 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:07:17.074 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:07:17 compute-1 ceph-mon[80009]: pgmap v1216: 353 pgs: 353 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:07:18 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:07:18 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:07:18 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:07:18.048 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:07:19 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:07:19 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:07:19 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:07:19.078 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:07:19 compute-1 sudo[244579]: pam_unix(sudo:session): session closed for user root
Nov 24 10:07:19 compute-1 sshd-session[244578]: Received disconnect from 192.168.122.10 port 46982:11: disconnected by user
Nov 24 10:07:19 compute-1 sshd-session[244578]: Disconnected from user zuul 192.168.122.10 port 46982
Nov 24 10:07:19 compute-1 sshd-session[244574]: pam_unix(sshd:session): session closed for user zuul
Nov 24 10:07:19 compute-1 systemd[1]: session-55.scope: Deactivated successfully.
Nov 24 10:07:19 compute-1 systemd[1]: session-55.scope: Consumed 2min 57.987s CPU time, 780.2M memory peak, read 309.6M from disk, written 232.6M to disk.
Nov 24 10:07:19 compute-1 systemd-logind[823]: Session 55 logged out. Waiting for processes to exit.
Nov 24 10:07:19 compute-1 systemd-logind[823]: Removed session 55.
Nov 24 10:07:19 compute-1 sshd-session[251899]: Accepted publickey for zuul from 192.168.122.10 port 54556 ssh2: ECDSA SHA256:MeSde0OmmlmFVnLWx/OKNxgeUUFhxUB3MA0eUyH5QEE
Nov 24 10:07:19 compute-1 systemd-logind[823]: New session 56 of user zuul.
Nov 24 10:07:19 compute-1 systemd[1]: Started Session 56 of User zuul.
Nov 24 10:07:19 compute-1 sshd-session[251899]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 24 10:07:19 compute-1 sudo[251909]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/cat /var/tmp/sos-osp/sosreport-compute-1-2025-11-24-oylbqdx.tar.xz
Nov 24 10:07:19 compute-1 sudo[251909]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 10:07:19 compute-1 podman[251902]: 2025-11-24 10:07:19.487551448 +0000 UTC m=+0.086092118 container health_status 16798e42088c21440960ac0ba4f86339f9edab5c078277a1dcf4fb01c220329c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=multipathd, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 24 10:07:19 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:07:19 compute-1 sudo[251909]: pam_unix(sudo:session): session closed for user root
Nov 24 10:07:19 compute-1 sshd-session[251903]: Received disconnect from 192.168.122.10 port 54556:11: disconnected by user
Nov 24 10:07:19 compute-1 sshd-session[251903]: Disconnected from user zuul 192.168.122.10 port 54556
Nov 24 10:07:19 compute-1 sshd-session[251899]: pam_unix(sshd:session): session closed for user zuul
Nov 24 10:07:19 compute-1 systemd[1]: session-56.scope: Deactivated successfully.
Nov 24 10:07:19 compute-1 systemd-logind[823]: Session 56 logged out. Waiting for processes to exit.
Nov 24 10:07:19 compute-1 systemd-logind[823]: Removed session 56.
Nov 24 10:07:19 compute-1 sudo[251948]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 10:07:19 compute-1 sudo[251948]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 10:07:19 compute-1 sudo[251948]: pam_unix(sudo:session): session closed for user root
Nov 24 10:07:19 compute-1 sshd-session[251971]: Accepted publickey for zuul from 192.168.122.10 port 54558 ssh2: ECDSA SHA256:MeSde0OmmlmFVnLWx/OKNxgeUUFhxUB3MA0eUyH5QEE
Nov 24 10:07:19 compute-1 sudo[251975]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Nov 24 10:07:19 compute-1 sudo[251975]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 10:07:19 compute-1 systemd-logind[823]: New session 57 of user zuul.
Nov 24 10:07:19 compute-1 ceph-mon[80009]: pgmap v1217: 353 pgs: 353 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:07:19 compute-1 systemd[1]: Started Session 57 of User zuul.
Nov 24 10:07:19 compute-1 sshd-session[251971]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 24 10:07:19 compute-1 sudo[252002]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/rm -rf /var/tmp/sos-osp
Nov 24 10:07:19 compute-1 sudo[252002]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 10:07:19 compute-1 sudo[252002]: pam_unix(sudo:session): session closed for user root
Nov 24 10:07:19 compute-1 sshd-session[252001]: Received disconnect from 192.168.122.10 port 54558:11: disconnected by user
Nov 24 10:07:19 compute-1 sshd-session[252001]: Disconnected from user zuul 192.168.122.10 port 54558
Nov 24 10:07:19 compute-1 sshd-session[251971]: pam_unix(sshd:session): session closed for user zuul
Nov 24 10:07:19 compute-1 systemd[1]: session-57.scope: Deactivated successfully.
Nov 24 10:07:19 compute-1 systemd-logind[823]: Session 57 logged out. Waiting for processes to exit.
Nov 24 10:07:19 compute-1 systemd-logind[823]: Removed session 57.
Nov 24 10:07:19 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 24 10:07:20 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 24 10:07:20 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:07:20 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:07:20 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:07:20.051 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:07:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 10:07:20.071 142336 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 10:07:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 10:07:20.071 142336 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 10:07:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 10:07:20.071 142336 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 10:07:20 compute-1 sudo[251975]: pam_unix(sudo:session): session closed for user root
Nov 24 10:07:20 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 24 10:07:20 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 10:07:20 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Nov 24 10:07:20 compute-1 ceph-mon[80009]: log_channel(audit) log [INF] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 10:07:20 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Nov 24 10:07:20 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Nov 24 10:07:20 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Nov 24 10:07:20 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 10:07:20 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Nov 24 10:07:20 compute-1 ceph-mon[80009]: log_channel(audit) log [INF] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 10:07:20 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 24 10:07:20 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 10:07:21 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 10:07:21 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 10:07:21 compute-1 ceph-mon[80009]: pgmap v1218: 353 pgs: 353 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:07:21 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 10:07:21 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 10:07:21 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 10:07:21 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 10:07:21 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 10:07:21 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 10:07:21 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 10:07:21 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:07:21 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:07:21 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:07:21.081 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:07:21 compute-1 nova_compute[230010]: 2025-11-24 10:07:21.295 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:07:21 compute-1 nova_compute[230010]: 2025-11-24 10:07:21.343 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:07:21 compute-1 nova_compute[230010]: 2025-11-24 10:07:21.765 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 10:07:21 compute-1 nova_compute[230010]: 2025-11-24 10:07:21.765 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 10:07:22 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:07:22 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:07:22 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:07:22.054 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:07:22 compute-1 ceph-mon[80009]: pgmap v1219: 353 pgs: 353 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Nov 24 10:07:22 compute-1 ceph-mon[80009]: pgmap v1220: 353 pgs: 353 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 744 B/s rd, 0 op/s
Nov 24 10:07:23 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:07:23 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:07:23 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:07:23.083 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:07:23 compute-1 nova_compute[230010]: 2025-11-24 10:07:23.778 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 10:07:23 compute-1 nova_compute[230010]: 2025-11-24 10:07:23.778 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 10:07:23 compute-1 nova_compute[230010]: 2025-11-24 10:07:23.779 230014 DEBUG nova.compute.manager [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 24 10:07:24 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:07:24 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:07:24 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:07:24.058 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:07:24 compute-1 ceph-mon[80009]: pgmap v1221: 353 pgs: 353 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:07:24 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:07:25 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:07:25 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:07:25 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:07:25.086 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:07:25 compute-1 ceph-mon[80009]: from='client.? 192.168.122.100:0/297756788' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 10:07:25 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 24 10:07:25 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 24 10:07:25 compute-1 sudo[252062]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 24 10:07:25 compute-1 sudo[252062]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 10:07:25 compute-1 sudo[252062]: pam_unix(sudo:session): session closed for user root
Nov 24 10:07:25 compute-1 nova_compute[230010]: 2025-11-24 10:07:25.765 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 10:07:25 compute-1 nova_compute[230010]: 2025-11-24 10:07:25.766 230014 DEBUG nova.compute.manager [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Nov 24 10:07:25 compute-1 nova_compute[230010]: 2025-11-24 10:07:25.820 230014 DEBUG nova.compute.manager [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Nov 24 10:07:25 compute-1 podman[252086]: 2025-11-24 10:07:25.839935528 +0000 UTC m=+0.161391789 container health_status c4a7fece2a8abbd773f184380ebf0443df5298e1c3e7a580495db5c46749acd2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 24 10:07:26 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:07:26 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:07:26 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:07:26.061 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:07:26 compute-1 ceph-mon[80009]: pgmap v1222: 353 pgs: 353 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 744 B/s rd, 0 op/s
Nov 24 10:07:26 compute-1 ceph-mon[80009]: from='client.? 192.168.122.100:0/1761695698' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 10:07:26 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 10:07:26 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 10:07:26 compute-1 nova_compute[230010]: 2025-11-24 10:07:26.298 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:07:26 compute-1 nova_compute[230010]: 2025-11-24 10:07:26.343 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:07:26 compute-1 nova_compute[230010]: 2025-11-24 10:07:26.813 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 10:07:27 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:07:27 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:07:27 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:07:27.089 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:07:27 compute-1 nova_compute[230010]: 2025-11-24 10:07:27.765 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 10:07:28 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:07:28 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:07:28 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:07:28.064 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:07:28 compute-1 ceph-mon[80009]: pgmap v1223: 353 pgs: 353 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:07:28 compute-1 nova_compute[230010]: 2025-11-24 10:07:28.765 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 10:07:28 compute-1 nova_compute[230010]: 2025-11-24 10:07:28.789 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 10:07:28 compute-1 nova_compute[230010]: 2025-11-24 10:07:28.789 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 10:07:28 compute-1 nova_compute[230010]: 2025-11-24 10:07:28.789 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 10:07:28 compute-1 nova_compute[230010]: 2025-11-24 10:07:28.789 230014 DEBUG nova.compute.resource_tracker [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 24 10:07:28 compute-1 nova_compute[230010]: 2025-11-24 10:07:28.790 230014 DEBUG oslo_concurrency.processutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 10:07:29 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:07:29 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:07:29 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:07:29.093 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:07:29 compute-1 ceph-mon[80009]: from='client.? 192.168.122.102:0/1717278106' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 10:07:29 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 24 10:07:29 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3520934719' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 10:07:29 compute-1 nova_compute[230010]: 2025-11-24 10:07:29.259 230014 DEBUG oslo_concurrency.processutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.469s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
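Annotation: to size its RBD-backed storage, the resource audit shells out to the exact command logged above — ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf — and it cost ~0.47s here. A sketch of consuming that output; the JSON field names (stats.total_bytes, stats.total_avail_bytes) are assumptions based on current Ceph releases, so verify against your cluster:

    import json
    import subprocess

    raw = subprocess.run(
        ["ceph", "df", "--format=json",
         "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"],
        check=True, capture_output=True, text=True,
    ).stdout
    stats = json.loads(raw)["stats"]
    gib = 1024 ** 3
    # Matches the pgmap lines in this log: "60 GiB / 60 GiB avail".
    print(f"total={stats['total_bytes'] / gib:.0f} GiB "
          f"avail={stats['total_avail_bytes'] / gib:.0f} GiB")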
Nov 24 10:07:29 compute-1 nova_compute[230010]: 2025-11-24 10:07:29.471 230014 WARNING nova.virt.libvirt.driver [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 24 10:07:29 compute-1 nova_compute[230010]: 2025-11-24 10:07:29.472 230014 DEBUG nova.compute.resource_tracker [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4886MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 24 10:07:29 compute-1 nova_compute[230010]: 2025-11-24 10:07:29.472 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 10:07:29 compute-1 nova_compute[230010]: 2025-11-24 10:07:29.472 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 10:07:29 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:07:29 compute-1 nova_compute[230010]: 2025-11-24 10:07:29.650 230014 DEBUG nova.compute.resource_tracker [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 24 10:07:29 compute-1 nova_compute[230010]: 2025-11-24 10:07:29.651 230014 DEBUG nova.compute.resource_tracker [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 24 10:07:29 compute-1 nova_compute[230010]: 2025-11-24 10:07:29.682 230014 DEBUG oslo_concurrency.processutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 10:07:29 compute-1 sudo[252138]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 10:07:29 compute-1 sudo[252138]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 10:07:29 compute-1 sudo[252138]: pam_unix(sudo:session): session closed for user root
Nov 24 10:07:30 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:07:30 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:07:30 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:07:30.068 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:07:30 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 24 10:07:30 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4056666050' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 10:07:30 compute-1 nova_compute[230010]: 2025-11-24 10:07:30.143 230014 DEBUG oslo_concurrency.processutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 10:07:30 compute-1 nova_compute[230010]: 2025-11-24 10:07:30.149 230014 DEBUG nova.compute.provider_tree [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Inventory has not changed in ProviderTree for provider: 1b7b0f22-dba8-42a8-9de3-763c9152946e update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 24 10:07:30 compute-1 nova_compute[230010]: 2025-11-24 10:07:30.168 230014 DEBUG nova.scheduler.client.report [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Inventory has not changed for provider 1b7b0f22-dba8-42a8-9de3-763c9152946e based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
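Annotation: the inventory dict above fixes what placement will actually schedule: capacity per resource class is (total - reserved) * allocation_ratio, so this node advertises 8 * 4.0 = 32 VCPU, (7680 - 512) * 1.0 = 7168 MB of RAM, and (59 - 1) * 0.9 = 52.2 GB of disk. Worked directly from the logged values:

    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7680, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        cap = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(f"{rc}: {cap:g} schedulable")
    # VCPU: 32, MEMORY_MB: 7168, DISK_GB: 52.2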
Nov 24 10:07:30 compute-1 nova_compute[230010]: 2025-11-24 10:07:30.170 230014 DEBUG nova.compute.resource_tracker [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 24 10:07:30 compute-1 nova_compute[230010]: 2025-11-24 10:07:30.170 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.698s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 10:07:30 compute-1 ceph-mon[80009]: pgmap v1224: 353 pgs: 353 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:07:30 compute-1 ceph-mon[80009]: from='client.? 192.168.122.101:0/3520934719' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 10:07:30 compute-1 ceph-mon[80009]: from='client.? 192.168.122.101:0/4056666050' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 10:07:30 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 10:07:30 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
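Annotation: the mgr's recurring "osd blocklist ls" audit entries (here and every ~15s below) are ordinary mon commands; any sufficiently authorized client can issue the same call through the librados Python binding. A minimal sketch, with the client name and conf path copied from this log and error handling omitted:

    import json
    import rados

    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf",
                          name="client.openstack")
    cluster.connect()
    cmd = json.dumps({"prefix": "osd blocklist ls", "format": "json"})
    # mon_command returns (retcode, output bytes, error/status string).
    ret, out, errs = cluster.mon_command(cmd, b"")
    print(ret, out.decode() or errs)
    cluster.shutdown()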
Nov 24 10:07:31 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:07:31 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:07:31 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:07:31.095 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:07:31 compute-1 nova_compute[230010]: 2025-11-24 10:07:31.170 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 10:07:31 compute-1 nova_compute[230010]: 2025-11-24 10:07:31.192 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 10:07:31 compute-1 nova_compute[230010]: 2025-11-24 10:07:31.193 230014 DEBUG nova.compute.manager [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 24 10:07:31 compute-1 nova_compute[230010]: 2025-11-24 10:07:31.194 230014 DEBUG nova.compute.manager [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 24 10:07:31 compute-1 ceph-mon[80009]: from='client.? 192.168.122.102:0/1905884823' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 10:07:31 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 10:07:31 compute-1 nova_compute[230010]: 2025-11-24 10:07:31.213 230014 DEBUG nova.compute.manager [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 24 10:07:31 compute-1 nova_compute[230010]: 2025-11-24 10:07:31.214 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 10:07:31 compute-1 nova_compute[230010]: 2025-11-24 10:07:31.301 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:07:31 compute-1 nova_compute[230010]: 2025-11-24 10:07:31.347 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:07:32 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:07:32 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:07:32 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:07:32.072 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:07:32 compute-1 ceph-mon[80009]: pgmap v1225: 353 pgs: 353 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Nov 24 10:07:32 compute-1 nova_compute[230010]: 2025-11-24 10:07:32.766 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 10:07:33 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:07:33 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:07:33 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:07:33.098 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:07:33 compute-1 podman[252187]: 2025-11-24 10:07:33.315455529 +0000 UTC m=+0.050069198 container health_status 6794d9d2f16f865643977633ea2bfec0506af6c4f15262c278b71a4c0754ea0b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2)
Nov 24 10:07:34 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:07:34 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:07:34 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:07:34.075 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:07:34 compute-1 ceph-mon[80009]: pgmap v1226: 353 pgs: 353 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:07:34 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:07:35 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:07:35 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:07:35 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:07:35.100 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:07:36 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:07:36 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:07:36 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:07:36.078 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:07:36 compute-1 ceph-mon[80009]: pgmap v1227: 353 pgs: 353 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:07:36 compute-1 nova_compute[230010]: 2025-11-24 10:07:36.304 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:07:36 compute-1 nova_compute[230010]: 2025-11-24 10:07:36.348 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:07:37 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:07:37 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.002000048s ======
Nov 24 10:07:37 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:07:37.102 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000048s
Nov 24 10:07:37 compute-1 nova_compute[230010]: 2025-11-24 10:07:37.764 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 10:07:37 compute-1 nova_compute[230010]: 2025-11-24 10:07:37.765 230014 DEBUG nova.compute.manager [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Nov 24 10:07:38 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:07:38 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:07:38 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:07:38.081 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:07:38 compute-1 ceph-mon[80009]: pgmap v1228: 353 pgs: 353 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:07:39 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:07:39 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:07:39 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:07:39.107 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:07:39 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:07:40 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:07:40 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:07:40 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:07:40.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:07:40 compute-1 ceph-mon[80009]: pgmap v1229: 353 pgs: 353 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:07:41 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:07:41 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:07:41 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:07:41.110 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:07:41 compute-1 nova_compute[230010]: 2025-11-24 10:07:41.307 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:07:41 compute-1 nova_compute[230010]: 2025-11-24 10:07:41.350 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:07:42 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:07:42 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:07:42 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:07:42.088 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:07:42 compute-1 ceph-mon[80009]: pgmap v1230: 353 pgs: 353 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:07:43 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:07:43 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:07:43 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:07:43.113 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:07:44 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:07:44 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:07:44 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:07:44.091 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:07:44 compute-1 ceph-mon[80009]: pgmap v1231: 353 pgs: 353 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:07:44 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:07:45 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:07:45 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:07:45 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:07:45.116 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:07:45 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 10:07:45 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 10:07:46 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:07:46 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:07:46 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:07:46.095 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:07:46 compute-1 nova_compute[230010]: 2025-11-24 10:07:46.311 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:07:46 compute-1 ceph-mon[80009]: pgmap v1232: 353 pgs: 353 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:07:46 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 10:07:46 compute-1 nova_compute[230010]: 2025-11-24 10:07:46.352 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:07:47 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:07:47 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:07:47 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:07:47.120 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:07:48 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:07:48 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:07:48 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:07:48.099 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:07:48 compute-1 ceph-mon[80009]: pgmap v1233: 353 pgs: 353 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:07:48 compute-1 sshd-session[252214]: Invalid user openvpn from 164.92.213.168 port 34128
Nov 24 10:07:48 compute-1 sshd-session[252214]: Connection closed by invalid user openvpn 164.92.213.168 port 34128 [preauth]
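Annotation: the "Invalid user openvpn ... [preauth]" pair above is an Internet-side SSH credential probe (a second one, for user "firedancer" from 80.94.92.165, follows at 10:08:09). A quick triage sketch for tallying such attempts per source address from a saved journal export; the input path is illustrative:

    import re
    from collections import Counter

    pat = re.compile(r"Invalid user (\S+) from (\S+) port \d+")
    hits = Counter()
    with open("sshd.log") as fh:  # e.g. output of: journalctl -t sshd-session
        for line in fh:
            m = pat.search(line)
            if m:
                hits[m.group(2)] += 1  # count by source IP
    for ip, n in hits.most_common(10):
        print(f"{ip}\t{n}")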
Nov 24 10:07:49 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:07:49 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:07:49 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:07:49.124 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:07:49 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:07:49 compute-1 sudo[252216]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 10:07:49 compute-1 sudo[252216]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 10:07:49 compute-1 sudo[252216]: pam_unix(sudo:session): session closed for user root
Nov 24 10:07:49 compute-1 podman[252240]: 2025-11-24 10:07:49.925557546 +0000 UTC m=+0.066746750 container health_status 16798e42088c21440960ac0ba4f86339f9edab5c078277a1dcf4fb01c220329c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2)
Nov 24 10:07:50 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:07:50 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:07:50 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:07:50.103 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:07:50 compute-1 ceph-mon[80009]: pgmap v1234: 353 pgs: 353 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:07:51 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:07:51 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:07:51 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:07:51.127 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:07:51 compute-1 nova_compute[230010]: 2025-11-24 10:07:51.314 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:07:51 compute-1 nova_compute[230010]: 2025-11-24 10:07:51.355 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:07:52 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:07:52 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 24 10:07:52 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:07:52.106 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 24 10:07:52 compute-1 ceph-mon[80009]: pgmap v1235: 353 pgs: 353 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:07:53 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:07:53 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:07:53 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:07:53.131 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:07:54 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:07:54 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:07:54 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:07:54.109 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:07:54 compute-1 ceph-mon[80009]: pgmap v1236: 353 pgs: 353 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:07:54 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:07:55 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:07:55 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:07:55 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:07:55.134 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:07:56 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:07:56 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:07:56 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:07:56.113 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:07:56 compute-1 nova_compute[230010]: 2025-11-24 10:07:56.320 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:07:56 compute-1 nova_compute[230010]: 2025-11-24 10:07:56.357 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:07:56 compute-1 podman[252265]: 2025-11-24 10:07:56.405626682 +0000 UTC m=+0.129551320 container health_status c4a7fece2a8abbd773f184380ebf0443df5298e1c3e7a580495db5c46749acd2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 10:07:56 compute-1 ceph-mon[80009]: pgmap v1237: 353 pgs: 353 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:07:57 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:07:57 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:07:57 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:07:57.138 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:07:58 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:07:58 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:07:58 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:07:58.118 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:07:58 compute-1 ceph-mon[80009]: pgmap v1238: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 102 KiB/s rd, 0 B/s wr, 168 op/s
Nov 24 10:07:59 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:07:59 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 10:07:59 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:07:59.143 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 10:07:59 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:08:00 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:08:00 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:08:00 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:08:00.121 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:08:00 compute-1 ceph-mon[80009]: pgmap v1239: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 101 KiB/s rd, 0 B/s wr, 168 op/s
Nov 24 10:08:00 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 10:08:00 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 10:08:01 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:08:01 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:08:01 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:08:01.146 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:08:01 compute-1 sshd-session[252293]: Connection closed by 159.65.46.209 port 46538
Nov 24 10:08:01 compute-1 nova_compute[230010]: 2025-11-24 10:08:01.326 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:08:01 compute-1 nova_compute[230010]: 2025-11-24 10:08:01.358 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:08:01 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 10:08:02 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:08:02 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:08:02 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:08:02.124 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:08:02 compute-1 ceph-mon[80009]: pgmap v1240: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 101 KiB/s rd, 0 B/s wr, 168 op/s
Nov 24 10:08:02 compute-1 ceph-mon[80009]: from='client.? 192.168.122.10:0/2332319195' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 24 10:08:02 compute-1 ceph-mon[80009]: from='client.? 192.168.122.10:0/2332319195' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 24 10:08:03 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:08:03 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:08:03 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:08:03.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:08:04 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:08:04 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:08:04 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:08:04.128 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:08:04 compute-1 podman[252296]: 2025-11-24 10:08:04.316229622 +0000 UTC m=+0.059673895 container health_status 6794d9d2f16f865643977633ea2bfec0506af6c4f15262c278b71a4c0754ea0b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS)
Nov 24 10:08:04 compute-1 ceph-mon[80009]: pgmap v1241: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 102 KiB/s rd, 0 B/s wr, 168 op/s
Nov 24 10:08:04 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:08:05 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:08:05 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:08:05 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:08:05.151 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:08:06 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:08:06 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:08:06 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:08:06.131 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:08:06 compute-1 nova_compute[230010]: 2025-11-24 10:08:06.329 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:08:06 compute-1 nova_compute[230010]: 2025-11-24 10:08:06.360 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:08:06 compute-1 ceph-mon[80009]: pgmap v1242: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 101 KiB/s rd, 0 B/s wr, 168 op/s
Nov 24 10:08:07 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:08:07 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:08:07 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:08:07.155 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:08:08 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:08:08 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:08:08 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:08:08.134 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:08:08 compute-1 ceph-mon[80009]: pgmap v1243: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 102 KiB/s rd, 0 B/s wr, 168 op/s
Nov 24 10:08:09 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:08:09 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:08:09 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:08:09.159 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:08:09 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:08:09 compute-1 sshd-session[252318]: Invalid user firedancer from 80.94.92.165 port 56016
Nov 24 10:08:09 compute-1 sshd-session[252318]: Connection closed by invalid user firedancer 80.94.92.165 port 56016 [preauth]
Nov 24 10:08:09 compute-1 sudo[252320]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 10:08:09 compute-1 sudo[252320]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 10:08:09 compute-1 sudo[252320]: pam_unix(sudo:session): session closed for user root
Nov 24 10:08:10 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:08:10 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:08:10 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:08:10.137 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:08:10 compute-1 ceph-mon[80009]: pgmap v1244: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:08:11 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:08:11 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:08:11 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:08:11.162 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:08:11 compute-1 nova_compute[230010]: 2025-11-24 10:08:11.334 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:08:11 compute-1 nova_compute[230010]: 2025-11-24 10:08:11.363 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:08:12 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:08:12 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:08:12 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:08:12.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:08:12 compute-1 ceph-mon[80009]: pgmap v1245: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:08:13 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:08:13 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:08:13 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:08:13.165 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:08:14 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:08:14 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:08:14 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:08:14.213 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:08:14 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:08:14 compute-1 ceph-mon[80009]: pgmap v1246: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:08:15 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:08:15 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:08:15 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:08:15.169 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:08:15 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 10:08:15 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 10:08:15 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 10:08:16 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:08:16 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:08:16 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:08:16.215 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:08:16 compute-1 nova_compute[230010]: 2025-11-24 10:08:16.338 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:08:16 compute-1 nova_compute[230010]: 2025-11-24 10:08:16.365 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:08:16 compute-1 ceph-mon[80009]: pgmap v1247: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:08:17 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:08:17 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:08:17 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:08:17.172 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:08:17 compute-1 ceph-mon[80009]: pgmap v1248: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:08:18 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:08:18 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:08:18 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:08:18.218 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:08:19 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:08:19 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:08:19 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:08:19.175 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:08:19 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:08:19 compute-1 ceph-mon[80009]: pgmap v1249: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:08:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 10:08:20.074 142336 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 10:08:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 10:08:20.074 142336 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 10:08:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 10:08:20.074 142336 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 10:08:20 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:08:20 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:08:20 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:08:20.221 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:08:20 compute-1 podman[252351]: 2025-11-24 10:08:20.340635648 +0000 UTC m=+0.074915742 container health_status 16798e42088c21440960ac0ba4f86339f9edab5c078277a1dcf4fb01c220329c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
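podman emits one health_status event per scheduled run of the container's configured check (here 'test': '/openstack/healthcheck', mounted read-only into the container), and health_failing_streak resets to 0 on success. The same check can be triggered on demand; a small sketch, assuming podman is on PATH and the container name matches the log:

import subprocess

# On-demand run of the configured check that podman logged above;
# `podman healthcheck run` exits 0 when the container's test passes.
rc = subprocess.run(["podman", "healthcheck", "run", "multipathd"]).returncode
print("healthy" if rc == 0 else f"unhealthy (rc={rc})")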
Nov 24 10:08:21 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:08:21 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:08:21 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:08:21.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:08:21 compute-1 nova_compute[230010]: 2025-11-24 10:08:21.355 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:08:21 compute-1 nova_compute[230010]: 2025-11-24 10:08:21.368 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:08:21 compute-1 ceph-mon[80009]: pgmap v1250: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:08:21 compute-1 nova_compute[230010]: 2025-11-24 10:08:21.774 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 10:08:22 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:08:22 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:08:22 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:08:22.224 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:08:23 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:08:23 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:08:23 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:08:23.179 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:08:23 compute-1 nova_compute[230010]: 2025-11-24 10:08:23.765 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 10:08:23 compute-1 ceph-mon[80009]: pgmap v1251: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:08:24 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:08:24 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:08:24 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:08:24.228 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:08:24 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:08:24 compute-1 nova_compute[230010]: 2025-11-24 10:08:24.765 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 10:08:24 compute-1 nova_compute[230010]: 2025-11-24 10:08:24.766 230014 DEBUG nova.compute.manager [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 24 10:08:25 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:08:25 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:08:25 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:08:25.182 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:08:25 compute-1 ceph-mon[80009]: pgmap v1252: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:08:25 compute-1 sudo[252374]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 10:08:25 compute-1 sudo[252374]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 10:08:25 compute-1 sudo[252374]: pam_unix(sudo:session): session closed for user root
Nov 24 10:08:25 compute-1 sudo[252399]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 check-host
Nov 24 10:08:25 compute-1 sudo[252399]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 10:08:26 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:08:26 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:08:26 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:08:26.231 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:08:26 compute-1 sudo[252399]: pam_unix(sudo:session): session closed for user root
Nov 24 10:08:26 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Nov 24 10:08:26 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Nov 24 10:08:26 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Nov 24 10:08:26 compute-1 nova_compute[230010]: 2025-11-24 10:08:26.358 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:08:26 compute-1 nova_compute[230010]: 2025-11-24 10:08:26.369 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:08:26 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Nov 24 10:08:26 compute-1 sudo[252445]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 10:08:26 compute-1 sudo[252445]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 10:08:26 compute-1 sudo[252445]: pam_unix(sudo:session): session closed for user root
Nov 24 10:08:26 compute-1 sudo[252476]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Nov 24 10:08:26 compute-1 sudo[252476]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 10:08:26 compute-1 podman[252469]: 2025-11-24 10:08:26.654363452 +0000 UTC m=+0.104712218 container health_status c4a7fece2a8abbd773f184380ebf0443df5298e1c3e7a580495db5c46749acd2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 24 10:08:27 compute-1 sudo[252476]: pam_unix(sudo:session): session closed for user root
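The sudo session pairs above are the mgr's cephadm module working over SSH as ceph-admin: it locates python3, then runs the bundled cephadm binary with check-host and gather-facts against the cluster fsid. The same gather-facts call can be made by hand; a sketch with the binary path and timeout copied from the log (that the output is a JSON facts document with a "hostname" key is an assumption, not taken from cephadm documentation):

import json, subprocess

CEPHADM = ("/var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/"
           "cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36")

# Same invocation the mgr performs above, run interactively.
out = subprocess.run(
    ["sudo", "/bin/python3", CEPHADM, "--timeout", "895", "gather-facts"],
    capture_output=True, text=True, check=True,
).stdout
print(json.loads(out).get("hostname"))  # assumed key in the facts document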
Nov 24 10:08:27 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:08:27 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:08:27 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:08:27.184 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:08:27 compute-1 ceph-mon[80009]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #70. Immutable memtables: 0.
Nov 24 10:08:27 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-10:08:27.271160) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 24 10:08:27 compute-1 ceph-mon[80009]: rocksdb: [db/flush_job.cc:856] [default] [JOB 41] Flushing memtable with next log file: 70
Nov 24 10:08:27 compute-1 ceph-mon[80009]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763978907271211, "job": 41, "event": "flush_started", "num_memtables": 1, "num_entries": 1080, "num_deletes": 251, "total_data_size": 2584771, "memory_usage": 2606440, "flush_reason": "Manual Compaction"}
Nov 24 10:08:27 compute-1 ceph-mon[80009]: rocksdb: [db/flush_job.cc:885] [default] [JOB 41] Level-0 flush table #71: started
Nov 24 10:08:27 compute-1 ceph-mon[80009]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763978907297700, "cf_name": "default", "job": 41, "event": "table_file_creation", "file_number": 71, "file_size": 1115589, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 36858, "largest_seqno": 37933, "table_properties": {"data_size": 1111576, "index_size": 1665, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1285, "raw_key_size": 10536, "raw_average_key_size": 20, "raw_value_size": 1103012, "raw_average_value_size": 2197, "num_data_blocks": 71, "num_entries": 502, "num_filter_entries": 502, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763978823, "oldest_key_time": 1763978823, "file_creation_time": 1763978907, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "299c38d0-06ca-4074-b462-97cee3c14bc3", "db_session_id": "IKBI0BILOO7CZC90TSBP", "orig_file_number": 71, "seqno_to_time_mapping": "N/A"}}
Nov 24 10:08:27 compute-1 ceph-mon[80009]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 41] Flush lasted 26581 microseconds, and 4161 cpu microseconds.
Nov 24 10:08:27 compute-1 ceph-mon[80009]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 24 10:08:27 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 24 10:08:27 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 10:08:27 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Nov 24 10:08:27 compute-1 ceph-mon[80009]: log_channel(audit) log [INF] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 10:08:27 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Nov 24 10:08:27 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-10:08:27.297746) [db/flush_job.cc:967] [default] [JOB 41] Level-0 flush table #71: 1115589 bytes OK
Nov 24 10:08:27 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-10:08:27.297770) [db/memtable_list.cc:519] [default] Level-0 commit table #71 started
Nov 24 10:08:27 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-10:08:27.320276) [db/memtable_list.cc:722] [default] Level-0 commit table #71: memtable #1 done
Nov 24 10:08:27 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-10:08:27.320306) EVENT_LOG_v1 {"time_micros": 1763978907320298, "job": 41, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 24 10:08:27 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-10:08:27.320338) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 24 10:08:27 compute-1 ceph-mon[80009]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 41] Try to delete WAL files size 2579468, prev total WAL file size 2579468, number of live WAL files 2.
Nov 24 10:08:27 compute-1 ceph-mon[80009]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000067.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 10:08:27 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-10:08:27.321610) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740031303037' seq:72057594037927935, type:22 .. '6D6772737461740031323539' seq:0, type:0; will stop at (end)
Nov 24 10:08:27 compute-1 ceph-mon[80009]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 42] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 24 10:08:27 compute-1 ceph-mon[80009]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 41 Base level 0, inputs: [71(1089KB)], [69(14MB)]
Nov 24 10:08:27 compute-1 ceph-mon[80009]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763978907321693, "job": 42, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [71], "files_L6": [69], "score": -1, "input_data_size": 15812578, "oldest_snapshot_seqno": -1}
Nov 24 10:08:27 compute-1 ceph-mon[80009]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 42] Generated table #72: 6567 keys, 12256014 bytes, temperature: kUnknown
Nov 24 10:08:27 compute-1 ceph-mon[80009]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763978907382081, "cf_name": "default", "job": 42, "event": "table_file_creation", "file_number": 72, "file_size": 12256014, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12215273, "index_size": 23221, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16453, "raw_key_size": 172778, "raw_average_key_size": 26, "raw_value_size": 12100073, "raw_average_value_size": 1842, "num_data_blocks": 909, "num_entries": 6567, "num_filter_entries": 6567, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763976422, "oldest_key_time": 0, "file_creation_time": 1763978907, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "299c38d0-06ca-4074-b462-97cee3c14bc3", "db_session_id": "IKBI0BILOO7CZC90TSBP", "orig_file_number": 72, "seqno_to_time_mapping": "N/A"}}
Nov 24 10:08:27 compute-1 ceph-mon[80009]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 24 10:08:27 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-10:08:27.382594) [db/compaction/compaction_job.cc:1663] [default] [JOB 42] Compacted 1@0 + 1@6 files to L6 => 12256014 bytes
Nov 24 10:08:27 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-10:08:27.409197) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 261.2 rd, 202.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.1, 14.0 +0.0 blob) out(11.7 +0.0 blob), read-write-amplify(25.2) write-amplify(11.0) OK, records in: 7052, records dropped: 485 output_compression: NoCompression
Nov 24 10:08:27 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-10:08:27.409218) EVENT_LOG_v1 {"time_micros": 1763978907409209, "job": 42, "event": "compaction_finished", "compaction_time_micros": 60529, "compaction_time_cpu_micros": 25177, "output_level": 6, "num_output_files": 1, "total_output_size": 12256014, "num_input_records": 7052, "num_output_records": 6567, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 24 10:08:27 compute-1 ceph-mon[80009]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000071.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 10:08:27 compute-1 ceph-mon[80009]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763978907409666, "job": 42, "event": "table_file_deletion", "file_number": 71}
Nov 24 10:08:27 compute-1 ceph-mon[80009]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000069.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 10:08:27 compute-1 ceph-mon[80009]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763978907412127, "job": 42, "event": "table_file_deletion", "file_number": 69}
Nov 24 10:08:27 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-10:08:27.321491) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 10:08:27 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-10:08:27.412248) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 10:08:27 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-10:08:27.412255) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 10:08:27 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-10:08:27.412258) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 10:08:27 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-10:08:27.412260) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 10:08:27 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-10:08:27.412261) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
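The rocksdb burst above is one manual compaction cycle on the monitor store: JOB 41 flushes a ~2.5 MB memtable to L0 table #71 (1,115,589 bytes), JOB 42 compacts #71 with the existing L6 table #69 into #72 (12,256,014 bytes; 7052 records in, 485 dropped), and then both input tables and the old WAL are deleted. The logged write-amplify(11.0) is bytes written (~11.7 MB) over new data (~1.1 MB). Since the EVENT_LOG_v1 payloads are plain JSON, such stats can be tabulated straight from the journal; a sketch, assuming the line layout shown here:

import json, re

EVENT_RE = re.compile(r'EVENT_LOG_v1 (\{.*\})\s*$')

def rocksdb_events(lines):
    # Yield the JSON payload of every EVENT_LOG_v1 line.
    for line in lines:
        m = EVENT_RE.search(line)
        if m:
            yield json.loads(m.group(1))

sample = ('rocksdb: EVENT_LOG_v1 {"job": 42, "event": "compaction_finished", '
          '"num_input_records": 7052, "num_output_records": 6567, '
          '"total_output_size": 12256014}')
for ev in rocksdb_events([sample]):
    if ev["event"] == "compaction_finished":
        dropped = ev["num_input_records"] - ev["num_output_records"]
        print(f'job {ev["job"]}: {ev["total_output_size"]} bytes out, '
              f'{dropped} records dropped')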
Nov 24 10:08:27 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Nov 24 10:08:27 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 10:08:27 compute-1 ceph-mon[80009]: from='client.? 192.168.122.100:0/1115580703' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 10:08:27 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 10:08:27 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 10:08:27 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 10:08:27 compute-1 ceph-mon[80009]: from='client.? 192.168.122.100:0/1085546858' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 10:08:27 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 10:08:27 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 10:08:27 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Nov 24 10:08:27 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 10:08:27 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Nov 24 10:08:27 compute-1 ceph-mon[80009]: log_channel(audit) log [INF] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 10:08:27 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 24 10:08:27 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 10:08:27 compute-1 nova_compute[230010]: 2025-11-24 10:08:27.760 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 10:08:28 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:08:28 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:08:28 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:08:28.234 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:08:28 compute-1 ceph-mon[80009]: pgmap v1253: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:08:28 compute-1 ceph-mon[80009]: pgmap v1254: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Nov 24 10:08:28 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 10:08:28 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 10:08:28 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 10:08:28 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 10:08:28 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 10:08:28 compute-1 nova_compute[230010]: 2025-11-24 10:08:28.764 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 10:08:29 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:08:29 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:08:29 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:08:29.187 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:08:29 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:08:29 compute-1 ceph-mon[80009]: from='client.? 192.168.122.102:0/820330601' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 10:08:29 compute-1 nova_compute[230010]: 2025-11-24 10:08:29.764 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 10:08:29 compute-1 nova_compute[230010]: 2025-11-24 10:08:29.793 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 10:08:29 compute-1 nova_compute[230010]: 2025-11-24 10:08:29.793 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 10:08:29 compute-1 nova_compute[230010]: 2025-11-24 10:08:29.793 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 10:08:29 compute-1 nova_compute[230010]: 2025-11-24 10:08:29.793 230014 DEBUG nova.compute.resource_tracker [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 24 10:08:29 compute-1 nova_compute[230010]: 2025-11-24 10:08:29.794 230014 DEBUG oslo_concurrency.processutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 10:08:30 compute-1 sudo[252573]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 10:08:30 compute-1 sudo[252573]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 10:08:30 compute-1 sudo[252573]: pam_unix(sudo:session): session closed for user root
Nov 24 10:08:30 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:08:30 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:08:30 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:08:30.237 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:08:30 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 24 10:08:30 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/463285442' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 10:08:30 compute-1 nova_compute[230010]: 2025-11-24 10:08:30.264 230014 DEBUG oslo_concurrency.processutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 10:08:30 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 10:08:30 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 10:08:30 compute-1 nova_compute[230010]: 2025-11-24 10:08:30.454 230014 WARNING nova.virt.libvirt.driver [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 24 10:08:30 compute-1 nova_compute[230010]: 2025-11-24 10:08:30.455 230014 DEBUG nova.compute.resource_tracker [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4856MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 24 10:08:30 compute-1 nova_compute[230010]: 2025-11-24 10:08:30.456 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 10:08:30 compute-1 nova_compute[230010]: 2025-11-24 10:08:30.456 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 10:08:30 compute-1 ceph-mon[80009]: pgmap v1255: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Nov 24 10:08:30 compute-1 ceph-mon[80009]: from='client.? 192.168.122.102:0/1198663139' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 10:08:30 compute-1 ceph-mon[80009]: from='client.? 192.168.122.101:0/463285442' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 10:08:30 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 10:08:30 compute-1 nova_compute[230010]: 2025-11-24 10:08:30.599 230014 DEBUG nova.compute.resource_tracker [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 24 10:08:30 compute-1 nova_compute[230010]: 2025-11-24 10:08:30.599 230014 DEBUG nova.compute.resource_tracker [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 24 10:08:30 compute-1 nova_compute[230010]: 2025-11-24 10:08:30.620 230014 DEBUG oslo_concurrency.processutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 10:08:31 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 24 10:08:31 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3228508666' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 10:08:31 compute-1 nova_compute[230010]: 2025-11-24 10:08:31.071 230014 DEBUG oslo_concurrency.processutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 10:08:31 compute-1 nova_compute[230010]: 2025-11-24 10:08:31.078 230014 DEBUG nova.compute.provider_tree [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Inventory has not changed in ProviderTree for provider: 1b7b0f22-dba8-42a8-9de3-763c9152946e update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 24 10:08:31 compute-1 nova_compute[230010]: 2025-11-24 10:08:31.094 230014 DEBUG nova.scheduler.client.report [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Inventory has not changed for provider 1b7b0f22-dba8-42a8-9de3-763c9152946e based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 24 10:08:31 compute-1 nova_compute[230010]: 2025-11-24 10:08:31.097 230014 DEBUG nova.compute.resource_tracker [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 24 10:08:31 compute-1 nova_compute[230010]: 2025-11-24 10:08:31.097 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.641s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
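The audit above is nova's update_available_resource periodic task end to end: it shells out to ceph df --format=json (twice here, 0.470s and 0.451s) to size the shared Ceph storage, rebuilds the hypervisor resource view, and confirms placement inventory is unchanged, all under the compute_resources lock (held 0.641s). With the logged allocation ratios this inventory schedules as (8 - 0) x 4.0 = 32 vCPUs, (7680 - 512) x 1.0 = 7168 MB of RAM, and (59 - 1) x 0.9 = 52.2 GB of disk. A sketch of the same ceph df call; the total_bytes/total_avail_bytes key names under "stats" are assumed from ceph df's JSON output rather than confirmed by this log:

import json, subprocess

# Minimal sketch of the command nova logs above.
raw = subprocess.check_output(
    ["ceph", "df", "--format=json", "--id", "openstack",
     "--conf", "/etc/ceph/ceph.conf"]
)
stats = json.loads(raw)["stats"]  # assumed keys: total_bytes, total_avail_bytes
print(f'{stats["total_avail_bytes"] / 1024**3:.1f} GiB free '
      f'of {stats["total_bytes"] / 1024**3:.1f} GiB')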
Nov 24 10:08:31 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:08:31 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:08:31 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:08:31.190 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:08:31 compute-1 nova_compute[230010]: 2025-11-24 10:08:31.362 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:08:31 compute-1 nova_compute[230010]: 2025-11-24 10:08:31.373 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:08:31 compute-1 nova_compute[230010]: 2025-11-24 10:08:31.498 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 10:08:31 compute-1 nova_compute[230010]: 2025-11-24 10:08:31.499 230014 DEBUG nova.compute.manager [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 24 10:08:31 compute-1 nova_compute[230010]: 2025-11-24 10:08:31.499 230014 DEBUG nova.compute.manager [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 24 10:08:31 compute-1 nova_compute[230010]: 2025-11-24 10:08:31.517 230014 DEBUG nova.compute.manager [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 24 10:08:31 compute-1 nova_compute[230010]: 2025-11-24 10:08:31.517 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 10:08:31 compute-1 nova_compute[230010]: 2025-11-24 10:08:31.519 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 10:08:31 compute-1 ceph-mon[80009]: from='client.? 192.168.122.101:0/3228508666' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 10:08:32 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 24 10:08:32 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 24 10:08:32 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:08:32 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:08:32 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:08:32.241 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:08:32 compute-1 sudo[252624]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 24 10:08:32 compute-1 sudo[252624]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 10:08:32 compute-1 sudo[252624]: pam_unix(sudo:session): session closed for user root
Nov 24 10:08:32 compute-1 ceph-mon[80009]: pgmap v1256: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Nov 24 10:08:32 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 10:08:32 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 10:08:32 compute-1 nova_compute[230010]: 2025-11-24 10:08:32.779 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 10:08:33 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:08:33 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:08:33 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:08:33.193 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:08:34 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:08:34 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:08:34 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:08:34.244 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:08:34 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:08:34 compute-1 ceph-mon[80009]: pgmap v1257: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Nov 24 10:08:35 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:08:35 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:08:35 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:08:35.195 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:08:35 compute-1 podman[252650]: 2025-11-24 10:08:35.318540264 +0000 UTC m=+0.056500386 container health_status 6794d9d2f16f865643977633ea2bfec0506af6c4f15262c278b71a4c0754ea0b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 24 10:08:36 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:08:36 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:08:36 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:08:36.247 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:08:36 compute-1 nova_compute[230010]: 2025-11-24 10:08:36.365 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:08:36 compute-1 nova_compute[230010]: 2025-11-24 10:08:36.374 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:08:36 compute-1 ceph-mon[80009]: pgmap v1258: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Nov 24 10:08:37 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:08:37 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:08:37 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:08:37.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:08:37 compute-1 ceph-mon[80009]: pgmap v1259: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Nov 24 10:08:38 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:08:38 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:08:38 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:08:38.251 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:08:39 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:08:39 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:08:39 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:08:39.201 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:08:39 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:08:40 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:08:40 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:08:40 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:08:40.255 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:08:40 compute-1 ceph-mon[80009]: pgmap v1260: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:08:41 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:08:41 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:08:41 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:08:41.206 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:08:41 compute-1 nova_compute[230010]: 2025-11-24 10:08:41.367 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:08:41 compute-1 nova_compute[230010]: 2025-11-24 10:08:41.377 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:08:42 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:08:42 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:08:42 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:08:42.259 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:08:42 compute-1 ceph-mon[80009]: pgmap v1261: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:08:43 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:08:43 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:08:43 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:08:43.209 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:08:44 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:08:44 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:08:44 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:08:44.263 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:08:44 compute-1 ceph-mon[80009]: pgmap v1262: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:08:44 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:08:45 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:08:45 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:08:45 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:08:45.212 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:08:45 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 10:08:45 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 10:08:45 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 10:08:46 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:08:46 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:08:46 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:08:46.267 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:08:46 compute-1 nova_compute[230010]: 2025-11-24 10:08:46.371 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:08:46 compute-1 nova_compute[230010]: 2025-11-24 10:08:46.379 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:08:46 compute-1 ceph-mon[80009]: pgmap v1263: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:08:47 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:08:47 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:08:47 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:08:47.216 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:08:48 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:08:48 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:08:48 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:08:48.270 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:08:48 compute-1 ceph-mon[80009]: pgmap v1264: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:08:49 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:08:49 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:08:49 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:08:49.219 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:08:49 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:08:50 compute-1 sudo[252678]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 10:08:50 compute-1 sudo[252678]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 10:08:50 compute-1 sudo[252678]: pam_unix(sudo:session): session closed for user root
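The `COMMAND=/bin/true` sudo triplets recur about every 20 seconds: cephadm verifying over SSH that the ceph-admin user still has passwordless root on the host. The probe is easy to reproduce by hand; a minimal sketch (host and user names are illustrative, and `-n` is added here so a password prompt fails fast instead of hanging):

```python
import subprocess

def has_passwordless_sudo(host: str, user: str = "ceph-admin") -> bool:
    """Run the same no-op probe as the sudo entries above."""
    result = subprocess.run(
        # -n makes sudo fail instead of prompting if NOPASSWD is gone.
        ["ssh", f"{user}@{host}", "sudo", "-n", "/bin/true"],
        capture_output=True,
        timeout=10,
    )
    return result.returncode == 0

print(has_passwordless_sudo("compute-1"))
```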
Nov 24 10:08:50 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:08:50 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:08:50 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:08:50.274 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:08:50 compute-1 ceph-mon[80009]: pgmap v1265: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:08:51 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:08:51 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:08:51 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:08:51.223 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:08:51 compute-1 podman[252703]: 2025-11-24 10:08:51.373546707 +0000 UTC m=+0.106872051 container health_status 16798e42088c21440960ac0ba4f86339f9edab5c078277a1dcf4fb01c220329c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=multipathd, org.label-schema.build-date=20251118)
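The podman `health_status=healthy` events come from each container's configured healthcheck (the `/openstack/healthcheck` script mounted at `/openstack`, per the config_data in the event above). The current status can be read back with `podman inspect`; a minimal sketch that parses the JSON rather than a Go template, since the key name has varied across podman releases (`State.Health` vs the older `State.Healthcheck`):

```python
import json
import subprocess

def container_health(name: str) -> str:
    """Return a container's healthcheck status, e.g. 'healthy'."""
    out = subprocess.check_output(["podman", "inspect", name])
    state = json.loads(out)[0]["State"]
    # Newer podman reports "Health"; older releases used "Healthcheck".
    health = state.get("Health") or state.get("Healthcheck") or {}
    return health.get("Status", "unknown")

print(container_health("multipathd"))
```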
Nov 24 10:08:51 compute-1 nova_compute[230010]: 2025-11-24 10:08:51.374 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:08:51 compute-1 nova_compute[230010]: 2025-11-24 10:08:51.381 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:08:52 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:08:52 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:08:52 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:08:52.278 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:08:52 compute-1 ceph-mon[80009]: pgmap v1266: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:08:53 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:08:53 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:08:53 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:08:53.227 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:08:54 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:08:54 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:08:54 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:08:54.281 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:08:54 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:08:54 compute-1 ceph-mon[80009]: pgmap v1267: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:08:55 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:08:55 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:08:55 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:08:55.232 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:08:56 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:08:56 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:08:56 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:08:56.286 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:08:56 compute-1 nova_compute[230010]: 2025-11-24 10:08:56.377 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:08:56 compute-1 nova_compute[230010]: 2025-11-24 10:08:56.383 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:08:56 compute-1 ceph-mon[80009]: pgmap v1268: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:08:57 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:08:57 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:08:57 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:08:57.235 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:08:57 compute-1 podman[252728]: 2025-11-24 10:08:57.417093865 +0000 UTC m=+0.157533741 container health_status c4a7fece2a8abbd773f184380ebf0443df5298e1c3e7a580495db5c46749acd2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 24 10:08:57 compute-1 ceph-mon[80009]: pgmap v1269: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:08:58 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:08:58 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:08:58 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:08:58.290 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:08:59 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:08:59 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:08:59 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:08:59.239 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:08:59 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:09:00 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:09:00 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:09:00 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:09:00.293 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:09:00 compute-1 ceph-mon[80009]: pgmap v1270: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:09:00 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 10:09:00 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 10:09:01 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:09:01 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:09:01 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:09:01.242 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:09:01 compute-1 nova_compute[230010]: 2025-11-24 10:09:01.381 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:09:01 compute-1 nova_compute[230010]: 2025-11-24 10:09:01.384 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:09:01 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 10:09:02 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:09:02 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:09:02 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:09:02.296 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:09:02 compute-1 ceph-mon[80009]: pgmap v1271: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:09:02 compute-1 ceph-mon[80009]: from='client.? 192.168.122.10:0/4252246755' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 24 10:09:02 compute-1 ceph-mon[80009]: from='client.? 192.168.122.10:0/4252246755' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 24 10:09:03 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:09:03 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:09:03 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:09:03.245 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:09:04 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:09:04 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:09:04 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:09:04.299 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:09:04 compute-1 ceph-mon[80009]: pgmap v1272: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:09:04 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:09:05 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:09:05 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:09:05 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:09:05.248 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:09:06 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:09:06 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:09:06 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:09:06.304 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:09:06 compute-1 podman[252759]: 2025-11-24 10:09:06.357706259 +0000 UTC m=+0.095323055 container health_status 6794d9d2f16f865643977633ea2bfec0506af6c4f15262c278b71a4c0754ea0b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 10:09:06 compute-1 nova_compute[230010]: 2025-11-24 10:09:06.385 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 24 10:09:06 compute-1 nova_compute[230010]: 2025-11-24 10:09:06.388 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:09:06 compute-1 ceph-mon[80009]: pgmap v1273: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:09:07 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:09:07 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:09:07 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:09:07.251 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:09:08 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:09:08 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:09:08 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:09:08.308 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:09:08 compute-1 ceph-mon[80009]: pgmap v1274: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:09:09 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:09:09 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:09:09 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:09:09.255 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:09:09 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:09:10 compute-1 sudo[252780]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 10:09:10 compute-1 sudo[252780]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 10:09:10 compute-1 sudo[252780]: pam_unix(sudo:session): session closed for user root
Nov 24 10:09:10 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:09:10 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:09:10 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:09:10.312 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:09:10 compute-1 ceph-mon[80009]: pgmap v1275: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:09:11 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:09:11 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:09:11 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:09:11.258 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:09:11 compute-1 nova_compute[230010]: 2025-11-24 10:09:11.389 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:09:11 compute-1 nova_compute[230010]: 2025-11-24 10:09:11.391 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 24 10:09:12 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:09:12 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:09:12 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:09:12.315 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:09:12 compute-1 ceph-mon[80009]: pgmap v1276: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:09:13 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:09:13 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:09:13 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:09:13.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:09:14 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:09:14 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:09:14 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:09:14.318 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:09:14 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:09:14 compute-1 ceph-mon[80009]: pgmap v1277: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:09:15 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:09:15 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 24 10:09:15 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:09:15.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 24 10:09:15 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 10:09:15 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 10:09:15 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 10:09:16 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:09:16 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:09:16 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:09:16.322 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:09:16 compute-1 nova_compute[230010]: 2025-11-24 10:09:16.391 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4997-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 24 10:09:16 compute-1 nova_compute[230010]: 2025-11-24 10:09:16.392 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 24 10:09:16 compute-1 ceph-mon[80009]: pgmap v1278: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:09:17 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:09:17 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:09:17 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:09:17.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:09:18 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:09:18 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:09:18 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:09:18.326 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:09:18 compute-1 ceph-mon[80009]: pgmap v1279: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:09:19 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:09:19 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:09:19 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:09:19.272 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:09:19 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:09:19 compute-1 ceph-mon[80009]: pgmap v1280: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:09:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 10:09:20.075 142336 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 10:09:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 10:09:20.076 142336 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 10:09:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 10:09:20.076 142336 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
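The acquire/release pair around `_check_child_processes` (waited 0.001s, held 0.000s) is oslo.concurrency's standard lock instrumentation: any callable wrapped with `lockutils.synchronized` logs acquisition, wait time, and hold time at DEBUG. A minimal sketch of the pattern that produces these three lines, assuming oslo.concurrency is installed:

```python
import logging
from oslo_concurrency import lockutils

logging.basicConfig(level=logging.DEBUG)

@lockutils.synchronized('_check_child_processes')
def check_child_processes():
    # Runs with the named in-process lock held; the decorator's wrapper
    # emits the "Acquiring lock" / "acquired" / "released" DEBUG lines.
    pass

check_child_processes()
```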
Nov 24 10:09:20 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:09:20 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:09:20 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:09:20.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:09:21 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:09:21 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:09:21 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:09:21.277 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:09:21 compute-1 nova_compute[230010]: 2025-11-24 10:09:21.392 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 24 10:09:21 compute-1 nova_compute[230010]: 2025-11-24 10:09:21.394 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:09:21 compute-1 nova_compute[230010]: 2025-11-24 10:09:21.394 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Nov 24 10:09:21 compute-1 nova_compute[230010]: 2025-11-24 10:09:21.394 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Nov 24 10:09:21 compute-1 nova_compute[230010]: 2025-11-24 10:09:21.394 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Nov 24 10:09:21 compute-1 nova_compute[230010]: 2025-11-24 10:09:21.396 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
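The burst above is the OVSDB client's liveness check on tcp:127.0.0.1:6640: after roughly 5 s of idle it sends an inactivity probe and drops to IDLE, then returns to ACTIVE as soon as the reply wakes fd 25. A toy sketch of that timer logic, independent of the ovs library (intervals and callbacks are illustrative):

```python
import time

class InactivityProbe:
    """Toy version of the probe logic logged by ovs reconnect.py above."""

    def __init__(self, interval: float = 5.0):
        self.interval = interval          # ~5000 ms, as in the log
        self.last_rx = time.monotonic()
        self.state = "ACTIVE"

    def on_receive(self) -> None:
        # Any inbound traffic (the POLLIN wakeups) counts as liveness.
        self.last_rx = time.monotonic()
        self.state = "ACTIVE"

    def tick(self, send_probe, disconnect) -> None:
        idle = time.monotonic() - self.last_rx
        if self.state == "ACTIVE" and idle >= self.interval:
            send_probe()                  # "sending inactivity probe"
            self.state = "IDLE"           # "entering IDLE"
        elif self.state == "IDLE" and idle >= 2 * self.interval:
            disconnect()                  # no reply in time: reconnect
```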
Nov 24 10:09:21 compute-1 nova_compute[230010]: 2025-11-24 10:09:21.764 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 10:09:22 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:09:22 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:09:22 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:09:22.335 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:09:22 compute-1 podman[252811]: 2025-11-24 10:09:22.342417548 +0000 UTC m=+0.085833991 container health_status 16798e42088c21440960ac0ba4f86339f9edab5c078277a1dcf4fb01c220329c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd)
Nov 24 10:09:22 compute-1 ceph-mon[80009]: pgmap v1281: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:09:23 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:09:23 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:09:23 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:09:23.281 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:09:24 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:09:24 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:09:24 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:09:24.339 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:09:24 compute-1 ceph-mon[80009]: pgmap v1282: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:09:24 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:09:25 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:09:25 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:09:25 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:09:25.285 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:09:25 compute-1 nova_compute[230010]: 2025-11-24 10:09:25.765 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 10:09:26 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:09:26 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:09:26 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:09:26.342 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:09:26 compute-1 nova_compute[230010]: 2025-11-24 10:09:26.396 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:09:26 compute-1 ceph-mon[80009]: pgmap v1283: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:09:26 compute-1 nova_compute[230010]: 2025-11-24 10:09:26.765 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 10:09:26 compute-1 nova_compute[230010]: 2025-11-24 10:09:26.766 230014 DEBUG nova.compute.manager [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
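The `Running periodic task ComputeManager._*` lines are oslo.service's periodic-task loop walking the decorated methods on the compute manager; `_reclaim_queued_deletes` bails out immediately because `reclaim_instance_interval` is unset (<= 0). A minimal sketch of how such a task is declared and driven, assuming oslo.service and oslo.config are installed (the 60 s spacing is illustrative):

```python
from oslo_config import cfg
from oslo_service import periodic_task

CONF = cfg.CONF

class Manager(periodic_task.PeriodicTasks):
    @periodic_task.periodic_task(spacing=60)  # spacing is illustrative
    def _reclaim_queued_deletes(self, context):
        # Nova's version returns immediately when
        # CONF.reclaim_instance_interval <= 0, as the log line above shows.
        pass

mgr = Manager(CONF)
mgr.run_periodic_tasks(None)  # emits the "Running periodic task ..." lines
```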
Nov 24 10:09:27 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:09:27 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:09:27 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:09:27.289 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:09:27 compute-1 nova_compute[230010]: 2025-11-24 10:09:27.759 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 10:09:28 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:09:28 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:09:28 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:09:28.346 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:09:28 compute-1 podman[252835]: 2025-11-24 10:09:28.355945789 +0000 UTC m=+0.099035847 container health_status c4a7fece2a8abbd773f184380ebf0443df5298e1c3e7a580495db5c46749acd2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 24 10:09:28 compute-1 ceph-mon[80009]: pgmap v1284: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:09:28 compute-1 ceph-mon[80009]: from='client.? 192.168.122.100:0/3011647413' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 10:09:29 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:09:29 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 10:09:29 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:09:29.292 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 10:09:29 compute-1 ceph-mon[80009]: from='client.? 192.168.122.100:0/801198514' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 10:09:29 compute-1 ceph-mon[80009]: from='client.? 192.168.122.102:0/2932132905' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 10:09:29 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:09:30 compute-1 sudo[252865]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 10:09:30 compute-1 sudo[252865]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 10:09:30 compute-1 sudo[252865]: pam_unix(sudo:session): session closed for user root
Nov 24 10:09:30 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:09:30 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:09:30 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:09:30.350 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:09:30 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 10:09:30 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 10:09:30 compute-1 ceph-mon[80009]: pgmap v1285: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:09:30 compute-1 ceph-mon[80009]: from='client.? 192.168.122.102:0/3812342946' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 10:09:30 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 10:09:30 compute-1 nova_compute[230010]: 2025-11-24 10:09:30.759 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 10:09:30 compute-1 nova_compute[230010]: 2025-11-24 10:09:30.786 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 10:09:31 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:09:31 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:09:31 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:09:31.296 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:09:31 compute-1 nova_compute[230010]: 2025-11-24 10:09:31.399 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 24 10:09:31 compute-1 nova_compute[230010]: 2025-11-24 10:09:31.764 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 10:09:31 compute-1 nova_compute[230010]: 2025-11-24 10:09:31.788 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 10:09:31 compute-1 nova_compute[230010]: 2025-11-24 10:09:31.789 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 10:09:31 compute-1 nova_compute[230010]: 2025-11-24 10:09:31.789 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 10:09:31 compute-1 nova_compute[230010]: 2025-11-24 10:09:31.789 230014 DEBUG nova.compute.resource_tracker [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 24 10:09:31 compute-1 nova_compute[230010]: 2025-11-24 10:09:31.790 230014 DEBUG oslo_concurrency.processutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 10:09:32 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 24 10:09:32 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/416098763' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 10:09:32 compute-1 nova_compute[230010]: 2025-11-24 10:09:32.237 230014 DEBUG oslo_concurrency.processutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
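The resource-tracker audit shells out to the exact command in the lines above, `ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf`, to size the RBD-backed instance storage; each run also shows up on the mon as a `client.openstack` df dispatch. Reduced to the standard library, the call and the field of interest look like this (field names per the usual `ceph df` JSON layout):

```python
import json
import subprocess

# The exact command from the lines above; --id selects client.openstack.
out = subprocess.check_output(
    ["ceph", "df", "--format=json",
     "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"]
)
stats = json.loads(out)["stats"]
print(f"cluster free: {stats['total_avail_bytes'] / 1024**3:.1f} GiB")
```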
Nov 24 10:09:32 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:09:32 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:09:32 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:09:32.353 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:09:32 compute-1 nova_compute[230010]: 2025-11-24 10:09:32.440 230014 WARNING nova.virt.libvirt.driver [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 24 10:09:32 compute-1 nova_compute[230010]: 2025-11-24 10:09:32.441 230014 DEBUG nova.compute.resource_tracker [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4881MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 24 10:09:32 compute-1 nova_compute[230010]: 2025-11-24 10:09:32.442 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 10:09:32 compute-1 nova_compute[230010]: 2025-11-24 10:09:32.442 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 10:09:32 compute-1 sudo[252913]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 10:09:32 compute-1 sudo[252913]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 10:09:32 compute-1 sudo[252913]: pam_unix(sudo:session): session closed for user root
Nov 24 10:09:32 compute-1 ceph-mon[80009]: pgmap v1286: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:09:32 compute-1 ceph-mon[80009]: from='client.? 192.168.122.101:0/416098763' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 10:09:32 compute-1 nova_compute[230010]: 2025-11-24 10:09:32.517 230014 DEBUG nova.compute.resource_tracker [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 24 10:09:32 compute-1 nova_compute[230010]: 2025-11-24 10:09:32.517 230014 DEBUG nova.compute.resource_tracker [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 24 10:09:32 compute-1 nova_compute[230010]: 2025-11-24 10:09:32.545 230014 DEBUG nova.scheduler.client.report [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Refreshing inventories for resource provider 1b7b0f22-dba8-42a8-9de3-763c9152946e _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Nov 24 10:09:32 compute-1 sudo[252938]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Nov 24 10:09:32 compute-1 sudo[252938]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 10:09:32 compute-1 nova_compute[230010]: 2025-11-24 10:09:32.746 230014 DEBUG nova.scheduler.client.report [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Updating ProviderTree inventory for provider 1b7b0f22-dba8-42a8-9de3-763c9152946e from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Nov 24 10:09:32 compute-1 nova_compute[230010]: 2025-11-24 10:09:32.747 230014 DEBUG nova.compute.provider_tree [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Updating inventory in ProviderTree for provider 1b7b0f22-dba8-42a8-9de3-763c9152946e with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
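The inventory pushed to placement encodes the oversubscription policy: placement treats the schedulable capacity of each resource class as `(total - reserved) * allocation_ratio`. Worked through with the figures above:

```python
# Figures copied from the ProviderTree inventory above.
inventory = {
    'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
    'MEMORY_MB': {'total': 7680, 'reserved': 512, 'allocation_ratio': 1.0},
    'DISK_GB':   {'total': 59,   'reserved': 1,   'allocation_ratio': 0.9},
}

for rc, inv in inventory.items():
    capacity = (inv['total'] - inv['reserved']) * inv['allocation_ratio']
    print(f"{rc}: {capacity:g} schedulable")
# VCPU: 32  MEMORY_MB: 7168  DISK_GB: 52.2
```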
Nov 24 10:09:32 compute-1 nova_compute[230010]: 2025-11-24 10:09:32.765 230014 DEBUG nova.scheduler.client.report [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Refreshing aggregate associations for resource provider 1b7b0f22-dba8-42a8-9de3-763c9152946e, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Nov 24 10:09:32 compute-1 nova_compute[230010]: 2025-11-24 10:09:32.805 230014 DEBUG nova.scheduler.client.report [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Refreshing trait associations for resource provider 1b7b0f22-dba8-42a8-9de3-763c9152946e, traits: COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_F16C,HW_CPU_X86_AVX,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_SHA,COMPUTE_STORAGE_BUS_SATA,COMPUTE_RESCUE_BFV,HW_CPU_X86_ABM,HW_CPU_X86_BMI,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_AVX2,COMPUTE_STORAGE_BUS_USB,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_SSE41,COMPUTE_NODE,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_SSE4A,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_VOLUME_EXTEND,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_MMX,HW_CPU_X86_SSE,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_CLMUL,HW_CPU_X86_SSE2,HW_CPU_X86_AMD_SVM,HW_CPU_X86_SSE42,COMPUTE_ACCELERATORS,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_STORAGE_BUS_IDE,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_SVM,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_FMA3,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_BMI2,HW_CPU_X86_AESNI,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_TRUSTED_CERTS,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_STORAGE_BUS_FDC _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Nov 24 10:09:32 compute-1 nova_compute[230010]: 2025-11-24 10:09:32.864 230014 DEBUG oslo_concurrency.processutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 10:09:33 compute-1 sudo[252938]: pam_unix(sudo:session): session closed for user root
Nov 24 10:09:33 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 24 10:09:33 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 10:09:33 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Nov 24 10:09:33 compute-1 ceph-mon[80009]: log_channel(audit) log [INF] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 10:09:33 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Nov 24 10:09:33 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Nov 24 10:09:33 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Nov 24 10:09:33 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 10:09:33 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Nov 24 10:09:33 compute-1 ceph-mon[80009]: log_channel(audit) log [INF] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 10:09:33 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 24 10:09:33 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 10:09:33 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 24 10:09:33 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2882460740' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 10:09:33 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:09:33 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:09:33 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:09:33.298 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:09:33 compute-1 nova_compute[230010]: 2025-11-24 10:09:33.318 230014 DEBUG oslo_concurrency.processutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.454s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 10:09:33 compute-1 nova_compute[230010]: 2025-11-24 10:09:33.324 230014 DEBUG nova.compute.provider_tree [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Inventory has not changed in ProviderTree for provider: 1b7b0f22-dba8-42a8-9de3-763c9152946e update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 24 10:09:33 compute-1 nova_compute[230010]: 2025-11-24 10:09:33.356 230014 DEBUG nova.scheduler.client.report [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Inventory has not changed for provider 1b7b0f22-dba8-42a8-9de3-763c9152946e based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 24 10:09:33 compute-1 nova_compute[230010]: 2025-11-24 10:09:33.359 230014 DEBUG nova.compute.resource_tracker [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 24 10:09:33 compute-1 nova_compute[230010]: 2025-11-24 10:09:33.360 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.918s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 10:09:33 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 10:09:33 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 10:09:33 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 10:09:33 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 10:09:33 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 10:09:33 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 10:09:33 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 10:09:33 compute-1 ceph-mon[80009]: from='client.? 192.168.122.101:0/2882460740' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 10:09:34 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:09:34 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:09:34 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:09:34.357 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:09:34 compute-1 nova_compute[230010]: 2025-11-24 10:09:34.361 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 10:09:34 compute-1 nova_compute[230010]: 2025-11-24 10:09:34.361 230014 DEBUG nova.compute.manager [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 24 10:09:34 compute-1 nova_compute[230010]: 2025-11-24 10:09:34.361 230014 DEBUG nova.compute.manager [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 24 10:09:34 compute-1 nova_compute[230010]: 2025-11-24 10:09:34.383 230014 DEBUG nova.compute.manager [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 24 10:09:34 compute-1 nova_compute[230010]: 2025-11-24 10:09:34.383 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 10:09:34 compute-1 ceph-mon[80009]: pgmap v1287: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:09:34 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:09:34 compute-1 nova_compute[230010]: 2025-11-24 10:09:34.768 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 10:09:35 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:09:35 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 10:09:35 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:09:35.301 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 10:09:36 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:09:36 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:09:36 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:09:36.361 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:09:36 compute-1 nova_compute[230010]: 2025-11-24 10:09:36.401 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 24 10:09:36 compute-1 nova_compute[230010]: 2025-11-24 10:09:36.403 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:09:36 compute-1 nova_compute[230010]: 2025-11-24 10:09:36.403 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Nov 24 10:09:36 compute-1 nova_compute[230010]: 2025-11-24 10:09:36.403 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Nov 24 10:09:36 compute-1 nova_compute[230010]: 2025-11-24 10:09:36.404 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Nov 24 10:09:36 compute-1 nova_compute[230010]: 2025-11-24 10:09:36.406 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:09:36 compute-1 ceph-mon[80009]: pgmap v1288: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Nov 24 10:09:37 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:09:37 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:09:37 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:09:37.304 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:09:37 compute-1 podman[253018]: 2025-11-24 10:09:37.32294631 +0000 UTC m=+0.058194729 container health_status 6794d9d2f16f865643977633ea2bfec0506af6c4f15262c278b71a4c0754ea0b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.build-date=20251118, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 24 10:09:37 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 24 10:09:37 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 24 10:09:38 compute-1 sudo[253038]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 24 10:09:38 compute-1 sudo[253038]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 10:09:38 compute-1 sudo[253038]: pam_unix(sudo:session): session closed for user root
Nov 24 10:09:38 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:09:38 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:09:38 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:09:38.363 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:09:38 compute-1 ceph-mon[80009]: pgmap v1289: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:09:38 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 10:09:38 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 10:09:39 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:09:39 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:09:39 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:09:39.307 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:09:39 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:09:40 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:09:40 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:09:40 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:09:40.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:09:40 compute-1 ceph-mon[80009]: pgmap v1290: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Nov 24 10:09:41 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:09:41 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:09:41 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:09:41.310 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:09:41 compute-1 nova_compute[230010]: 2025-11-24 10:09:41.402 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:09:41 compute-1 nova_compute[230010]: 2025-11-24 10:09:41.406 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:09:42 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:09:42 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:09:42 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:09:42.370 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:09:42 compute-1 ceph-mon[80009]: pgmap v1291: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Nov 24 10:09:43 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:09:43 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:09:43 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:09:43.313 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:09:43 compute-1 ceph-mon[80009]: pgmap v1292: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:09:44 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:09:44 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:09:44 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:09:44.373 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:09:44 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:09:45 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:09:45 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:09:45 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:09:45.315 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:09:45 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 10:09:45 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 10:09:46 compute-1 ceph-mon[80009]: pgmap v1293: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:09:46 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 10:09:46 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:09:46 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:09:46 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:09:46.377 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:09:46 compute-1 nova_compute[230010]: 2025-11-24 10:09:46.408 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 24 10:09:46 compute-1 nova_compute[230010]: 2025-11-24 10:09:46.409 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 24 10:09:46 compute-1 nova_compute[230010]: 2025-11-24 10:09:46.409 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Nov 24 10:09:46 compute-1 nova_compute[230010]: 2025-11-24 10:09:46.410 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Nov 24 10:09:46 compute-1 nova_compute[230010]: 2025-11-24 10:09:46.448 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:09:46 compute-1 nova_compute[230010]: 2025-11-24 10:09:46.449 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Nov 24 10:09:47 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:09:47 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:09:47 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:09:47.319 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:09:48 compute-1 sshd-session[253067]: Invalid user wireguard from 164.92.213.168 port 53040
Nov 24 10:09:48 compute-1 sshd-session[253067]: Connection closed by invalid user wireguard 164.92.213.168 port 53040 [preauth]
Nov 24 10:09:48 compute-1 ceph-mon[80009]: pgmap v1294: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:09:48 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:09:48 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:09:48 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:09:48.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:09:49 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:09:49 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:09:49 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:09:49.323 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:09:49 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:09:50 compute-1 ceph-mon[80009]: pgmap v1295: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:09:50 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:09:50 compute-1 sudo[253071]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 10:09:50 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:09:50 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:09:50.384 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:09:50 compute-1 sudo[253071]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 10:09:50 compute-1 sudo[253071]: pam_unix(sudo:session): session closed for user root
Nov 24 10:09:51 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:09:51 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:09:51 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:09:51.326 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:09:51 compute-1 nova_compute[230010]: 2025-11-24 10:09:51.451 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:09:52 compute-1 ceph-mon[80009]: pgmap v1296: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:09:52 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:09:52 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:09:52 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:09:52.388 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:09:53 compute-1 podman[253097]: 2025-11-24 10:09:53.321150846 +0000 UTC m=+0.059128541 container health_status 16798e42088c21440960ac0ba4f86339f9edab5c078277a1dcf4fb01c220329c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.build-date=20251118, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Nov 24 10:09:53 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:09:53 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:09:53 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:09:53.328 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:09:54 compute-1 ceph-mon[80009]: pgmap v1297: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:09:54 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:09:54 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:09:54 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:09:54.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:09:54 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:09:55 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:09:55 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:09:55 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:09:55.331 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:09:56 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:09:56 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:09:56 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:09:56.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:09:56 compute-1 ceph-mon[80009]: pgmap v1298: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:09:56 compute-1 nova_compute[230010]: 2025-11-24 10:09:56.451 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:09:57 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:09:57 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:09:57 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:09:57.336 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:09:58 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:09:58 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:09:58 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:09:58.398 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:09:58 compute-1 ceph-mon[80009]: pgmap v1299: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:09:59 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:09:59 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:09:59 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:09:59.339 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:09:59 compute-1 podman[253121]: 2025-11-24 10:09:59.367268565 +0000 UTC m=+0.109953438 container health_status c4a7fece2a8abbd773f184380ebf0443df5298e1c3e7a580495db5c46749acd2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller)
Nov 24 10:09:59 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:10:00 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:10:00 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:10:00 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:10:00.402 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:10:00 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 10:10:00 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 10:10:00 compute-1 ceph-mon[80009]: pgmap v1300: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:10:00 compute-1 ceph-mon[80009]: overall HEALTH_WARN 1 failed cephadm daemon(s)
Nov 24 10:10:00 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 10:10:01 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:10:01 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:10:01 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:10:01.342 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:10:01 compute-1 nova_compute[230010]: 2025-11-24 10:10:01.453 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 24 10:10:01 compute-1 nova_compute[230010]: 2025-11-24 10:10:01.455 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:10:01 compute-1 nova_compute[230010]: 2025-11-24 10:10:01.455 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Nov 24 10:10:01 compute-1 nova_compute[230010]: 2025-11-24 10:10:01.455 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Nov 24 10:10:01 compute-1 nova_compute[230010]: 2025-11-24 10:10:01.455 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Nov 24 10:10:01 compute-1 nova_compute[230010]: 2025-11-24 10:10:01.456 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:10:02 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:10:02 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:10:02 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:10:02.406 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:10:02 compute-1 ceph-mon[80009]: pgmap v1301: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:10:02 compute-1 ceph-mon[80009]: from='client.? 192.168.122.10:0/983930137' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 24 10:10:02 compute-1 ceph-mon[80009]: from='client.? 192.168.122.10:0/983930137' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 24 10:10:03 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:10:03 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:10:03 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:10:03.346 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:10:04 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:10:04 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:10:04 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:10:04.410 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:10:04 compute-1 ceph-mon[80009]: pgmap v1302: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:10:04 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:10:05 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:10:05 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:10:05 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:10:05.349 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:10:06 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:10:06 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:10:06 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:10:06.413 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:10:06 compute-1 nova_compute[230010]: 2025-11-24 10:10:06.456 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:10:06 compute-1 ceph-mon[80009]: pgmap v1303: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:10:07 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:10:07 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:10:07 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:10:07.352 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:10:08 compute-1 podman[253154]: 2025-11-24 10:10:08.317527999 +0000 UTC m=+0.057757277 container health_status 6794d9d2f16f865643977633ea2bfec0506af6c4f15262c278b71a4c0754ea0b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2)
Nov 24 10:10:08 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:10:08 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:10:08 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:10:08.417 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:10:08 compute-1 ceph-mon[80009]: pgmap v1304: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:10:09 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:10:09 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:10:09 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:10:09.356 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:10:09 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:10:10 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:10:10 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:10:10 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:10:10.420 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:10:10 compute-1 sudo[253174]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 10:10:10 compute-1 sudo[253174]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 10:10:10 compute-1 sudo[253174]: pam_unix(sudo:session): session closed for user root
Nov 24 10:10:10 compute-1 ceph-mon[80009]: pgmap v1305: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:10:11 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:10:11 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:10:11 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:10:11.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:10:11 compute-1 nova_compute[230010]: 2025-11-24 10:10:11.457 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 24 10:10:11 compute-1 nova_compute[230010]: 2025-11-24 10:10:11.459 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:10:11 compute-1 nova_compute[230010]: 2025-11-24 10:10:11.459 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Nov 24 10:10:11 compute-1 nova_compute[230010]: 2025-11-24 10:10:11.459 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Nov 24 10:10:11 compute-1 nova_compute[230010]: 2025-11-24 10:10:11.460 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Nov 24 10:10:11 compute-1 nova_compute[230010]: 2025-11-24 10:10:11.461 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:10:12 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:10:12 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:10:12 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:10:12.423 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:10:12 compute-1 ceph-mon[80009]: pgmap v1306: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:10:13 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:10:13 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:10:13 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:10:13.363 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:10:14 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:10:14 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:10:14 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:10:14.427 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:10:14 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:10:14 compute-1 ceph-mon[80009]: pgmap v1307: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:10:15 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:10:15 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:10:15 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:10:15.367 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:10:15 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 10:10:15 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 10:10:15 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 10:10:16 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:10:16 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 10:10:16 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:10:16.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 10:10:16 compute-1 nova_compute[230010]: 2025-11-24 10:10:16.460 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:10:16 compute-1 ceph-mon[80009]: pgmap v1308: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:10:17 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:10:17 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:10:17 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:10:17.371 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:10:18 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:10:18 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:10:18 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:10:18.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:10:18 compute-1 ceph-mon[80009]: pgmap v1309: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:10:19 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:10:19 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:10:19 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:10:19.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:10:19 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:10:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 10:10:20.077 142336 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 10:10:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 10:10:20.078 142336 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 10:10:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 10:10:20.078 142336 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 10:10:20 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:10:20 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:10:20 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:10:20.438 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:10:20 compute-1 ceph-mon[80009]: pgmap v1310: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:10:21 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:10:21 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:10:21 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:10:21.377 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:10:21 compute-1 nova_compute[230010]: 2025-11-24 10:10:21.461 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:10:21 compute-1 nova_compute[230010]: 2025-11-24 10:10:21.764 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 10:10:22 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:10:22 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:10:22 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:10:22.440 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:10:22 compute-1 ceph-mon[80009]: pgmap v1311: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:10:23 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:10:23 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:10:23 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:10:23.380 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:10:23 compute-1 ceph-mon[80009]: pgmap v1312: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:10:24 compute-1 nova_compute[230010]: 2025-11-24 10:10:24.103 230014 DEBUG oslo_concurrency.processutils [None req-df03cd5c-5660-4536-b19c-eb403e13ec09 1498c4791c234bc884ea0fabb778d239 cf636babb68a4ebe9bf137d3fe0e4c0c - - default default] Running cmd (subprocess): env LANG=C uptime execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 10:10:24 compute-1 nova_compute[230010]: 2025-11-24 10:10:24.132 230014 DEBUG oslo_concurrency.processutils [None req-df03cd5c-5660-4536-b19c-eb403e13ec09 1498c4791c234bc884ea0fabb778d239 cf636babb68a4ebe9bf137d3fe0e4c0c - - default default] CMD "env LANG=C uptime" returned: 0 in 0.029s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 10:10:24 compute-1 podman[253207]: 2025-11-24 10:10:24.396178748 +0000 UTC m=+0.130573282 container health_status 16798e42088c21440960ac0ba4f86339f9edab5c078277a1dcf4fb01c220329c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd)
Nov 24 10:10:24 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:10:24 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:10:24 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:10:24.445 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:10:24 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:10:25 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:10:25 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:10:25 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:10:25.383 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:10:25 compute-1 nova_compute[230010]: 2025-11-24 10:10:25.765 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 10:10:26 compute-1 ceph-mon[80009]: pgmap v1313: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:10:26 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:10:26 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:10:26 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:10:26.449 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:10:26 compute-1 nova_compute[230010]: 2025-11-24 10:10:26.462 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:10:27 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:10:27 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:10:27 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:10:27.387 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:10:27 compute-1 nova_compute[230010]: 2025-11-24 10:10:27.765 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 10:10:27 compute-1 nova_compute[230010]: 2025-11-24 10:10:27.766 230014 DEBUG nova.compute.manager [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 24 10:10:28 compute-1 ceph-mon[80009]: pgmap v1314: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:10:28 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:10:28 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:10:28 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:10:28.452 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:10:29 compute-1 ceph-mon[80009]: from='client.? 192.168.122.100:0/3190216163' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 10:10:29 compute-1 ceph-mon[80009]: from='client.? 192.168.122.100:0/1476322504' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 10:10:29 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:10:29 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 10:10:29 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:10:29.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 10:10:29 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:10:29 compute-1 ovn_metadata_agent[142331]: 2025-11-24 10:10:29.699 142336 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=14, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '86:13:51', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '4e:f0:a8:6f:5e:1b'}, ipsec=False) old=SB_Global(nb_cfg=13) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 24 10:10:29 compute-1 ovn_metadata_agent[142331]: 2025-11-24 10:10:29.700 142336 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 24 10:10:29 compute-1 nova_compute[230010]: 2025-11-24 10:10:29.701 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:10:29 compute-1 nova_compute[230010]: 2025-11-24 10:10:29.760 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 10:10:30 compute-1 ceph-mon[80009]: pgmap v1315: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:10:30 compute-1 podman[253231]: 2025-11-24 10:10:30.420503831 +0000 UTC m=+0.149001255 container health_status c4a7fece2a8abbd773f184380ebf0443df5298e1c3e7a580495db5c46749acd2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ovn_controller, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 10:10:30 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 10:10:30 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 10:10:30 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:10:30 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.002000047s ======
Nov 24 10:10:30 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:10:30.456 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000047s
Nov 24 10:10:30 compute-1 sudo[253257]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 10:10:30 compute-1 sudo[253257]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 10:10:30 compute-1 sudo[253257]: pam_unix(sudo:session): session closed for user root
Nov 24 10:10:30 compute-1 nova_compute[230010]: 2025-11-24 10:10:30.764 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 10:10:31 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 10:10:31 compute-1 ceph-mon[80009]: from='client.? 192.168.122.102:0/1545364565' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 10:10:31 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:10:31 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:10:31 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:10:31.393 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:10:31 compute-1 nova_compute[230010]: 2025-11-24 10:10:31.463 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:10:31 compute-1 nova_compute[230010]: 2025-11-24 10:10:31.466 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:10:32 compute-1 ceph-mon[80009]: pgmap v1316: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:10:32 compute-1 ceph-mon[80009]: from='client.? 192.168.122.102:0/2150094402' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 10:10:32 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:10:32 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:10:32 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:10:32.462 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:10:32 compute-1 ovn_metadata_agent[142331]: 2025-11-24 10:10:32.702 142336 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=803b139a-7fca-4549-8597-645cf677225d, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '14'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 24 10:10:32 compute-1 nova_compute[230010]: 2025-11-24 10:10:32.765 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 10:10:33 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:10:33 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:10:33 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:10:33.395 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:10:33 compute-1 nova_compute[230010]: 2025-11-24 10:10:33.765 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 10:10:33 compute-1 nova_compute[230010]: 2025-11-24 10:10:33.792 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 10:10:33 compute-1 nova_compute[230010]: 2025-11-24 10:10:33.792 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 10:10:33 compute-1 nova_compute[230010]: 2025-11-24 10:10:33.792 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 10:10:33 compute-1 nova_compute[230010]: 2025-11-24 10:10:33.793 230014 DEBUG nova.compute.resource_tracker [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 24 10:10:33 compute-1 nova_compute[230010]: 2025-11-24 10:10:33.793 230014 DEBUG oslo_concurrency.processutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 10:10:34 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 24 10:10:34 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4143398664' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 10:10:34 compute-1 nova_compute[230010]: 2025-11-24 10:10:34.240 230014 DEBUG oslo_concurrency.processutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 10:10:34 compute-1 ceph-mon[80009]: pgmap v1317: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:10:34 compute-1 ceph-mon[80009]: from='client.? 192.168.122.101:0/4143398664' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 10:10:34 compute-1 nova_compute[230010]: 2025-11-24 10:10:34.405 230014 WARNING nova.virt.libvirt.driver [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 24 10:10:34 compute-1 nova_compute[230010]: 2025-11-24 10:10:34.406 230014 DEBUG nova.compute.resource_tracker [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4877MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 24 10:10:34 compute-1 nova_compute[230010]: 2025-11-24 10:10:34.406 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 10:10:34 compute-1 nova_compute[230010]: 2025-11-24 10:10:34.406 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 10:10:34 compute-1 nova_compute[230010]: 2025-11-24 10:10:34.457 230014 DEBUG nova.compute.resource_tracker [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 24 10:10:34 compute-1 nova_compute[230010]: 2025-11-24 10:10:34.458 230014 DEBUG nova.compute.resource_tracker [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 24 10:10:34 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:10:34 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:10:34 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:10:34.466 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:10:34 compute-1 nova_compute[230010]: 2025-11-24 10:10:34.476 230014 DEBUG oslo_concurrency.processutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 10:10:34 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:10:34 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 24 10:10:34 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2672638891' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 10:10:34 compute-1 nova_compute[230010]: 2025-11-24 10:10:34.943 230014 DEBUG oslo_concurrency.processutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.467s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 10:10:34 compute-1 nova_compute[230010]: 2025-11-24 10:10:34.948 230014 DEBUG nova.compute.provider_tree [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Inventory has not changed in ProviderTree for provider: 1b7b0f22-dba8-42a8-9de3-763c9152946e update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 24 10:10:34 compute-1 nova_compute[230010]: 2025-11-24 10:10:34.962 230014 DEBUG nova.scheduler.client.report [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Inventory has not changed for provider 1b7b0f22-dba8-42a8-9de3-763c9152946e based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 24 10:10:34 compute-1 nova_compute[230010]: 2025-11-24 10:10:34.963 230014 DEBUG nova.compute.resource_tracker [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 24 10:10:34 compute-1 nova_compute[230010]: 2025-11-24 10:10:34.964 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.557s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 10:10:35 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:10:35 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:10:35 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:10:35.399 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:10:35 compute-1 ceph-mon[80009]: from='client.? 192.168.122.101:0/2672638891' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 10:10:35 compute-1 nova_compute[230010]: 2025-11-24 10:10:35.964 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 10:10:35 compute-1 nova_compute[230010]: 2025-11-24 10:10:35.965 230014 DEBUG nova.compute.manager [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 24 10:10:35 compute-1 nova_compute[230010]: 2025-11-24 10:10:35.965 230014 DEBUG nova.compute.manager [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 24 10:10:35 compute-1 nova_compute[230010]: 2025-11-24 10:10:35.980 230014 DEBUG nova.compute.manager [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 24 10:10:35 compute-1 nova_compute[230010]: 2025-11-24 10:10:35.982 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 10:10:36 compute-1 ceph-mon[80009]: pgmap v1318: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:10:36 compute-1 ceph-mon[80009]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #73. Immutable memtables: 0.
Nov 24 10:10:36 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-10:10:36.429704) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 24 10:10:36 compute-1 ceph-mon[80009]: rocksdb: [db/flush_job.cc:856] [default] [JOB 43] Flushing memtable with next log file: 73
Nov 24 10:10:36 compute-1 ceph-mon[80009]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763979036429968, "job": 43, "event": "flush_started", "num_memtables": 1, "num_entries": 1515, "num_deletes": 251, "total_data_size": 3700428, "memory_usage": 3776800, "flush_reason": "Manual Compaction"}
Nov 24 10:10:36 compute-1 ceph-mon[80009]: rocksdb: [db/flush_job.cc:885] [default] [JOB 43] Level-0 flush table #74: started
Nov 24 10:10:36 compute-1 ceph-mon[80009]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763979036446634, "cf_name": "default", "job": 43, "event": "table_file_creation", "file_number": 74, "file_size": 2415944, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 37938, "largest_seqno": 39448, "table_properties": {"data_size": 2409563, "index_size": 3580, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1733, "raw_key_size": 13829, "raw_average_key_size": 20, "raw_value_size": 2396602, "raw_average_value_size": 3488, "num_data_blocks": 156, "num_entries": 687, "num_filter_entries": 687, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763978907, "oldest_key_time": 1763978907, "file_creation_time": 1763979036, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "299c38d0-06ca-4074-b462-97cee3c14bc3", "db_session_id": "IKBI0BILOO7CZC90TSBP", "orig_file_number": 74, "seqno_to_time_mapping": "N/A"}}
Nov 24 10:10:36 compute-1 ceph-mon[80009]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 43] Flush lasted 16996 microseconds, and 11463 cpu microseconds.
Nov 24 10:10:36 compute-1 ceph-mon[80009]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 24 10:10:36 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-10:10:36.446699) [db/flush_job.cc:967] [default] [JOB 43] Level-0 flush table #74: 2415944 bytes OK
Nov 24 10:10:36 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-10:10:36.446729) [db/memtable_list.cc:519] [default] Level-0 commit table #74 started
Nov 24 10:10:36 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-10:10:36.448972) [db/memtable_list.cc:722] [default] Level-0 commit table #74: memtable #1 done
Nov 24 10:10:36 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-10:10:36.449040) EVENT_LOG_v1 {"time_micros": 1763979036449026, "job": 43, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 24 10:10:36 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-10:10:36.449071) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 24 10:10:36 compute-1 ceph-mon[80009]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 43] Try to delete WAL files size 3693352, prev total WAL file size 3693352, number of live WAL files 2.
Nov 24 10:10:36 compute-1 ceph-mon[80009]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000070.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 10:10:36 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-10:10:36.450609) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730033303132' seq:72057594037927935, type:22 .. '7061786F730033323634' seq:0, type:0; will stop at (end)
Nov 24 10:10:36 compute-1 ceph-mon[80009]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 44] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 24 10:10:36 compute-1 ceph-mon[80009]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 43 Base level 0, inputs: [74(2359KB)], [72(11MB)]
Nov 24 10:10:36 compute-1 ceph-mon[80009]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763979036450667, "job": 44, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [74], "files_L6": [72], "score": -1, "input_data_size": 14671958, "oldest_snapshot_seqno": -1}
Nov 24 10:10:36 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:10:36 compute-1 nova_compute[230010]: 2025-11-24 10:10:36.489 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:10:36 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:10:36 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:10:36.488 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:10:36 compute-1 ceph-mon[80009]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 44] Generated table #75: 6738 keys, 12552617 bytes, temperature: kUnknown
Nov 24 10:10:36 compute-1 ceph-mon[80009]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763979036534768, "cf_name": "default", "job": 44, "event": "table_file_creation", "file_number": 75, "file_size": 12552617, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12510495, "index_size": 24154, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16901, "raw_key_size": 177095, "raw_average_key_size": 26, "raw_value_size": 12392045, "raw_average_value_size": 1839, "num_data_blocks": 946, "num_entries": 6738, "num_filter_entries": 6738, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763976422, "oldest_key_time": 0, "file_creation_time": 1763979036, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "299c38d0-06ca-4074-b462-97cee3c14bc3", "db_session_id": "IKBI0BILOO7CZC90TSBP", "orig_file_number": 75, "seqno_to_time_mapping": "N/A"}}
Nov 24 10:10:36 compute-1 ceph-mon[80009]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 24 10:10:36 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-10:10:36.535228) [db/compaction/compaction_job.cc:1663] [default] [JOB 44] Compacted 1@0 + 1@6 files to L6 => 12552617 bytes
Nov 24 10:10:36 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-10:10:36.536881) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 174.2 rd, 149.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.3, 11.7 +0.0 blob) out(12.0 +0.0 blob), read-write-amplify(11.3) write-amplify(5.2) OK, records in: 7254, records dropped: 516 output_compression: NoCompression
Nov 24 10:10:36 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-10:10:36.536953) EVENT_LOG_v1 {"time_micros": 1763979036536926, "job": 44, "event": "compaction_finished", "compaction_time_micros": 84230, "compaction_time_cpu_micros": 51362, "output_level": 6, "num_output_files": 1, "total_output_size": 12552617, "num_input_records": 7254, "num_output_records": 6738, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 24 10:10:36 compute-1 ceph-mon[80009]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000074.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 10:10:36 compute-1 ceph-mon[80009]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763979036538221, "job": 44, "event": "table_file_deletion", "file_number": 74}
Nov 24 10:10:36 compute-1 ceph-mon[80009]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000072.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 10:10:36 compute-1 ceph-mon[80009]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763979036542798, "job": 44, "event": "table_file_deletion", "file_number": 72}
Nov 24 10:10:36 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-10:10:36.450517) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 10:10:36 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-10:10:36.543054) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 10:10:36 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-10:10:36.543063) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 10:10:36 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-10:10:36.543067) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 10:10:36 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-10:10:36.543070) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 10:10:36 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-10:10:36.543073) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 10:10:37 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:10:37 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:10:37 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:10:37.401 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:10:38 compute-1 sudo[253330]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 10:10:38 compute-1 sudo[253330]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 10:10:38 compute-1 sudo[253330]: pam_unix(sudo:session): session closed for user root
Nov 24 10:10:38 compute-1 ceph-mon[80009]: pgmap v1319: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:10:38 compute-1 sudo[253356]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Nov 24 10:10:38 compute-1 sudo[253356]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 10:10:38 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:10:38 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:10:38 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:10:38.491 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:10:38 compute-1 podman[253354]: 2025-11-24 10:10:38.507367494 +0000 UTC m=+0.073876012 container health_status 6794d9d2f16f865643977633ea2bfec0506af6c4f15262c278b71a4c0754ea0b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent)
Nov 24 10:10:39 compute-1 sudo[253356]: pam_unix(sudo:session): session closed for user root
Nov 24 10:10:39 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Nov 24 10:10:39 compute-1 ceph-mon[80009]: log_channel(audit) log [INF] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 24 10:10:39 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 24 10:10:39 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 10:10:39 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Nov 24 10:10:39 compute-1 ceph-mon[80009]: log_channel(audit) log [INF] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 10:10:39 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Nov 24 10:10:39 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Nov 24 10:10:39 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Nov 24 10:10:39 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 10:10:39 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Nov 24 10:10:39 compute-1 ceph-mon[80009]: log_channel(audit) log [INF] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 10:10:39 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 24 10:10:39 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 10:10:39 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:10:39 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:10:39 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:10:39.405 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:10:39 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 24 10:10:39 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 24 10:10:39 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 10:10:39 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 10:10:39 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 10:10:39 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 10:10:39 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 10:10:39 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 10:10:39 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 10:10:39 compute-1 sshd-session[253417]: Invalid user cardano from 80.94.92.165 port 58710
Nov 24 10:10:39 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:10:39 compute-1 sshd-session[253417]: Connection closed by invalid user cardano 80.94.92.165 port 58710 [preauth]
Nov 24 10:10:40 compute-1 ceph-mon[80009]: pgmap v1320: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Nov 24 10:10:40 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:10:40 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:10:40 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:10:40.494 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:10:41 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:10:41 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:10:41 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:10:41.407 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:10:41 compute-1 nova_compute[230010]: 2025-11-24 10:10:41.490 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:10:41 compute-1 nova_compute[230010]: 2025-11-24 10:10:41.493 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:10:42 compute-1 ceph-mon[80009]: pgmap v1321: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Nov 24 10:10:42 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:10:42 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:10:42 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:10:42.500 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:10:43 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:10:43 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:10:43 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:10:43.410 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:10:44 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 24 10:10:44 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 24 10:10:44 compute-1 sudo[253434]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 24 10:10:44 compute-1 sudo[253434]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 10:10:44 compute-1 sudo[253434]: pam_unix(sudo:session): session closed for user root
Nov 24 10:10:44 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:10:44 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:10:44 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:10:44.503 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:10:44 compute-1 ceph-mon[80009]: pgmap v1322: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:10:44 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 10:10:44 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 10:10:44 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:10:45 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:10:45 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:10:45 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:10:45.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:10:45 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 10:10:45 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 10:10:45 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 10:10:46 compute-1 nova_compute[230010]: 2025-11-24 10:10:46.493 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:10:46 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:10:46 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:10:46 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:10:46.507 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:10:46 compute-1 ceph-mon[80009]: pgmap v1323: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Nov 24 10:10:47 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:10:47 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:10:47 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:10:47.417 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:10:48 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:10:48 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:10:48 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:10:48.511 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:10:48 compute-1 ceph-mon[80009]: pgmap v1324: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:10:49 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:10:49 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:10:49 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:10:49.421 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:10:49 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:10:50 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:10:50 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:10:50 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:10:50.515 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:10:50 compute-1 ceph-mon[80009]: pgmap v1325: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Nov 24 10:10:50 compute-1 sudo[253462]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 10:10:50 compute-1 sudo[253462]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 10:10:50 compute-1 sudo[253462]: pam_unix(sudo:session): session closed for user root
Nov 24 10:10:51 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:10:51 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:10:51 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:10:51.425 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:10:51 compute-1 nova_compute[230010]: 2025-11-24 10:10:51.496 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 24 10:10:51 compute-1 nova_compute[230010]: 2025-11-24 10:10:51.498 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 24 10:10:51 compute-1 nova_compute[230010]: 2025-11-24 10:10:51.499 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5004 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Nov 24 10:10:51 compute-1 nova_compute[230010]: 2025-11-24 10:10:51.499 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Nov 24 10:10:51 compute-1 nova_compute[230010]: 2025-11-24 10:10:51.529 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:10:51 compute-1 nova_compute[230010]: 2025-11-24 10:10:51.530 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Nov 24 10:10:52 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:10:52 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:10:52 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:10:52.519 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:10:52 compute-1 ceph-mon[80009]: pgmap v1326: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:10:53 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:10:53 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:10:53 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:10:53.428 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:10:54 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:10:54 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:10:54 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:10:54.524 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:10:54 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:10:54 compute-1 ceph-mon[80009]: pgmap v1327: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:10:55 compute-1 podman[253489]: 2025-11-24 10:10:55.371674454 +0000 UTC m=+0.106161514 container health_status 16798e42088c21440960ac0ba4f86339f9edab5c078277a1dcf4fb01c220329c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=multipathd, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 24 10:10:55 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:10:55 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:10:55 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:10:55.431 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:10:56 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:10:56 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:10:56 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:10:56.528 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:10:56 compute-1 nova_compute[230010]: 2025-11-24 10:10:56.530 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:10:56 compute-1 ceph-mon[80009]: pgmap v1328: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:10:57 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:10:57 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:10:57 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:10:57.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:10:58 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:10:58 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:10:58 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:10:58.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:10:58 compute-1 ceph-mon[80009]: pgmap v1329: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:10:59 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:10:59 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.002000047s ======
Nov 24 10:10:59 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:10:59.438 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000047s
Nov 24 10:10:59 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:11:00 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 10:11:00 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 10:11:00 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:11:00 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:11:00 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:11:00.536 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:11:01 compute-1 ceph-mon[80009]: pgmap v1330: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:11:01 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 10:11:01 compute-1 podman[253512]: 2025-11-24 10:11:01.422264253 +0000 UTC m=+0.151461704 container health_status c4a7fece2a8abbd773f184380ebf0443df5298e1c3e7a580495db5c46749acd2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, container_name=ovn_controller, org.label-schema.license=GPLv2)
Nov 24 10:11:01 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:11:01 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:11:01 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:11:01.442 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:11:01 compute-1 nova_compute[230010]: 2025-11-24 10:11:01.532 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 24 10:11:01 compute-1 nova_compute[230010]: 2025-11-24 10:11:01.534 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:11:01 compute-1 nova_compute[230010]: 2025-11-24 10:11:01.534 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Nov 24 10:11:01 compute-1 nova_compute[230010]: 2025-11-24 10:11:01.534 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Nov 24 10:11:01 compute-1 nova_compute[230010]: 2025-11-24 10:11:01.535 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Nov 24 10:11:01 compute-1 nova_compute[230010]: 2025-11-24 10:11:01.538 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:11:01 compute-1 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 24 10:11:01 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Nov 24 10:11:01 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4142673402' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 24 10:11:01 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Nov 24 10:11:01 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4142673402' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 24 10:11:02 compute-1 ceph-mon[80009]: pgmap v1331: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:11:02 compute-1 ceph-mon[80009]: from='client.? 192.168.122.10:0/4142673402' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 24 10:11:02 compute-1 ceph-mon[80009]: from='client.? 192.168.122.10:0/4142673402' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 24 10:11:02 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:11:02 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:11:02 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:11:02.539 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:11:03 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:11:03 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:11:03 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:11:03.445 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:11:04 compute-1 ceph-mon[80009]: pgmap v1332: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:11:04 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:11:04 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:11:04 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:11:04.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:11:04 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:11:05 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:11:05 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:11:05 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:11:05.448 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:11:06 compute-1 ceph-mon[80009]: pgmap v1333: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:11:06 compute-1 nova_compute[230010]: 2025-11-24 10:11:06.534 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:11:06 compute-1 nova_compute[230010]: 2025-11-24 10:11:06.539 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:11:06 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:11:06 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:11:06 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:11:06.546 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:11:07 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:11:07 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:11:07 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:11:07.451 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:11:08 compute-1 ceph-mon[80009]: pgmap v1334: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:11:08 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:11:08 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:11:08 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:11:08.550 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:11:09 compute-1 podman[253544]: 2025-11-24 10:11:09.335198773 +0000 UTC m=+0.063026647 container health_status 6794d9d2f16f865643977633ea2bfec0506af6c4f15262c278b71a4c0754ea0b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, config_id=ovn_metadata_agent)
Nov 24 10:11:09 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:11:09 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:11:09 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:11:09.454 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:11:09 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:11:10 compute-1 ceph-mon[80009]: pgmap v1335: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:11:10 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:11:10 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:11:10 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:11:10.554 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:11:10 compute-1 sudo[253565]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 10:11:10 compute-1 sudo[253565]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 10:11:10 compute-1 sudo[253565]: pam_unix(sudo:session): session closed for user root
Nov 24 10:11:11 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:11:11 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:11:11 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:11:11.457 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:11:11 compute-1 nova_compute[230010]: 2025-11-24 10:11:11.538 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:11:12 compute-1 ceph-mon[80009]: pgmap v1336: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:11:12 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:11:12 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:11:12 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:11:12.559 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:11:13 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:11:13 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 10:11:13 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:11:13.459 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 10:11:14 compute-1 ceph-mon[80009]: pgmap v1337: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:11:14 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:11:14 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:11:14 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:11:14.563 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:11:14 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:11:15 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 10:11:15 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 10:11:15 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:11:15 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 10:11:15 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:11:15.462 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 10:11:16 compute-1 ceph-mon[80009]: pgmap v1338: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:11:16 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 10:11:16 compute-1 nova_compute[230010]: 2025-11-24 10:11:16.541 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:11:16 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:11:16 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:11:16 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:11:16.566 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:11:17 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:11:17 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:11:17 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:11:17.465 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:11:18 compute-1 ceph-mon[80009]: pgmap v1339: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:11:18 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:11:18 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:11:18 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:11:18.570 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:11:19 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:11:19 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 10:11:19 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:11:19.468 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 10:11:19 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:11:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 10:11:20.079 142336 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 10:11:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 10:11:20.079 142336 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 10:11:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 10:11:20.080 142336 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 10:11:20 compute-1 ceph-mon[80009]: pgmap v1340: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:11:20 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:11:20 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:11:20 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:11:20.574 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:11:21 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:11:21 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:11:21 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:11:21.472 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:11:21 compute-1 nova_compute[230010]: 2025-11-24 10:11:21.541 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:11:21 compute-1 nova_compute[230010]: 2025-11-24 10:11:21.544 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:11:22 compute-1 ceph-mon[80009]: pgmap v1341: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:11:22 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:11:22 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:11:22 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:11:22.578 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:11:22 compute-1 nova_compute[230010]: 2025-11-24 10:11:22.765 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 10:11:23 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:11:23 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:11:23 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:11:23.476 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:11:24 compute-1 ceph-mon[80009]: pgmap v1342: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:11:24 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:11:24 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:11:24 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:11:24 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:11:24.582 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:11:25 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:11:25 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:11:25 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:11:25.480 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:11:25 compute-1 nova_compute[230010]: 2025-11-24 10:11:25.765 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 10:11:26 compute-1 podman[253598]: 2025-11-24 10:11:26.327957522 +0000 UTC m=+0.065961699 container health_status 16798e42088c21440960ac0ba4f86339f9edab5c078277a1dcf4fb01c220329c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251118, container_name=multipathd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd)
Nov 24 10:11:26 compute-1 ceph-mon[80009]: pgmap v1343: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:11:26 compute-1 nova_compute[230010]: 2025-11-24 10:11:26.543 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:11:26 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:11:26 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:11:26 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:11:26.586 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:11:27 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:11:27 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:11:27 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:11:27.484 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:11:27 compute-1 nova_compute[230010]: 2025-11-24 10:11:27.765 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 10:11:27 compute-1 nova_compute[230010]: 2025-11-24 10:11:27.766 230014 DEBUG nova.compute.manager [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 24 10:11:28 compute-1 ceph-mon[80009]: pgmap v1344: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:11:28 compute-1 ceph-mon[80009]: from='client.? 192.168.122.100:0/4194509216' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 10:11:28 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:11:28 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:11:28 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:11:28.591 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:11:29 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:11:29 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 10:11:29 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:11:29.487 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 10:11:29 compute-1 ceph-mon[80009]: from='client.? 192.168.122.100:0/3357223504' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 10:11:29 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:11:29 compute-1 nova_compute[230010]: 2025-11-24 10:11:29.760 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 10:11:30 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 10:11:30 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 10:11:30 compute-1 ceph-mon[80009]: pgmap v1345: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:11:30 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 10:11:30 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:11:30 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:11:30 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:11:30.595 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:11:30 compute-1 sudo[253620]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 10:11:30 compute-1 sudo[253620]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 10:11:30 compute-1 sudo[253620]: pam_unix(sudo:session): session closed for user root
Nov 24 10:11:31 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:11:31 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:11:31 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:11:31.491 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:11:31 compute-1 nova_compute[230010]: 2025-11-24 10:11:31.547 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 24 10:11:32 compute-1 podman[253646]: 2025-11-24 10:11:32.390376821 +0000 UTC m=+0.120760671 container health_status c4a7fece2a8abbd773f184380ebf0443df5298e1c3e7a580495db5c46749acd2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, io.buildah.version=1.41.3)
Nov 24 10:11:32 compute-1 ceph-mon[80009]: pgmap v1346: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:11:32 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:11:32 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:11:32 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:11:32.599 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:11:32 compute-1 nova_compute[230010]: 2025-11-24 10:11:32.765 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 10:11:33 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:11:33 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:11:33 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:11:33.494 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:11:33 compute-1 ceph-mon[80009]: from='client.? 192.168.122.102:0/1801012329' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 10:11:33 compute-1 nova_compute[230010]: 2025-11-24 10:11:33.760 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 10:11:33 compute-1 nova_compute[230010]: 2025-11-24 10:11:33.777 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 10:11:34 compute-1 ceph-mon[80009]: pgmap v1347: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:11:34 compute-1 ceph-mon[80009]: from='client.? 192.168.122.102:0/451229042' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 10:11:34 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:11:34 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:11:34 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:11:34 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:11:34.603 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:11:35 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:11:35 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 10:11:35 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:11:35.497 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 10:11:35 compute-1 nova_compute[230010]: 2025-11-24 10:11:35.765 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 10:11:35 compute-1 nova_compute[230010]: 2025-11-24 10:11:35.766 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 10:11:35 compute-1 nova_compute[230010]: 2025-11-24 10:11:35.792 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 10:11:35 compute-1 nova_compute[230010]: 2025-11-24 10:11:35.793 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 10:11:35 compute-1 nova_compute[230010]: 2025-11-24 10:11:35.793 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 10:11:35 compute-1 nova_compute[230010]: 2025-11-24 10:11:35.793 230014 DEBUG nova.compute.resource_tracker [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 24 10:11:35 compute-1 nova_compute[230010]: 2025-11-24 10:11:35.794 230014 DEBUG oslo_concurrency.processutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 10:11:36 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 24 10:11:36 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3225313100' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 10:11:36 compute-1 nova_compute[230010]: 2025-11-24 10:11:36.272 230014 DEBUG oslo_concurrency.processutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.478s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 10:11:36 compute-1 nova_compute[230010]: 2025-11-24 10:11:36.427 230014 WARNING nova.virt.libvirt.driver [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 24 10:11:36 compute-1 nova_compute[230010]: 2025-11-24 10:11:36.428 230014 DEBUG nova.compute.resource_tracker [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4869MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 24 10:11:36 compute-1 nova_compute[230010]: 2025-11-24 10:11:36.429 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 10:11:36 compute-1 nova_compute[230010]: 2025-11-24 10:11:36.429 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 10:11:36 compute-1 nova_compute[230010]: 2025-11-24 10:11:36.478 230014 DEBUG nova.compute.resource_tracker [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 24 10:11:36 compute-1 nova_compute[230010]: 2025-11-24 10:11:36.479 230014 DEBUG nova.compute.resource_tracker [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 24 10:11:36 compute-1 nova_compute[230010]: 2025-11-24 10:11:36.495 230014 DEBUG oslo_concurrency.processutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 10:11:36 compute-1 nova_compute[230010]: 2025-11-24 10:11:36.549 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:11:36 compute-1 ceph-mon[80009]: pgmap v1348: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:11:36 compute-1 ceph-mon[80009]: from='client.? 192.168.122.101:0/3225313100' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 10:11:36 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:11:36 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:11:36 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:11:36.606 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:11:36 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 24 10:11:36 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2059603497' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 10:11:36 compute-1 nova_compute[230010]: 2025-11-24 10:11:36.987 230014 DEBUG oslo_concurrency.processutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.491s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 10:11:36 compute-1 nova_compute[230010]: 2025-11-24 10:11:36.993 230014 DEBUG nova.compute.provider_tree [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Inventory has not changed in ProviderTree for provider: 1b7b0f22-dba8-42a8-9de3-763c9152946e update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 24 10:11:37 compute-1 nova_compute[230010]: 2025-11-24 10:11:37.009 230014 DEBUG nova.scheduler.client.report [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Inventory has not changed for provider 1b7b0f22-dba8-42a8-9de3-763c9152946e based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 24 10:11:37 compute-1 nova_compute[230010]: 2025-11-24 10:11:37.011 230014 DEBUG nova.compute.resource_tracker [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 24 10:11:37 compute-1 nova_compute[230010]: 2025-11-24 10:11:37.012 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.582s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 10:11:37 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:11:37 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:11:37 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:11:37.501 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:11:37 compute-1 ceph-mon[80009]: from='client.? 192.168.122.101:0/2059603497' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 10:11:38 compute-1 nova_compute[230010]: 2025-11-24 10:11:38.012 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 10:11:38 compute-1 nova_compute[230010]: 2025-11-24 10:11:38.012 230014 DEBUG nova.compute.manager [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 24 10:11:38 compute-1 nova_compute[230010]: 2025-11-24 10:11:38.012 230014 DEBUG nova.compute.manager [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 24 10:11:38 compute-1 nova_compute[230010]: 2025-11-24 10:11:38.027 230014 DEBUG nova.compute.manager [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 24 10:11:38 compute-1 ceph-mon[80009]: pgmap v1349: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:11:38 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:11:38 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:11:38 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:11:38.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:11:39 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:11:39 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:11:39 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:11:39.503 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:11:39 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:11:40 compute-1 podman[253720]: 2025-11-24 10:11:40.326161132 +0000 UTC m=+0.065418715 container health_status 6794d9d2f16f865643977633ea2bfec0506af6c4f15262c278b71a4c0754ea0b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible)
Nov 24 10:11:40 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:11:40 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:11:40 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:11:40.613 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:11:40 compute-1 ceph-mon[80009]: pgmap v1350: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:11:41 compute-1 sshd-session[253740]: Invalid user 1111 from 164.92.213.168 port 52468
Nov 24 10:11:41 compute-1 sshd-session[253740]: Connection closed by invalid user 1111 164.92.213.168 port 52468 [preauth]
Nov 24 10:11:41 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:11:41 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 10:11:41 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:11:41.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 10:11:41 compute-1 nova_compute[230010]: 2025-11-24 10:11:41.553 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 24 10:11:42 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:11:42 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:11:42 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:11:42.616 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:11:42 compute-1 ceph-mon[80009]: pgmap v1351: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:11:43 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:11:43 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:11:43 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:11:43.509 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:11:44 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:11:44 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:11:44 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:11:44 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:11:44.619 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:11:44 compute-1 sudo[253744]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 10:11:44 compute-1 sudo[253744]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 10:11:44 compute-1 sudo[253744]: pam_unix(sudo:session): session closed for user root
Nov 24 10:11:44 compute-1 ceph-mon[80009]: pgmap v1352: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:11:44 compute-1 sudo[253769]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ls
Nov 24 10:11:44 compute-1 sudo[253769]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 10:11:45 compute-1 podman[253869]: 2025-11-24 10:11:45.386674407 +0000 UTC m=+0.086573322 container exec fca3d6a645ca50145f34396c21cf8798c75622ec7e27bb7d7b9d2df471762abc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-crash-compute-1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 10:11:45 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 10:11:45 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 10:11:45 compute-1 podman[253869]: 2025-11-24 10:11:45.487631033 +0000 UTC m=+0.187529958 container exec_died fca3d6a645ca50145f34396c21cf8798c75622ec7e27bb7d7b9d2df471762abc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-crash-compute-1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 24 10:11:45 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:11:45 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:11:45 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:11:45.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:11:45 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 10:11:46 compute-1 podman[254007]: 2025-11-24 10:11:46.103876332 +0000 UTC m=+0.063283973 container exec 8385dba62896146966763f0bcd6866f05f5474182998a6b8c2dabcbf77545f8c (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-node-exporter-compute-1, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 24 10:11:46 compute-1 podman[254007]: 2025-11-24 10:11:46.116916471 +0000 UTC m=+0.076324122 container exec_died 8385dba62896146966763f0bcd6866f05f5474182998a6b8c2dabcbf77545f8c (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-node-exporter-compute-1, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 24 10:11:46 compute-1 nova_compute[230010]: 2025-11-24 10:11:46.555 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 24 10:11:46 compute-1 podman[254126]: 2025-11-24 10:11:46.618452307 +0000 UTC m=+0.062195665 container exec 5e659f329edd66b319b97f09144add025da99dc20b0b6d44046c2f8d632eb914 (image=quay.io/ceph/haproxy:2.3, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-haproxy-nfs-cephfs-compute-1-rsdpvy)
Nov 24 10:11:46 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:11:46 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:11:46 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:11:46.621 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:11:46 compute-1 podman[254126]: 2025-11-24 10:11:46.627033498 +0000 UTC m=+0.070776836 container exec_died 5e659f329edd66b319b97f09144add025da99dc20b0b6d44046c2f8d632eb914 (image=quay.io/ceph/haproxy:2.3, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-haproxy-nfs-cephfs-compute-1-rsdpvy)
Nov 24 10:11:46 compute-1 ceph-mon[80009]: pgmap v1353: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:11:46 compute-1 podman[254193]: 2025-11-24 10:11:46.86613563 +0000 UTC m=+0.071703489 container exec b150f4574d15a215dc003733c271f0cef75e4de7b269181ad25614a88f483866 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-keepalived-nfs-cephfs-compute-1-vrgskq, architecture=x86_64, build-date=2023-02-22T09:23:20, vendor=Red Hat, Inc., version=2.2.4, distribution-scope=public, summary=Provides keepalived on RHEL 9 for Ceph., io.openshift.expose-services=, description=keepalived for Ceph, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=keepalived, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=Ceph keepalived, vcs-type=git, io.buildah.version=1.28.2, release=1793, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, com.redhat.component=keepalived-container, io.k8s.display-name=Keepalived on RHEL 9)
Nov 24 10:11:46 compute-1 podman[254193]: 2025-11-24 10:11:46.876198067 +0000 UTC m=+0.081765906 container exec_died b150f4574d15a215dc003733c271f0cef75e4de7b269181ad25614a88f483866 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-keepalived-nfs-cephfs-compute-1-vrgskq, version=2.2.4, distribution-scope=public, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, com.redhat.component=keepalived-container, io.openshift.tags=Ceph keepalived, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, name=keepalived, description=keepalived for Ceph, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, release=1793, summary=Provides keepalived on RHEL 9 for Ceph., io.k8s.display-name=Keepalived on RHEL 9, architecture=x86_64, vcs-type=git, io.buildah.version=1.28.2, build-date=2023-02-22T09:23:20)
Nov 24 10:11:46 compute-1 sudo[253769]: pam_unix(sudo:session): session closed for user root
Nov 24 10:11:46 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Nov 24 10:11:46 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Nov 24 10:11:47 compute-1 sudo[254225]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 10:11:47 compute-1 sudo[254225]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 10:11:47 compute-1 sudo[254225]: pam_unix(sudo:session): session closed for user root
Nov 24 10:11:47 compute-1 sudo[254250]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Nov 24 10:11:47 compute-1 sudo[254250]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 10:11:47 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Nov 24 10:11:47 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Nov 24 10:11:47 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:11:47 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:11:47 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:11:47.515 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:11:47 compute-1 sudo[254250]: pam_unix(sudo:session): session closed for user root
Nov 24 10:11:47 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0)
Nov 24 10:11:47 compute-1 ceph-mon[80009]: log_channel(audit) log [INF] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Nov 24 10:11:47 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 10:11:47 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 10:11:47 compute-1 ceph-mon[80009]: pgmap v1354: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:11:47 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 10:11:47 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 10:11:47 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Nov 24 10:11:47 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Nov 24 10:11:48 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0)
Nov 24 10:11:48 compute-1 ceph-mon[80009]: log_channel(audit) log [INF] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Nov 24 10:11:48 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 24 10:11:48 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 10:11:48 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Nov 24 10:11:48 compute-1 ceph-mon[80009]: log_channel(audit) log [INF] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 10:11:48 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Nov 24 10:11:48 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Nov 24 10:11:48 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Nov 24 10:11:48 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 10:11:48 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Nov 24 10:11:48 compute-1 ceph-mon[80009]: log_channel(audit) log [INF] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 10:11:48 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 24 10:11:48 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 10:11:48 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:11:48 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:11:48 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:11:48.625 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:11:49 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Nov 24 10:11:49 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Nov 24 10:11:49 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 10:11:49 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 10:11:49 compute-1 ceph-mon[80009]: pgmap v1355: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Nov 24 10:11:49 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 10:11:49 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 10:11:49 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 10:11:49 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 10:11:49 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 10:11:49 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:11:49 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:11:49 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:11:49.519 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:11:49 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:11:50 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:11:50 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:11:50 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:11:50.629 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:11:50 compute-1 sudo[254308]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 10:11:50 compute-1 sudo[254308]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 10:11:50 compute-1 sudo[254308]: pam_unix(sudo:session): session closed for user root
Nov 24 10:11:51 compute-1 ceph-mon[80009]: pgmap v1356: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Nov 24 10:11:51 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:11:51 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:11:51 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:11:51.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:11:51 compute-1 nova_compute[230010]: 2025-11-24 10:11:51.604 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 24 10:11:51 compute-1 nova_compute[230010]: 2025-11-24 10:11:51.605 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 24 10:11:51 compute-1 nova_compute[230010]: 2025-11-24 10:11:51.605 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5048 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Nov 24 10:11:51 compute-1 nova_compute[230010]: 2025-11-24 10:11:51.606 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Nov 24 10:11:51 compute-1 nova_compute[230010]: 2025-11-24 10:11:51.607 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:11:51 compute-1 nova_compute[230010]: 2025-11-24 10:11:51.607 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Nov 24 10:11:52 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:11:52 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:11:52 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:11:52.633 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:11:53 compute-1 ceph-mon[80009]: pgmap v1357: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.6 KiB/s rd, 1 op/s
Nov 24 10:11:53 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 24 10:11:53 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 24 10:11:53 compute-1 sudo[254334]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 24 10:11:53 compute-1 sudo[254334]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 10:11:53 compute-1 sudo[254334]: pam_unix(sudo:session): session closed for user root
Nov 24 10:11:53 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:11:53 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:11:53 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:11:53.527 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:11:54 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 10:11:54 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 10:11:54 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:11:54 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:11:54 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:11:54 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:11:54.637 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:11:55 compute-1 ceph-mon[80009]: pgmap v1358: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Nov 24 10:11:55 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:11:55 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:11:55 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:11:55.531 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:11:56 compute-1 nova_compute[230010]: 2025-11-24 10:11:56.608 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 24 10:11:56 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:11:56 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:11:56 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:11:56.641 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:11:57 compute-1 podman[254361]: 2025-11-24 10:11:57.383862895 +0000 UTC m=+0.102152425 container health_status 16798e42088c21440960ac0ba4f86339f9edab5c078277a1dcf4fb01c220329c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=multipathd, container_name=multipathd, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Nov 24 10:11:57 compute-1 ceph-mon[80009]: pgmap v1359: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Nov 24 10:11:57 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:11:57 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 10:11:57 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:11:57.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 10:11:58 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:11:58 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:11:58 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:11:58.645 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:11:59 compute-1 ceph-mon[80009]: pgmap v1360: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Nov 24 10:11:59 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:11:59 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:11:59 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:11:59.539 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:11:59 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:12:00 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 10:12:00 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 10:12:00 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:12:00 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:12:00 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:12:00.649 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:12:01 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:12:01 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:12:01 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:12:01.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:12:02 compute-1 nova_compute[230010]: 2025-11-24 10:12:02.024 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:12:02 compute-1 ceph-mon[80009]: pgmap v1361: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:12:02 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 10:12:02 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:12:02 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:12:02 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:12:02.652 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:12:03 compute-1 ceph-mon[80009]: pgmap v1362: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:12:03 compute-1 podman[254386]: 2025-11-24 10:12:03.375057598 +0000 UTC m=+0.119224164 container health_status c4a7fece2a8abbd773f184380ebf0443df5298e1c3e7a580495db5c46749acd2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 24 10:12:03 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:12:03 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:12:03 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:12:03.546 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:12:04 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:12:04 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:12:04 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:12:04 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:12:04.656 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:12:05 compute-1 ceph-mon[80009]: pgmap v1363: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:12:05 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:12:05 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:12:05 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:12:05.549 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:12:06 compute-1 nova_compute[230010]: 2025-11-24 10:12:06.616 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:12:06 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:12:06 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:12:06 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:12:06.661 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:12:07 compute-1 nova_compute[230010]: 2025-11-24 10:12:07.028 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:12:07 compute-1 ceph-mon[80009]: pgmap v1364: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:12:07 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:12:07 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:12:07 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:12:07.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:12:08 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:12:08 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:12:08 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:12:08.665 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:12:09 compute-1 ceph-mon[80009]: pgmap v1365: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:12:09 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:12:09 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:12:09 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:12:09.556 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:12:09 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:12:10 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:12:10 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:12:10 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:12:10.669 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:12:11 compute-1 sudo[254416]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 10:12:11 compute-1 sudo[254416]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 10:12:11 compute-1 sudo[254416]: pam_unix(sudo:session): session closed for user root
Nov 24 10:12:11 compute-1 podman[254440]: 2025-11-24 10:12:11.177917519 +0000 UTC m=+0.066319387 container health_status 6794d9d2f16f865643977633ea2bfec0506af6c4f15262c278b71a4c0754ea0b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent)
Nov 24 10:12:11 compute-1 ceph-mon[80009]: pgmap v1366: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:12:11 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:12:11 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:12:11 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:12:11.560 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:12:11 compute-1 nova_compute[230010]: 2025-11-24 10:12:11.664 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:12:12 compute-1 nova_compute[230010]: 2025-11-24 10:12:12.030 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:12:12 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:12:12 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:12:12 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:12:12.673 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:12:13 compute-1 ceph-mon[80009]: pgmap v1367: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:12:13 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:12:13 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:12:13 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:12:13.563 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:12:14 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:12:14 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:12:14 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:12:14 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:12:14.677 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:12:15 compute-1 ceph-mon[80009]: pgmap v1368: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:12:15 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 10:12:15 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 10:12:15 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:12:15 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:12:15 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:12:15.565 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:12:16 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 10:12:16 compute-1 nova_compute[230010]: 2025-11-24 10:12:16.665 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:12:16 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:12:16 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:12:16 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:12:16.681 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:12:17 compute-1 nova_compute[230010]: 2025-11-24 10:12:17.033 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:12:17 compute-1 ceph-mon[80009]: pgmap v1369: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:12:17 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:12:17 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:12:17 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:12:17.569 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:12:18 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:12:18 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:12:18 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:12:18.685 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:12:19 compute-1 ceph-mon[80009]: pgmap v1370: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:12:19 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:12:19 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:12:19 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:12:19.573 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:12:19 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:12:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 10:12:20.080 142336 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 10:12:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 10:12:20.080 142336 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 10:12:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 10:12:20.080 142336 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 10:12:20 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:12:20 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 10:12:20 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:12:20.689 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 10:12:21 compute-1 ceph-mon[80009]: pgmap v1371: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:12:21 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:12:21 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:12:21 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:12:21.577 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:12:21 compute-1 nova_compute[230010]: 2025-11-24 10:12:21.667 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:12:21 compute-1 nova_compute[230010]: 2025-11-24 10:12:21.766 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 10:12:22 compute-1 nova_compute[230010]: 2025-11-24 10:12:22.035 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:12:22 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:12:22 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:12:22 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:12:22.693 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:12:23 compute-1 ceph-mon[80009]: pgmap v1372: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:12:23 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:12:23 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:12:23 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:12:23.581 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:12:24 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:12:24 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:12:24 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:12:24 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:12:24.697 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:12:24 compute-1 nova_compute[230010]: 2025-11-24 10:12:24.775 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 10:12:25 compute-1 ceph-mon[80009]: pgmap v1373: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:12:25 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:12:25 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:12:25 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:12:25.584 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:12:26 compute-1 nova_compute[230010]: 2025-11-24 10:12:26.671 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:12:26 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:12:26 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:12:26 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:12:26.702 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:12:27 compute-1 nova_compute[230010]: 2025-11-24 10:12:27.037 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:12:27 compute-1 ceph-mon[80009]: pgmap v1374: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:12:27 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:12:27 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:12:27 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:12:27.586 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:12:27 compute-1 nova_compute[230010]: 2025-11-24 10:12:27.765 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 10:12:28 compute-1 podman[254469]: 2025-11-24 10:12:28.326708441 +0000 UTC m=+0.063749074 container health_status 16798e42088c21440960ac0ba4f86339f9edab5c078277a1dcf4fb01c220329c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Nov 24 10:12:28 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:12:28 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:12:28 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:12:28.705 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:12:28 compute-1 nova_compute[230010]: 2025-11-24 10:12:28.764 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 10:12:28 compute-1 nova_compute[230010]: 2025-11-24 10:12:28.765 230014 DEBUG nova.compute.manager [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 24 10:12:29 compute-1 ceph-mon[80009]: pgmap v1375: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:12:29 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:12:29 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:12:29 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:12:29.589 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:12:29 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:12:29 compute-1 nova_compute[230010]: 2025-11-24 10:12:29.760 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 10:12:30 compute-1 ceph-mon[80009]: from='client.? 192.168.122.100:0/3648104697' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 10:12:30 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 10:12:30 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 10:12:30 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:12:30 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:12:30 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:12:30.710 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:12:31 compute-1 sudo[254490]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 10:12:31 compute-1 sudo[254490]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 10:12:31 compute-1 sudo[254490]: pam_unix(sudo:session): session closed for user root
Nov 24 10:12:31 compute-1 ceph-mon[80009]: pgmap v1376: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:12:31 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 10:12:31 compute-1 ceph-mon[80009]: from='client.? 192.168.122.100:0/574713676' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 10:12:31 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:12:31 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:12:31 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:12:31.592 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:12:31 compute-1 nova_compute[230010]: 2025-11-24 10:12:31.717 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:12:32 compute-1 nova_compute[230010]: 2025-11-24 10:12:32.039 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:12:32 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:12:32 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:12:32 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:12:32.713 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:12:32 compute-1 nova_compute[230010]: 2025-11-24 10:12:32.765 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 10:12:33 compute-1 ceph-mon[80009]: pgmap v1377: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:12:33 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:12:33 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 10:12:33 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:12:33.592 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 10:12:33 compute-1 nova_compute[230010]: 2025-11-24 10:12:33.765 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 10:12:34 compute-1 podman[254517]: 2025-11-24 10:12:34.371925907 +0000 UTC m=+0.102708619 container health_status c4a7fece2a8abbd773f184380ebf0443df5298e1c3e7a580495db5c46749acd2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 24 10:12:34 compute-1 ceph-mon[80009]: from='client.? 192.168.122.102:0/1166892128' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 10:12:34 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:12:34 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:12:34 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:12:34 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:12:34.716 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:12:35 compute-1 ceph-mon[80009]: pgmap v1378: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:12:35 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:12:35 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:12:35 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:12:35.595 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:12:35 compute-1 nova_compute[230010]: 2025-11-24 10:12:35.766 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 10:12:36 compute-1 ceph-mon[80009]: from='client.? 192.168.122.102:0/4225103940' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 10:12:36 compute-1 nova_compute[230010]: 2025-11-24 10:12:36.719 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:12:36 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:12:36 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:12:36 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:12:36.720 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:12:36 compute-1 nova_compute[230010]: 2025-11-24 10:12:36.764 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 10:12:36 compute-1 nova_compute[230010]: 2025-11-24 10:12:36.765 230014 DEBUG nova.compute.manager [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Nov 24 10:12:36 compute-1 nova_compute[230010]: 2025-11-24 10:12:36.779 230014 DEBUG nova.compute.manager [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Nov 24 10:12:37 compute-1 nova_compute[230010]: 2025-11-24 10:12:37.040 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:12:37 compute-1 ceph-mon[80009]: pgmap v1379: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:12:37 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:12:37 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:12:37 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:12:37.598 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:12:37 compute-1 nova_compute[230010]: 2025-11-24 10:12:37.779 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 10:12:37 compute-1 nova_compute[230010]: 2025-11-24 10:12:37.780 230014 DEBUG nova.compute.manager [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 24 10:12:37 compute-1 nova_compute[230010]: 2025-11-24 10:12:37.780 230014 DEBUG nova.compute.manager [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 24 10:12:37 compute-1 nova_compute[230010]: 2025-11-24 10:12:37.795 230014 DEBUG nova.compute.manager [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 24 10:12:37 compute-1 nova_compute[230010]: 2025-11-24 10:12:37.796 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 10:12:37 compute-1 nova_compute[230010]: 2025-11-24 10:12:37.824 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 10:12:37 compute-1 nova_compute[230010]: 2025-11-24 10:12:37.825 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 10:12:37 compute-1 nova_compute[230010]: 2025-11-24 10:12:37.825 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 10:12:37 compute-1 nova_compute[230010]: 2025-11-24 10:12:37.825 230014 DEBUG nova.compute.resource_tracker [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 24 10:12:37 compute-1 nova_compute[230010]: 2025-11-24 10:12:37.825 230014 DEBUG oslo_concurrency.processutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 10:12:38 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 24 10:12:38 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1011159927' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 10:12:38 compute-1 nova_compute[230010]: 2025-11-24 10:12:38.271 230014 DEBUG oslo_concurrency.processutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.446s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 10:12:38 compute-1 nova_compute[230010]: 2025-11-24 10:12:38.439 230014 WARNING nova.virt.libvirt.driver [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 24 10:12:38 compute-1 nova_compute[230010]: 2025-11-24 10:12:38.440 230014 DEBUG nova.compute.resource_tracker [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4885MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 24 10:12:38 compute-1 nova_compute[230010]: 2025-11-24 10:12:38.440 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 10:12:38 compute-1 nova_compute[230010]: 2025-11-24 10:12:38.441 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 10:12:38 compute-1 nova_compute[230010]: 2025-11-24 10:12:38.529 230014 DEBUG nova.compute.resource_tracker [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 24 10:12:38 compute-1 nova_compute[230010]: 2025-11-24 10:12:38.530 230014 DEBUG nova.compute.resource_tracker [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 24 10:12:38 compute-1 nova_compute[230010]: 2025-11-24 10:12:38.547 230014 DEBUG oslo_concurrency.processutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 10:12:38 compute-1 ceph-mon[80009]: from='client.? 192.168.122.101:0/1011159927' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 10:12:38 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:12:38 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:12:38 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:12:38.724 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:12:39 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 24 10:12:39 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1739914990' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 10:12:39 compute-1 nova_compute[230010]: 2025-11-24 10:12:39.021 230014 DEBUG oslo_concurrency.processutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.475s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 10:12:39 compute-1 nova_compute[230010]: 2025-11-24 10:12:39.028 230014 DEBUG nova.compute.provider_tree [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Inventory has not changed in ProviderTree for provider: 1b7b0f22-dba8-42a8-9de3-763c9152946e update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 24 10:12:39 compute-1 nova_compute[230010]: 2025-11-24 10:12:39.046 230014 DEBUG nova.scheduler.client.report [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Inventory has not changed for provider 1b7b0f22-dba8-42a8-9de3-763c9152946e based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 24 10:12:39 compute-1 nova_compute[230010]: 2025-11-24 10:12:39.049 230014 DEBUG nova.compute.resource_tracker [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 24 10:12:39 compute-1 nova_compute[230010]: 2025-11-24 10:12:39.049 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.608s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 10:12:39 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:12:39 compute-1 ceph-mon[80009]: pgmap v1380: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:12:39 compute-1 ceph-mon[80009]: from='client.? 192.168.122.101:0/1739914990' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 10:12:39 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:12:39 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:12:39 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:12:39.601 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:12:39 compute-1 nova_compute[230010]: 2025-11-24 10:12:39.766 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 10:12:39 compute-1 nova_compute[230010]: 2025-11-24 10:12:39.767 230014 DEBUG nova.compute.manager [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Nov 24 10:12:40 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:12:40 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:12:40 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:12:40.727 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:12:41 compute-1 podman[254590]: 2025-11-24 10:12:41.30820167 +0000 UTC m=+0.049452213 container health_status 6794d9d2f16f865643977633ea2bfec0506af6c4f15262c278b71a4c0754ea0b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0)
Nov 24 10:12:41 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:12:41 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:12:41 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:12:41.602 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:12:41 compute-1 ceph-mon[80009]: pgmap v1381: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:12:41 compute-1 nova_compute[230010]: 2025-11-24 10:12:41.721 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:12:42 compute-1 nova_compute[230010]: 2025-11-24 10:12:42.042 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:12:42 compute-1 ceph-mon[80009]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #76. Immutable memtables: 0.
Nov 24 10:12:42 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-10:12:42.308441) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 24 10:12:42 compute-1 ceph-mon[80009]: rocksdb: [db/flush_job.cc:856] [default] [JOB 45] Flushing memtable with next log file: 76
Nov 24 10:12:42 compute-1 ceph-mon[80009]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763979162308495, "job": 45, "event": "flush_started", "num_memtables": 1, "num_entries": 1515, "num_deletes": 255, "total_data_size": 3762434, "memory_usage": 3808040, "flush_reason": "Manual Compaction"}
Nov 24 10:12:42 compute-1 ceph-mon[80009]: rocksdb: [db/flush_job.cc:885] [default] [JOB 45] Level-0 flush table #77: started
Nov 24 10:12:42 compute-1 ceph-mon[80009]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763979162322123, "cf_name": "default", "job": 45, "event": "table_file_creation", "file_number": 77, "file_size": 2458052, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 39453, "largest_seqno": 40963, "table_properties": {"data_size": 2451627, "index_size": 3624, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1733, "raw_key_size": 13693, "raw_average_key_size": 19, "raw_value_size": 2438570, "raw_average_value_size": 3539, "num_data_blocks": 156, "num_entries": 689, "num_filter_entries": 689, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763979037, "oldest_key_time": 1763979037, "file_creation_time": 1763979162, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "299c38d0-06ca-4074-b462-97cee3c14bc3", "db_session_id": "IKBI0BILOO7CZC90TSBP", "orig_file_number": 77, "seqno_to_time_mapping": "N/A"}}
Nov 24 10:12:42 compute-1 ceph-mon[80009]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 45] Flush lasted 13726 microseconds, and 5391 cpu microseconds.
Nov 24 10:12:42 compute-1 ceph-mon[80009]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 24 10:12:42 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-10:12:42.322173) [db/flush_job.cc:967] [default] [JOB 45] Level-0 flush table #77: 2458052 bytes OK
Nov 24 10:12:42 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-10:12:42.322196) [db/memtable_list.cc:519] [default] Level-0 commit table #77 started
Nov 24 10:12:42 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-10:12:42.323880) [db/memtable_list.cc:722] [default] Level-0 commit table #77: memtable #1 done
Nov 24 10:12:42 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-10:12:42.323893) EVENT_LOG_v1 {"time_micros": 1763979162323890, "job": 45, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 24 10:12:42 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-10:12:42.323914) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 24 10:12:42 compute-1 ceph-mon[80009]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 45] Try to delete WAL files size 3755342, prev total WAL file size 3755342, number of live WAL files 2.
Nov 24 10:12:42 compute-1 ceph-mon[80009]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000073.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 10:12:42 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-10:12:42.324914) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0031303033' seq:72057594037927935, type:22 .. '6C6F676D0031323534' seq:0, type:0; will stop at (end)
Nov 24 10:12:42 compute-1 ceph-mon[80009]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 46] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 24 10:12:42 compute-1 ceph-mon[80009]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 45 Base level 0, inputs: [77(2400KB)], [75(11MB)]
Nov 24 10:12:42 compute-1 ceph-mon[80009]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763979162324945, "job": 46, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [77], "files_L6": [75], "score": -1, "input_data_size": 15010669, "oldest_snapshot_seqno": -1}
Nov 24 10:12:42 compute-1 ceph-mon[80009]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 46] Generated table #78: 6899 keys, 14849424 bytes, temperature: kUnknown
Nov 24 10:12:42 compute-1 ceph-mon[80009]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763979162405366, "cf_name": "default", "job": 46, "event": "table_file_creation", "file_number": 78, "file_size": 14849424, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 14803871, "index_size": 27201, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 17285, "raw_key_size": 181377, "raw_average_key_size": 26, "raw_value_size": 14680182, "raw_average_value_size": 2127, "num_data_blocks": 1072, "num_entries": 6899, "num_filter_entries": 6899, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763976422, "oldest_key_time": 0, "file_creation_time": 1763979162, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "299c38d0-06ca-4074-b462-97cee3c14bc3", "db_session_id": "IKBI0BILOO7CZC90TSBP", "orig_file_number": 78, "seqno_to_time_mapping": "N/A"}}
Nov 24 10:12:42 compute-1 ceph-mon[80009]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 24 10:12:42 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-10:12:42.405918) [db/compaction/compaction_job.cc:1663] [default] [JOB 46] Compacted 1@0 + 1@6 files to L6 => 14849424 bytes
Nov 24 10:12:42 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-10:12:42.410086) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 186.1 rd, 184.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.3, 12.0 +0.0 blob) out(14.2 +0.0 blob), read-write-amplify(12.1) write-amplify(6.0) OK, records in: 7427, records dropped: 528 output_compression: NoCompression
Nov 24 10:12:42 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-10:12:42.410126) EVENT_LOG_v1 {"time_micros": 1763979162410108, "job": 46, "event": "compaction_finished", "compaction_time_micros": 80640, "compaction_time_cpu_micros": 28525, "output_level": 6, "num_output_files": 1, "total_output_size": 14849424, "num_input_records": 7427, "num_output_records": 6899, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 24 10:12:42 compute-1 ceph-mon[80009]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000077.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 10:12:42 compute-1 ceph-mon[80009]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763979162411551, "job": 46, "event": "table_file_deletion", "file_number": 77}
Nov 24 10:12:42 compute-1 ceph-mon[80009]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000075.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 10:12:42 compute-1 ceph-mon[80009]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763979162416937, "job": 46, "event": "table_file_deletion", "file_number": 75}
Nov 24 10:12:42 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-10:12:42.324832) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 10:12:42 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-10:12:42.417110) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 10:12:42 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-10:12:42.417129) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 10:12:42 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-10:12:42.417132) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 10:12:42 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-10:12:42.417134) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 10:12:42 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-10:12:42.417136) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 10:12:42 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:12:42 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:12:42 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:12:42.731 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:12:43 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:12:43 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:12:43 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:12:43.605 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:12:43 compute-1 ceph-mon[80009]: pgmap v1382: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:12:44 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:12:44 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:12:44 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 10:12:44 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:12:44.734 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 10:12:45 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 10:12:45 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 10:12:45 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:12:45 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:12:45 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:12:45.608 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:12:45 compute-1 ceph-mon[80009]: pgmap v1383: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:12:45 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 10:12:46 compute-1 nova_compute[230010]: 2025-11-24 10:12:46.724 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:12:46 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:12:46 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:12:46 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:12:46.738 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:12:46 compute-1 ceph-mon[80009]: pgmap v1384: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:12:47 compute-1 nova_compute[230010]: 2025-11-24 10:12:47.044 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:12:47 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:12:47 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:12:47 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:12:47.611 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:12:48 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:12:48 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:12:48 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:12:48.741 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:12:49 compute-1 ceph-mon[80009]: pgmap v1385: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:12:49 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:12:49 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:12:49 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:12:49 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:12:49.614 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:12:50 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:12:50 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:12:50 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:12:50.744 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:12:51 compute-1 ceph-mon[80009]: pgmap v1386: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:12:51 compute-1 sudo[254615]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 10:12:51 compute-1 sudo[254615]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 10:12:51 compute-1 sudo[254615]: pam_unix(sudo:session): session closed for user root
Nov 24 10:12:51 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:12:51 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 10:12:51 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:12:51.618 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 10:12:51 compute-1 nova_compute[230010]: 2025-11-24 10:12:51.727 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:12:52 compute-1 nova_compute[230010]: 2025-11-24 10:12:52.046 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:12:52 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:12:52 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:12:52 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:12:52.748 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:12:53 compute-1 ceph-mon[80009]: pgmap v1387: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:12:53 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:12:53 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:12:53 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:12:53.620 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:12:53 compute-1 sudo[254641]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 10:12:53 compute-1 sudo[254641]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 10:12:53 compute-1 sudo[254641]: pam_unix(sudo:session): session closed for user root
Nov 24 10:12:53 compute-1 sudo[254666]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Nov 24 10:12:53 compute-1 sudo[254666]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 10:12:54 compute-1 sudo[254666]: pam_unix(sudo:session): session closed for user root
Nov 24 10:12:54 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 24 10:12:54 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 10:12:54 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Nov 24 10:12:54 compute-1 ceph-mon[80009]: log_channel(audit) log [INF] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 10:12:54 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Nov 24 10:12:54 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Nov 24 10:12:54 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Nov 24 10:12:54 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 10:12:54 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Nov 24 10:12:54 compute-1 ceph-mon[80009]: log_channel(audit) log [INF] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 10:12:54 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 24 10:12:54 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 10:12:54 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:12:54 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:12:54 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:12:54 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:12:54.753 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:12:55 compute-1 ceph-mon[80009]: pgmap v1388: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:12:55 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 10:12:55 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 10:12:55 compute-1 ceph-mon[80009]: pgmap v1389: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Nov 24 10:12:55 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 10:12:55 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 10:12:55 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 10:12:55 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 10:12:55 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 10:12:55 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:12:55 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:12:55 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:12:55.623 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:12:56 compute-1 nova_compute[230010]: 2025-11-24 10:12:56.729 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:12:56 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:12:56 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:12:56 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:12:56.758 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:12:57 compute-1 nova_compute[230010]: 2025-11-24 10:12:57.050 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:12:57 compute-1 sshd-session[254725]: Invalid user polkadot from 80.94.92.165 port 33168
Nov 24 10:12:57 compute-1 sshd-session[254725]: Connection closed by invalid user polkadot 80.94.92.165 port 33168 [preauth]
Nov 24 10:12:57 compute-1 ceph-mon[80009]: pgmap v1390: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Nov 24 10:12:57 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:12:57 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:12:57 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:12:57.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:12:58 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:12:58 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:12:58 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:12:58.762 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:12:58 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 24 10:12:58 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 24 10:12:59 compute-1 sudo[254728]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 24 10:12:59 compute-1 sudo[254728]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 10:12:59 compute-1 sudo[254728]: pam_unix(sudo:session): session closed for user root
Nov 24 10:12:59 compute-1 podman[254752]: 2025-11-24 10:12:59.150372166 +0000 UTC m=+0.067369113 container health_status 16798e42088c21440960ac0ba4f86339f9edab5c078277a1dcf4fb01c220329c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251118)
Nov 24 10:12:59 compute-1 ceph-mon[80009]: pgmap v1391: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Nov 24 10:12:59 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 10:12:59 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 10:12:59 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:12:59 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:12:59 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:12:59 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:12:59.629 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:13:00 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 10:13:00 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 10:13:00 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 10:13:00 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:13:00 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:13:00 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:13:00.766 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:13:01 compute-1 ceph-mon[80009]: pgmap v1392: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Nov 24 10:13:01 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:13:01 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:13:01 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:13:01.632 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:13:01 compute-1 nova_compute[230010]: 2025-11-24 10:13:01.730 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:13:01 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Nov 24 10:13:01 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3040130363' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 24 10:13:01 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Nov 24 10:13:01 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3040130363' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 24 10:13:02 compute-1 nova_compute[230010]: 2025-11-24 10:13:02.052 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:13:02 compute-1 ceph-mon[80009]: from='client.? 192.168.122.10:0/3040130363' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 24 10:13:02 compute-1 ceph-mon[80009]: from='client.? 192.168.122.10:0/3040130363' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 24 10:13:02 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:13:02 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:13:02 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:13:02.770 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:13:03 compute-1 ceph-mon[80009]: pgmap v1393: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Nov 24 10:13:03 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:13:03 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:13:03 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:13:03.634 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:13:04 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:13:04 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:13:04 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:13:04 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:13:04.773 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:13:05 compute-1 podman[254778]: 2025-11-24 10:13:05.370545563 +0000 UTC m=+0.109703021 container health_status c4a7fece2a8abbd773f184380ebf0443df5298e1c3e7a580495db5c46749acd2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251118, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 24 10:13:05 compute-1 ceph-mon[80009]: pgmap v1394: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Nov 24 10:13:05 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:13:05 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:13:05 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:13:05.636 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:13:06 compute-1 nova_compute[230010]: 2025-11-24 10:13:06.734 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:13:06 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:13:06 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:13:06 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:13:06.777 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:13:07 compute-1 nova_compute[230010]: 2025-11-24 10:13:07.054 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:13:07 compute-1 ceph-mon[80009]: pgmap v1395: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:13:07 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:13:07 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:13:07 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:13:07.637 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:13:08 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:13:08 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:13:08 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:13:08.780 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:13:09 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:13:09 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:13:09 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:13:09 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:13:09.639 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:13:09 compute-1 ceph-mon[80009]: pgmap v1396: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:13:10 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:13:10 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:13:10 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:13:10.784 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:13:11 compute-1 sudo[254807]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 10:13:11 compute-1 sudo[254807]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 10:13:11 compute-1 sudo[254807]: pam_unix(sudo:session): session closed for user root
Nov 24 10:13:11 compute-1 podman[254831]: 2025-11-24 10:13:11.491134847 +0000 UTC m=+0.077589643 container health_status 6794d9d2f16f865643977633ea2bfec0506af6c4f15262c278b71a4c0754ea0b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 24 10:13:11 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:13:11 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:13:11 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:13:11.642 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:13:11 compute-1 ceph-mon[80009]: pgmap v1397: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:13:11 compute-1 nova_compute[230010]: 2025-11-24 10:13:11.735 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:13:12 compute-1 nova_compute[230010]: 2025-11-24 10:13:12.056 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:13:12 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:13:12 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:13:12 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:13:12.789 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:13:13 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:13:13 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 10:13:13 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:13:13.644 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 10:13:13 compute-1 ceph-mon[80009]: pgmap v1398: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:13:14 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:13:14 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:13:14 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:13:14 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:13:14.791 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:13:15 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 10:13:15 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 10:13:15 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:13:15 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:13:15 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:13:15.647 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:13:15 compute-1 ceph-mon[80009]: pgmap v1399: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:13:15 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 10:13:16 compute-1 nova_compute[230010]: 2025-11-24 10:13:16.738 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:13:16 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:13:16 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:13:16 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:13:16.796 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:13:17 compute-1 nova_compute[230010]: 2025-11-24 10:13:17.058 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:13:17 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:13:17 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 10:13:17 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:13:17.650 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 10:13:17 compute-1 ceph-mon[80009]: pgmap v1400: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:13:18 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:13:18 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:13:18 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:13:18.800 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:13:19 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:13:19 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:13:19 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:13:19 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:13:19.652 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:13:19 compute-1 ceph-mon[80009]: pgmap v1401: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:13:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 10:13:20.081 142336 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 10:13:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 10:13:20.081 142336 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 10:13:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 10:13:20.081 142336 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 10:13:20 compute-1 ceph-mon[80009]: pgmap v1402: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:13:20 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:13:20 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:13:20 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:13:20.804 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:13:21 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:13:21 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:13:21 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:13:21.654 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:13:21 compute-1 nova_compute[230010]: 2025-11-24 10:13:21.741 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:13:22 compute-1 nova_compute[230010]: 2025-11-24 10:13:22.059 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:13:22 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:13:22 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:13:22 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:13:22.808 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:13:23 compute-1 ceph-mon[80009]: pgmap v1403: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:13:23 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:13:23 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:13:23 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:13:23.656 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:13:24 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:13:24 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:13:24 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:13:24 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:13:24.812 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:13:25 compute-1 ceph-mon[80009]: pgmap v1404: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:13:25 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:13:25 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:13:25 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:13:25.660 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:13:26 compute-1 nova_compute[230010]: 2025-11-24 10:13:26.743 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:13:26 compute-1 nova_compute[230010]: 2025-11-24 10:13:26.778 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 10:13:26 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:13:26 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:13:26 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:13:26.816 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:13:27 compute-1 nova_compute[230010]: 2025-11-24 10:13:27.061 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:13:27 compute-1 ceph-mon[80009]: pgmap v1405: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:13:27 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:13:27 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:13:27 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:13:27.662 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:13:28 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:13:28 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:13:28 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:13:28.820 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:13:29 compute-1 podman[254860]: 2025-11-24 10:13:29.318053279 +0000 UTC m=+0.058117177 container health_status 16798e42088c21440960ac0ba4f86339f9edab5c078277a1dcf4fb01c220329c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=multipathd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 24 10:13:29 compute-1 ceph-mon[80009]: pgmap v1406: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:13:29 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:13:29 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:13:29 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:13:29 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:13:29.665 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:13:29 compute-1 nova_compute[230010]: 2025-11-24 10:13:29.760 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 10:13:29 compute-1 nova_compute[230010]: 2025-11-24 10:13:29.764 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 10:13:29 compute-1 nova_compute[230010]: 2025-11-24 10:13:29.764 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 10:13:29 compute-1 nova_compute[230010]: 2025-11-24 10:13:29.764 230014 DEBUG nova.compute.manager [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 24 10:13:30 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 10:13:30 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 10:13:30 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 10:13:30 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:13:30 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:13:30 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:13:30.824 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:13:31 compute-1 sudo[254881]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 10:13:31 compute-1 sudo[254881]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 10:13:31 compute-1 sudo[254881]: pam_unix(sudo:session): session closed for user root
Nov 24 10:13:31 compute-1 ceph-mon[80009]: pgmap v1407: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:13:31 compute-1 ceph-mon[80009]: from='client.? 192.168.122.100:0/783884729' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 10:13:31 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:13:31 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:13:31 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:13:31.668 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:13:31 compute-1 nova_compute[230010]: 2025-11-24 10:13:31.743 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:13:32 compute-1 nova_compute[230010]: 2025-11-24 10:13:32.062 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:13:32 compute-1 ceph-mon[80009]: from='client.? 192.168.122.100:0/4283119851' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 10:13:32 compute-1 nova_compute[230010]: 2025-11-24 10:13:32.765 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 10:13:32 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:13:32 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:13:32 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:13:32.828 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:13:33 compute-1 ceph-mon[80009]: pgmap v1408: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:13:33 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:13:33 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:13:33 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:13:33.671 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:13:33 compute-1 nova_compute[230010]: 2025-11-24 10:13:33.759 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 10:13:34 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:13:34 compute-1 nova_compute[230010]: 2025-11-24 10:13:34.764 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 10:13:34 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:13:34 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:13:34 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:13:34.832 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:13:35 compute-1 ceph-mon[80009]: pgmap v1409: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:13:35 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:13:35 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:13:35 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:13:35.674 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:13:36 compute-1 sshd-session[254908]: Invalid user minima from 164.92.213.168 port 56262
Nov 24 10:13:36 compute-1 sshd-session[254908]: Connection closed by invalid user minima 164.92.213.168 port 56262 [preauth]
Nov 24 10:13:36 compute-1 podman[254911]: 2025-11-24 10:13:36.219448346 +0000 UTC m=+0.115468192 container health_status c4a7fece2a8abbd773f184380ebf0443df5298e1c3e7a580495db5c46749acd2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Nov 24 10:13:36 compute-1 ceph-mon[80009]: from='client.? 192.168.122.102:0/1530069259' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 10:13:36 compute-1 nova_compute[230010]: 2025-11-24 10:13:36.745 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:13:36 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:13:36 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:13:36 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:13:36.835 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:13:37 compute-1 nova_compute[230010]: 2025-11-24 10:13:37.063 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:13:37 compute-1 ceph-mon[80009]: pgmap v1410: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:13:37 compute-1 ceph-mon[80009]: from='client.? 192.168.122.102:0/4261055826' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 10:13:37 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:13:37 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:13:37 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:13:37.677 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:13:37 compute-1 nova_compute[230010]: 2025-11-24 10:13:37.764 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 10:13:37 compute-1 nova_compute[230010]: 2025-11-24 10:13:37.765 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 10:13:37 compute-1 nova_compute[230010]: 2025-11-24 10:13:37.784 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 10:13:37 compute-1 nova_compute[230010]: 2025-11-24 10:13:37.784 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 10:13:37 compute-1 nova_compute[230010]: 2025-11-24 10:13:37.784 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 10:13:37 compute-1 nova_compute[230010]: 2025-11-24 10:13:37.785 230014 DEBUG nova.compute.resource_tracker [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 24 10:13:37 compute-1 nova_compute[230010]: 2025-11-24 10:13:37.785 230014 DEBUG oslo_concurrency.processutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 10:13:38 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 24 10:13:38 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3050926301' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 10:13:38 compute-1 nova_compute[230010]: 2025-11-24 10:13:38.210 230014 DEBUG oslo_concurrency.processutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.425s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 10:13:38 compute-1 nova_compute[230010]: 2025-11-24 10:13:38.401 230014 WARNING nova.virt.libvirt.driver [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 24 10:13:38 compute-1 nova_compute[230010]: 2025-11-24 10:13:38.403 230014 DEBUG nova.compute.resource_tracker [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4878MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 24 10:13:38 compute-1 nova_compute[230010]: 2025-11-24 10:13:38.404 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 10:13:38 compute-1 nova_compute[230010]: 2025-11-24 10:13:38.404 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 10:13:38 compute-1 nova_compute[230010]: 2025-11-24 10:13:38.463 230014 DEBUG nova.compute.resource_tracker [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 24 10:13:38 compute-1 nova_compute[230010]: 2025-11-24 10:13:38.464 230014 DEBUG nova.compute.resource_tracker [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 24 10:13:38 compute-1 nova_compute[230010]: 2025-11-24 10:13:38.486 230014 DEBUG oslo_concurrency.processutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 10:13:38 compute-1 ceph-mon[80009]: from='client.? 192.168.122.101:0/3050926301' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 10:13:38 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:13:38 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:13:38 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:13:38.839 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:13:38 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 24 10:13:38 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4194534558' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 10:13:38 compute-1 nova_compute[230010]: 2025-11-24 10:13:38.977 230014 DEBUG oslo_concurrency.processutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.491s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 10:13:38 compute-1 nova_compute[230010]: 2025-11-24 10:13:38.981 230014 DEBUG nova.compute.provider_tree [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Inventory has not changed in ProviderTree for provider: 1b7b0f22-dba8-42a8-9de3-763c9152946e update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 24 10:13:38 compute-1 nova_compute[230010]: 2025-11-24 10:13:38.998 230014 DEBUG nova.scheduler.client.report [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Inventory has not changed for provider 1b7b0f22-dba8-42a8-9de3-763c9152946e based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 24 10:13:38 compute-1 nova_compute[230010]: 2025-11-24 10:13:38.999 230014 DEBUG nova.compute.resource_tracker [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 24 10:13:38 compute-1 nova_compute[230010]: 2025-11-24 10:13:38.999 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.595s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 10:13:39 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:13:39 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:13:39 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:13:39 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:13:39.680 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:13:39 compute-1 ceph-mon[80009]: pgmap v1411: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:13:39 compute-1 ceph-mon[80009]: from='client.? 192.168.122.101:0/4194534558' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 10:13:40 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:13:40 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 10:13:40 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:13:40.843 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 10:13:41 compute-1 nova_compute[230010]: 2025-11-24 10:13:41.001 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 10:13:41 compute-1 nova_compute[230010]: 2025-11-24 10:13:41.001 230014 DEBUG nova.compute.manager [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 24 10:13:41 compute-1 nova_compute[230010]: 2025-11-24 10:13:41.002 230014 DEBUG nova.compute.manager [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 24 10:13:41 compute-1 nova_compute[230010]: 2025-11-24 10:13:41.017 230014 DEBUG nova.compute.manager [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 24 10:13:41 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:13:41 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:13:41 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:13:41.683 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:13:41 compute-1 ceph-mon[80009]: pgmap v1412: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:13:41 compute-1 nova_compute[230010]: 2025-11-24 10:13:41.747 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:13:42 compute-1 nova_compute[230010]: 2025-11-24 10:13:42.065 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:13:42 compute-1 podman[254984]: 2025-11-24 10:13:42.341702221 +0000 UTC m=+0.070231353 container health_status 6794d9d2f16f865643977633ea2bfec0506af6c4f15262c278b71a4c0754ea0b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118)
Nov 24 10:13:42 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:13:42 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:13:42 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:13:42.847 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:13:43 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:13:43 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:13:43 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:13:43.687 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:13:43 compute-1 ceph-mon[80009]: pgmap v1413: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:13:44 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:13:44 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:13:44 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 10:13:44 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:13:44.850 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 10:13:45 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 10:13:45 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 10:13:45 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:13:45 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:13:45 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:13:45.689 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:13:45 compute-1 ceph-mon[80009]: pgmap v1414: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:13:45 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 10:13:46 compute-1 nova_compute[230010]: 2025-11-24 10:13:46.749 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:13:46 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:13:46 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:13:46 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:13:46.854 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:13:47 compute-1 nova_compute[230010]: 2025-11-24 10:13:47.067 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:13:47 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:13:47 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:13:47 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:13:47.692 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:13:47 compute-1 ceph-mon[80009]: pgmap v1415: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:13:48 compute-1 ceph-mon[80009]: pgmap v1416: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:13:48 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:13:48 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:13:48 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:13:48.858 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:13:49 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:13:49 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:13:49 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:13:49 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:13:49.694 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:13:50 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:13:50 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:13:50 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:13:50.862 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:13:51 compute-1 ceph-mon[80009]: pgmap v1417: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:13:51 compute-1 sudo[255007]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 10:13:51 compute-1 sudo[255007]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 10:13:51 compute-1 sudo[255007]: pam_unix(sudo:session): session closed for user root
Nov 24 10:13:51 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:13:51 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:13:51 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:13:51.696 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:13:51 compute-1 nova_compute[230010]: 2025-11-24 10:13:51.753 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:13:52 compute-1 nova_compute[230010]: 2025-11-24 10:13:52.069 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:13:52 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:13:52 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:13:52 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:13:52.866 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:13:53 compute-1 ceph-mon[80009]: pgmap v1418: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:13:53 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:13:53 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:13:53 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:13:53.698 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:13:54 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:13:54 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:13:54 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:13:54 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:13:54.871 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:13:55 compute-1 ceph-mon[80009]: pgmap v1419: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:13:55 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:13:55 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:13:55 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:13:55.701 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:13:56 compute-1 nova_compute[230010]: 2025-11-24 10:13:56.819 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:13:56 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:13:56 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:13:56 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:13:56.875 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:13:57 compute-1 nova_compute[230010]: 2025-11-24 10:13:57.072 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:13:57 compute-1 ceph-mon[80009]: pgmap v1420: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:13:57 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:13:57 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:13:57 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:13:57.703 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:13:58 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:13:58 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:13:58 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:13:58.878 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:13:59 compute-1 sudo[255036]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 10:13:59 compute-1 sudo[255036]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 10:13:59 compute-1 sudo[255036]: pam_unix(sudo:session): session closed for user root
Nov 24 10:13:59 compute-1 sudo[255061]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Nov 24 10:13:59 compute-1 sudo[255061]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 10:13:59 compute-1 podman[255085]: 2025-11-24 10:13:59.411489078 +0000 UTC m=+0.053357579 container health_status 16798e42088c21440960ac0ba4f86339f9edab5c078277a1dcf4fb01c220329c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Nov 24 10:13:59 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:13:59 compute-1 ceph-mon[80009]: pgmap v1421: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:13:59 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:13:59 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:13:59 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:13:59.706 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:13:59 compute-1 sudo[255061]: pam_unix(sudo:session): session closed for user root
Nov 24 10:13:59 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 24 10:13:59 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 10:13:59 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Nov 24 10:13:59 compute-1 ceph-mon[80009]: log_channel(audit) log [INF] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 10:13:59 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Nov 24 10:13:59 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Nov 24 10:13:59 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Nov 24 10:13:59 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 10:13:59 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Nov 24 10:13:59 compute-1 ceph-mon[80009]: log_channel(audit) log [INF] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 10:13:59 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 24 10:13:59 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 10:14:00 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 10:14:00 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 10:14:00 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 10:14:00 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 10:14:00 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 10:14:00 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 10:14:00 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 10:14:00 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 10:14:00 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 10:14:00 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 10:14:00 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:14:00 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 10:14:00 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:14:00.882 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 10:14:01 compute-1 ceph-mon[80009]: pgmap v1422: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Nov 24 10:14:01 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:14:01 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:14:01 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:14:01.708 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:14:01 compute-1 nova_compute[230010]: 2025-11-24 10:14:01.821 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:14:02 compute-1 nova_compute[230010]: 2025-11-24 10:14:02.074 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:14:02 compute-1 ceph-mon[80009]: from='client.? 192.168.122.10:0/1342479791' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 24 10:14:02 compute-1 ceph-mon[80009]: from='client.? 192.168.122.10:0/1342479791' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 24 10:14:02 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:14:02 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:14:02 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:14:02.886 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:14:03 compute-1 ceph-mon[80009]: pgmap v1423: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.6 KiB/s rd, 1 op/s
Nov 24 10:14:03 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:14:03 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:14:03 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:14:03.711 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:14:04 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:14:04 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 24 10:14:04 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 24 10:14:04 compute-1 ceph-mon[80009]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #79. Immutable memtables: 0.
Nov 24 10:14:04 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-10:14:04.853786) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 24 10:14:04 compute-1 ceph-mon[80009]: rocksdb: [db/flush_job.cc:856] [default] [JOB 47] Flushing memtable with next log file: 79
Nov 24 10:14:04 compute-1 ceph-mon[80009]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763979244853817, "job": 47, "event": "flush_started", "num_memtables": 1, "num_entries": 1085, "num_deletes": 251, "total_data_size": 2420216, "memory_usage": 2461328, "flush_reason": "Manual Compaction"}
Nov 24 10:14:04 compute-1 ceph-mon[80009]: rocksdb: [db/flush_job.cc:885] [default] [JOB 47] Level-0 flush table #80: started
Nov 24 10:14:04 compute-1 ceph-mon[80009]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763979244867041, "cf_name": "default", "job": 47, "event": "table_file_creation", "file_number": 80, "file_size": 1591187, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 40968, "largest_seqno": 42048, "table_properties": {"data_size": 1586358, "index_size": 2353, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1413, "raw_key_size": 10689, "raw_average_key_size": 19, "raw_value_size": 1576631, "raw_average_value_size": 2925, "num_data_blocks": 103, "num_entries": 539, "num_filter_entries": 539, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763979162, "oldest_key_time": 1763979162, "file_creation_time": 1763979244, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "299c38d0-06ca-4074-b462-97cee3c14bc3", "db_session_id": "IKBI0BILOO7CZC90TSBP", "orig_file_number": 80, "seqno_to_time_mapping": "N/A"}}
Nov 24 10:14:04 compute-1 ceph-mon[80009]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 47] Flush lasted 13337 microseconds, and 4219 cpu microseconds.
Nov 24 10:14:04 compute-1 ceph-mon[80009]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 24 10:14:04 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-10:14:04.867123) [db/flush_job.cc:967] [default] [JOB 47] Level-0 flush table #80: 1591187 bytes OK
Nov 24 10:14:04 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-10:14:04.867147) [db/memtable_list.cc:519] [default] Level-0 commit table #80 started
Nov 24 10:14:04 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-10:14:04.868839) [db/memtable_list.cc:722] [default] Level-0 commit table #80: memtable #1 done
Nov 24 10:14:04 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-10:14:04.868855) EVENT_LOG_v1 {"time_micros": 1763979244868850, "job": 47, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 24 10:14:04 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-10:14:04.868878) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 24 10:14:04 compute-1 ceph-mon[80009]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 47] Try to delete WAL files size 2414878, prev total WAL file size 2414878, number of live WAL files 2.
Nov 24 10:14:04 compute-1 ceph-mon[80009]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000076.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 10:14:04 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-10:14:04.872037) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730033323633' seq:72057594037927935, type:22 .. '7061786F730033353135' seq:0, type:0; will stop at (end)
Nov 24 10:14:04 compute-1 ceph-mon[80009]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 48] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 24 10:14:04 compute-1 ceph-mon[80009]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 47 Base level 0, inputs: [80(1553KB)], [78(14MB)]
Nov 24 10:14:04 compute-1 ceph-mon[80009]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763979244872145, "job": 48, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [80], "files_L6": [78], "score": -1, "input_data_size": 16440611, "oldest_snapshot_seqno": -1}
Nov 24 10:14:04 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:14:04 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:14:04 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:14:04.890 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:14:04 compute-1 ceph-mon[80009]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 48] Generated table #81: 6922 keys, 14252441 bytes, temperature: kUnknown
Nov 24 10:14:04 compute-1 ceph-mon[80009]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763979244938985, "cf_name": "default", "job": 48, "event": "table_file_creation", "file_number": 81, "file_size": 14252441, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 14207712, "index_size": 26313, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 17349, "raw_key_size": 182554, "raw_average_key_size": 26, "raw_value_size": 14084537, "raw_average_value_size": 2034, "num_data_blocks": 1029, "num_entries": 6922, "num_filter_entries": 6922, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763976422, "oldest_key_time": 0, "file_creation_time": 1763979244, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "299c38d0-06ca-4074-b462-97cee3c14bc3", "db_session_id": "IKBI0BILOO7CZC90TSBP", "orig_file_number": 81, "seqno_to_time_mapping": "N/A"}}
Nov 24 10:14:04 compute-1 ceph-mon[80009]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 24 10:14:04 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-10:14:04.939484) [db/compaction/compaction_job.cc:1663] [default] [JOB 48] Compacted 1@0 + 1@6 files to L6 => 14252441 bytes
Nov 24 10:14:04 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-10:14:04.941570) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 245.4 rd, 212.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.5, 14.2 +0.0 blob) out(13.6 +0.0 blob), read-write-amplify(19.3) write-amplify(9.0) OK, records in: 7438, records dropped: 516 output_compression: NoCompression
Nov 24 10:14:04 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-10:14:04.941601) EVENT_LOG_v1 {"time_micros": 1763979244941586, "job": 48, "event": "compaction_finished", "compaction_time_micros": 66992, "compaction_time_cpu_micros": 30754, "output_level": 6, "num_output_files": 1, "total_output_size": 14252441, "num_input_records": 7438, "num_output_records": 6922, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 24 10:14:04 compute-1 ceph-mon[80009]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000080.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 10:14:04 compute-1 ceph-mon[80009]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763979244942636, "job": 48, "event": "table_file_deletion", "file_number": 80}
Nov 24 10:14:04 compute-1 ceph-mon[80009]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000078.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 10:14:04 compute-1 ceph-mon[80009]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763979244948316, "job": 48, "event": "table_file_deletion", "file_number": 78}
Nov 24 10:14:04 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-10:14:04.871900) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 10:14:04 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-10:14:04.948531) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 10:14:04 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-10:14:04.948537) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 10:14:04 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-10:14:04.948539) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 10:14:04 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-10:14:04.948541) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 10:14:04 compute-1 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-10:14:04.948542) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 10:14:05 compute-1 sudo[255140]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 24 10:14:05 compute-1 sudo[255140]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 10:14:05 compute-1 sudo[255140]: pam_unix(sudo:session): session closed for user root
Nov 24 10:14:05 compute-1 ceph-mon[80009]: pgmap v1424: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Nov 24 10:14:05 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 10:14:05 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 10:14:05 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:14:05 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:14:05 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:14:05.714 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:14:06 compute-1 podman[255166]: 2025-11-24 10:14:06.387268409 +0000 UTC m=+0.115139264 container health_status c4a7fece2a8abbd773f184380ebf0443df5298e1c3e7a580495db5c46749acd2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, config_id=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3)
Nov 24 10:14:06 compute-1 nova_compute[230010]: 2025-11-24 10:14:06.825 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:14:06 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:14:06 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:14:06 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:14:06.894 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:14:07 compute-1 nova_compute[230010]: 2025-11-24 10:14:07.076 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:14:07 compute-1 ceph-mon[80009]: pgmap v1425: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Nov 24 10:14:07 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:14:07 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:14:07 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:14:07.716 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:14:08 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:14:08 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:14:08 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:14:08.898 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:14:09 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:14:09 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:14:09 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:14:09 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:14:09.718 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:14:09 compute-1 ceph-mon[80009]: pgmap v1426: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.6 KiB/s rd, 1 op/s
Nov 24 10:14:10 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:14:10 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:14:10 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:14:10.902 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:14:11 compute-1 sudo[255195]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 10:14:11 compute-1 sudo[255195]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 10:14:11 compute-1 sudo[255195]: pam_unix(sudo:session): session closed for user root
Nov 24 10:14:11 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:14:11 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:14:11 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:14:11.721 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:14:11 compute-1 ceph-mon[80009]: pgmap v1427: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Nov 24 10:14:11 compute-1 nova_compute[230010]: 2025-11-24 10:14:11.827 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:14:12 compute-1 nova_compute[230010]: 2025-11-24 10:14:12.078 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:14:12 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:14:12 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:14:12 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:14:12.906 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:14:13 compute-1 podman[255221]: 2025-11-24 10:14:13.370065973 +0000 UTC m=+0.097563543 container health_status 6794d9d2f16f865643977633ea2bfec0506af6c4f15262c278b71a4c0754ea0b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.build-date=20251118)
Nov 24 10:14:13 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:14:13 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:14:13 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:14:13.724 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:14:13 compute-1 ceph-mon[80009]: pgmap v1428: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:14:14 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:14:14 compute-1 ceph-mon[80009]: pgmap v1429: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:14:14 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:14:14 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:14:14 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:14:14.908 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:14:15 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 10:14:15 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 10:14:15 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:14:15 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:14:15 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:14:15.728 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:14:15 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 10:14:16 compute-1 ceph-mon[80009]: pgmap v1430: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:14:16 compute-1 nova_compute[230010]: 2025-11-24 10:14:16.829 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:14:16 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:14:16 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:14:16 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:14:16.911 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:14:17 compute-1 nova_compute[230010]: 2025-11-24 10:14:17.079 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:14:17 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:14:17 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:14:17 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:14:17.734 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:14:18 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:14:18 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:14:18 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:14:18.913 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:14:19 compute-1 ceph-mon[80009]: pgmap v1431: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:14:19 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:14:19 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:14:19 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:14:19 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:14:19.737 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:14:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 10:14:20.082 142336 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 10:14:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 10:14:20.083 142336 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 10:14:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 10:14:20.083 142336 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 10:14:20 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:14:20 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:14:20 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:14:20.915 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:14:21 compute-1 ceph-mon[80009]: pgmap v1432: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:14:21 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:14:21 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:14:21 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:14:21.740 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:14:21 compute-1 nova_compute[230010]: 2025-11-24 10:14:21.831 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:14:22 compute-1 nova_compute[230010]: 2025-11-24 10:14:22.080 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:14:22 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:14:22 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:14:22 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:14:22.918 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:14:23 compute-1 ceph-mon[80009]: pgmap v1433: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:14:23 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:14:23 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:14:23 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:14:23.742 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:14:24 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:14:24 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:14:24 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:14:24 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:14:24.920 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:14:25 compute-1 ceph-mon[80009]: pgmap v1434: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:14:25 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:14:25 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:14:25 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:14:25.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:14:26 compute-1 nova_compute[230010]: 2025-11-24 10:14:26.831 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:14:26 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:14:26 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:14:26 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:14:26.923 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:14:27 compute-1 ceph-mon[80009]: pgmap v1435: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:14:27 compute-1 nova_compute[230010]: 2025-11-24 10:14:27.082 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:14:27 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:14:27 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 10:14:27 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:14:27.748 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 10:14:28 compute-1 nova_compute[230010]: 2025-11-24 10:14:28.764 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 10:14:28 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:14:28 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.003000071s ======
Nov 24 10:14:28 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:14:28.926 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.003000071s
Nov 24 10:14:29 compute-1 ceph-mon[80009]: pgmap v1436: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:14:29 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:14:29 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:14:29 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:14:29 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:14:29.751 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:14:29 compute-1 nova_compute[230010]: 2025-11-24 10:14:29.765 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 10:14:30 compute-1 podman[255250]: 2025-11-24 10:14:30.32560661 +0000 UTC m=+0.065002164 container health_status 16798e42088c21440960ac0ba4f86339f9edab5c078277a1dcf4fb01c220329c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd)
Nov 24 10:14:30 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 10:14:30 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 10:14:30 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:14:30 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:14:30 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:14:30.932 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:14:31 compute-1 ceph-mon[80009]: pgmap v1437: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:14:31 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 10:14:31 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:14:31 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:14:31 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:14:31.753 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:14:31 compute-1 nova_compute[230010]: 2025-11-24 10:14:31.759 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 10:14:31 compute-1 nova_compute[230010]: 2025-11-24 10:14:31.764 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 10:14:31 compute-1 nova_compute[230010]: 2025-11-24 10:14:31.764 230014 DEBUG nova.compute.manager [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 24 10:14:31 compute-1 sudo[255270]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 10:14:31 compute-1 sudo[255270]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 10:14:31 compute-1 sudo[255270]: pam_unix(sudo:session): session closed for user root
Nov 24 10:14:31 compute-1 nova_compute[230010]: 2025-11-24 10:14:31.833 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:14:32 compute-1 nova_compute[230010]: 2025-11-24 10:14:32.084 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:14:32 compute-1 ceph-mon[80009]: from='client.? 192.168.122.100:0/552573425' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 10:14:32 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:14:32 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:14:32 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:14:32.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:14:33 compute-1 ceph-mon[80009]: pgmap v1438: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:14:33 compute-1 ceph-mon[80009]: from='client.? 192.168.122.100:0/1896160492' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 10:14:33 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:14:33 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:14:33 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:14:33.756 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:14:33 compute-1 nova_compute[230010]: 2025-11-24 10:14:33.764 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 10:14:34 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:14:34 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:14:34 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:14:34 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:14:34.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:14:35 compute-1 ceph-mon[80009]: pgmap v1439: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:14:35 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:14:35 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:14:35 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:14:35.758 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:14:35 compute-1 nova_compute[230010]: 2025-11-24 10:14:35.765 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 10:14:36 compute-1 nova_compute[230010]: 2025-11-24 10:14:36.835 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:14:36 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:14:36 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:14:36 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:14:36.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:14:37 compute-1 nova_compute[230010]: 2025-11-24 10:14:37.086 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:14:37 compute-1 ceph-mon[80009]: pgmap v1440: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:14:37 compute-1 ceph-mon[80009]: from='client.? 192.168.122.102:0/2311438900' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 10:14:37 compute-1 podman[255298]: 2025-11-24 10:14:37.359423705 +0000 UTC m=+0.094363385 container health_status c4a7fece2a8abbd773f184380ebf0443df5298e1c3e7a580495db5c46749acd2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Nov 24 10:14:37 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:14:37 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:14:37 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:14:37.761 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:14:38 compute-1 ceph-mon[80009]: from='client.? 192.168.122.102:0/1121257676' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 10:14:38 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:14:38 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:14:38 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:14:38.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:14:39 compute-1 ceph-mon[80009]: pgmap v1441: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:14:39 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:14:39 compute-1 nova_compute[230010]: 2025-11-24 10:14:39.765 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 10:14:39 compute-1 nova_compute[230010]: 2025-11-24 10:14:39.765 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 10:14:39 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:14:39 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:14:39 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:14:39.764 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:14:39 compute-1 nova_compute[230010]: 2025-11-24 10:14:39.784 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 10:14:39 compute-1 nova_compute[230010]: 2025-11-24 10:14:39.784 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 10:14:39 compute-1 nova_compute[230010]: 2025-11-24 10:14:39.785 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 10:14:39 compute-1 nova_compute[230010]: 2025-11-24 10:14:39.785 230014 DEBUG nova.compute.resource_tracker [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 24 10:14:39 compute-1 nova_compute[230010]: 2025-11-24 10:14:39.785 230014 DEBUG oslo_concurrency.processutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 10:14:40 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 24 10:14:40 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1493371527' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 10:14:40 compute-1 nova_compute[230010]: 2025-11-24 10:14:40.263 230014 DEBUG oslo_concurrency.processutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.478s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 10:14:40 compute-1 nova_compute[230010]: 2025-11-24 10:14:40.420 230014 WARNING nova.virt.libvirt.driver [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 24 10:14:40 compute-1 nova_compute[230010]: 2025-11-24 10:14:40.421 230014 DEBUG nova.compute.resource_tracker [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4894MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 24 10:14:40 compute-1 nova_compute[230010]: 2025-11-24 10:14:40.422 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 10:14:40 compute-1 nova_compute[230010]: 2025-11-24 10:14:40.422 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 10:14:40 compute-1 nova_compute[230010]: 2025-11-24 10:14:40.566 230014 DEBUG nova.compute.resource_tracker [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 24 10:14:40 compute-1 nova_compute[230010]: 2025-11-24 10:14:40.566 230014 DEBUG nova.compute.resource_tracker [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 24 10:14:40 compute-1 nova_compute[230010]: 2025-11-24 10:14:40.756 230014 DEBUG nova.scheduler.client.report [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Refreshing inventories for resource provider 1b7b0f22-dba8-42a8-9de3-763c9152946e _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Nov 24 10:14:40 compute-1 nova_compute[230010]: 2025-11-24 10:14:40.771 230014 DEBUG nova.scheduler.client.report [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Updating ProviderTree inventory for provider 1b7b0f22-dba8-42a8-9de3-763c9152946e from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Nov 24 10:14:40 compute-1 nova_compute[230010]: 2025-11-24 10:14:40.772 230014 DEBUG nova.compute.provider_tree [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Updating inventory in ProviderTree for provider 1b7b0f22-dba8-42a8-9de3-763c9152946e with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Nov 24 10:14:40 compute-1 nova_compute[230010]: 2025-11-24 10:14:40.785 230014 DEBUG nova.scheduler.client.report [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Refreshing aggregate associations for resource provider 1b7b0f22-dba8-42a8-9de3-763c9152946e, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Nov 24 10:14:40 compute-1 nova_compute[230010]: 2025-11-24 10:14:40.808 230014 DEBUG nova.scheduler.client.report [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Refreshing trait associations for resource provider 1b7b0f22-dba8-42a8-9de3-763c9152946e, traits: COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_F16C,HW_CPU_X86_AVX,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_SHA,COMPUTE_STORAGE_BUS_SATA,COMPUTE_RESCUE_BFV,HW_CPU_X86_ABM,HW_CPU_X86_BMI,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_AVX2,COMPUTE_STORAGE_BUS_USB,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_SSE41,COMPUTE_NODE,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_SSE4A,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_VOLUME_EXTEND,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_MMX,HW_CPU_X86_SSE,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_CLMUL,HW_CPU_X86_SSE2,HW_CPU_X86_AMD_SVM,HW_CPU_X86_SSE42,COMPUTE_ACCELERATORS,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_STORAGE_BUS_IDE,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_SVM,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_FMA3,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_BMI2,HW_CPU_X86_AESNI,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_TRUSTED_CERTS,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_STORAGE_BUS_FDC _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Nov 24 10:14:40 compute-1 nova_compute[230010]: 2025-11-24 10:14:40.824 230014 DEBUG oslo_concurrency.processutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 10:14:40 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:14:40 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 10:14:40 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:14:40.946 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 10:14:41 compute-1 ceph-mon[80009]: pgmap v1442: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:14:41 compute-1 ceph-mon[80009]: from='client.? 192.168.122.101:0/1493371527' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 10:14:41 compute-1 nova_compute[230010]: 2025-11-24 10:14:41.300 230014 DEBUG oslo_concurrency.processutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 10:14:41 compute-1 nova_compute[230010]: 2025-11-24 10:14:41.306 230014 DEBUG nova.compute.provider_tree [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Inventory has not changed in ProviderTree for provider: 1b7b0f22-dba8-42a8-9de3-763c9152946e update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 24 10:14:41 compute-1 nova_compute[230010]: 2025-11-24 10:14:41.319 230014 DEBUG nova.scheduler.client.report [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Inventory has not changed for provider 1b7b0f22-dba8-42a8-9de3-763c9152946e based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 24 10:14:41 compute-1 nova_compute[230010]: 2025-11-24 10:14:41.321 230014 DEBUG nova.compute.resource_tracker [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 24 10:14:41 compute-1 nova_compute[230010]: 2025-11-24 10:14:41.321 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.899s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 10:14:41 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:14:41 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:14:41 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:14:41.767 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:14:41 compute-1 nova_compute[230010]: 2025-11-24 10:14:41.889 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:14:42 compute-1 nova_compute[230010]: 2025-11-24 10:14:42.088 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:14:42 compute-1 ceph-mon[80009]: from='client.? 192.168.122.101:0/2401290326' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 10:14:42 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:14:42 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:14:42 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:14:42.949 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:14:43 compute-1 ceph-mon[80009]: pgmap v1443: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:14:43 compute-1 nova_compute[230010]: 2025-11-24 10:14:43.322 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 10:14:43 compute-1 nova_compute[230010]: 2025-11-24 10:14:43.322 230014 DEBUG nova.compute.manager [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 24 10:14:43 compute-1 nova_compute[230010]: 2025-11-24 10:14:43.322 230014 DEBUG nova.compute.manager [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 24 10:14:43 compute-1 nova_compute[230010]: 2025-11-24 10:14:43.338 230014 DEBUG nova.compute.manager [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 24 10:14:43 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:14:43 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:14:43 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:14:43.770 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:14:44 compute-1 podman[255372]: 2025-11-24 10:14:44.362465815 +0000 UTC m=+0.081696354 container health_status 6794d9d2f16f865643977633ea2bfec0506af6c4f15262c278b71a4c0754ea0b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Nov 24 10:14:44 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:14:44 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:14:44 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 10:14:44 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:14:44.952 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 10:14:45 compute-1 ceph-mon[80009]: pgmap v1444: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:14:45 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 10:14:45 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 10:14:45 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:14:45 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:14:45 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:14:45.773 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:14:46 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 10:14:46 compute-1 nova_compute[230010]: 2025-11-24 10:14:46.892 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:14:46 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:14:46 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:14:46 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:14:46.956 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:14:47 compute-1 nova_compute[230010]: 2025-11-24 10:14:47.090 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:14:47 compute-1 ceph-mon[80009]: pgmap v1445: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:14:47 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:14:47 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:14:47 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:14:47.776 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:14:48 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:14:48 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:14:48 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:14:48.959 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:14:49 compute-1 ceph-mon[80009]: pgmap v1446: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:14:49 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:14:49 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:14:49 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:14:49 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:14:49.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:14:50 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:14:50 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:14:50 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:14:50.962 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:14:51 compute-1 ceph-mon[80009]: pgmap v1447: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:14:51 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:14:51 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:14:51 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:14:51.781 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:14:51 compute-1 sudo[255393]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 10:14:51 compute-1 sudo[255393]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 10:14:51 compute-1 sudo[255393]: pam_unix(sudo:session): session closed for user root
Nov 24 10:14:51 compute-1 nova_compute[230010]: 2025-11-24 10:14:51.894 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:14:52 compute-1 nova_compute[230010]: 2025-11-24 10:14:52.092 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:14:52 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:14:52 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:14:52 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:14:52.966 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:14:53 compute-1 ceph-mon[80009]: pgmap v1448: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:14:53 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:14:53 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:14:53 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:14:53.784 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:14:54 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:14:54 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:14:54 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:14:54 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:14:54.968 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:14:55 compute-1 ceph-mon[80009]: pgmap v1449: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:14:55 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:14:55 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:14:55 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:14:55.787 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:14:56 compute-1 nova_compute[230010]: 2025-11-24 10:14:56.929 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:14:56 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:14:56 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:14:56 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:14:56.971 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:14:57 compute-1 nova_compute[230010]: 2025-11-24 10:14:57.095 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:14:57 compute-1 ceph-mon[80009]: pgmap v1450: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:14:57 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:14:57 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:14:57 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:14:57.791 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:14:58 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:14:58 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:14:58 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:14:58.976 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:14:59 compute-1 ceph-mon[80009]: pgmap v1451: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:14:59 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:14:59 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:14:59 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:14:59 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:14:59.794 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:15:00 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 10:15:00 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 10:15:00 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:15:00 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:15:00 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:15:00.979 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:15:01 compute-1 podman[255423]: 2025-11-24 10:15:01.323450736 +0000 UTC m=+0.057084611 container health_status 16798e42088c21440960ac0ba4f86339f9edab5c078277a1dcf4fb01c220329c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 24 10:15:01 compute-1 ceph-mon[80009]: pgmap v1452: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:15:01 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 10:15:01 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:15:01 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:15:01 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:15:01.797 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:15:01 compute-1 nova_compute[230010]: 2025-11-24 10:15:01.995 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:15:02 compute-1 nova_compute[230010]: 2025-11-24 10:15:02.096 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:15:02 compute-1 ceph-mon[80009]: from='client.? 192.168.122.10:0/2187607288' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 24 10:15:02 compute-1 ceph-mon[80009]: from='client.? 192.168.122.10:0/2187607288' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 24 10:15:02 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:15:02 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:15:02 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:15:02.982 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:15:03 compute-1 ceph-mon[80009]: pgmap v1453: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:15:03 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:15:03 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:15:03 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:15:03.800 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:15:04 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:15:04 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:15:04 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:15:04 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:15:04.986 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:15:05 compute-1 sudo[255445]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 10:15:05 compute-1 sudo[255445]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 10:15:05 compute-1 sudo[255445]: pam_unix(sudo:session): session closed for user root
Nov 24 10:15:05 compute-1 sudo[255470]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Nov 24 10:15:05 compute-1 sudo[255470]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 10:15:05 compute-1 ceph-mon[80009]: pgmap v1454: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:15:05 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:15:05 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:15:05 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:15:05.803 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:15:05 compute-1 sudo[255470]: pam_unix(sudo:session): session closed for user root
Nov 24 10:15:06 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:15:06 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:15:06 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:15:06.989 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:15:07 compute-1 nova_compute[230010]: 2025-11-24 10:15:07.098 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:15:07 compute-1 nova_compute[230010]: 2025-11-24 10:15:07.601 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:15:07 compute-1 ceph-mon[80009]: pgmap v1455: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:15:07 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:15:07 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:15:07 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:15:07.805 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:15:08 compute-1 podman[255529]: 2025-11-24 10:15:08.370265772 +0000 UTC m=+0.108340457 container health_status c4a7fece2a8abbd773f184380ebf0443df5298e1c3e7a580495db5c46749acd2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Nov 24 10:15:08 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 24 10:15:08 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:15:08 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:15:08 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:15:08.992 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:15:08 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 24 10:15:09 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:15:09 compute-1 ceph-mon[80009]: pgmap v1456: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:15:09 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 10:15:09 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 10:15:09 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 24 10:15:09 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 10:15:09 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Nov 24 10:15:09 compute-1 ceph-mon[80009]: log_channel(audit) log [INF] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 10:15:09 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Nov 24 10:15:09 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Nov 24 10:15:09 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Nov 24 10:15:09 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 10:15:09 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Nov 24 10:15:09 compute-1 ceph-mon[80009]: log_channel(audit) log [INF] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 10:15:09 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 24 10:15:09 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 10:15:09 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:15:09 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:15:09 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:15:09.807 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:15:10 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 10:15:10 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 10:15:10 compute-1 ceph-mon[80009]: pgmap v1457: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Nov 24 10:15:10 compute-1 ceph-mon[80009]: pgmap v1458: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Nov 24 10:15:10 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 10:15:10 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 10:15:10 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 10:15:10 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 10:15:10 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 10:15:10 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:15:10 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 10:15:10 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:15:10.995 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 10:15:11 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:15:11 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:15:11 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:15:11.810 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:15:11 compute-1 ceph-mon[80009]: pgmap v1459: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Nov 24 10:15:11 compute-1 sudo[255559]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 10:15:11 compute-1 sudo[255559]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 10:15:11 compute-1 sudo[255559]: pam_unix(sudo:session): session closed for user root
Nov 24 10:15:12 compute-1 nova_compute[230010]: 2025-11-24 10:15:12.103 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:15:12 compute-1 sshd-session[255557]: Invalid user solana from 80.94.92.165 port 35866
Nov 24 10:15:12 compute-1 sshd-session[255557]: Connection closed by invalid user solana 80.94.92.165 port 35866 [preauth]
Nov 24 10:15:12 compute-1 nova_compute[230010]: 2025-11-24 10:15:12.603 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:15:12 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:15:13 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:15:13 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:15:12.999 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:15:13 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:15:13 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:15:13 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:15:13.813 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:15:13 compute-1 ceph-mon[80009]: pgmap v1460: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Nov 24 10:15:14 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:15:15 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:15:15 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:15:15 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:15:15.002 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:15:15 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 24 10:15:15 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 24 10:15:15 compute-1 sudo[255587]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 24 10:15:15 compute-1 sudo[255587]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 10:15:15 compute-1 sudo[255587]: pam_unix(sudo:session): session closed for user root
Nov 24 10:15:15 compute-1 podman[255586]: 2025-11-24 10:15:15.337858923 +0000 UTC m=+0.069901894 container health_status 6794d9d2f16f865643977633ea2bfec0506af6c4f15262c278b71a4c0754ea0b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2)
Nov 24 10:15:15 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 10:15:15 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 10:15:15 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:15:15 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:15:15 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:15:15.815 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:15:16 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 10:15:16 compute-1 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 10:15:16 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 10:15:16 compute-1 ceph-mon[80009]: pgmap v1461: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Nov 24 10:15:17 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:15:17 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:15:17 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:15:17.006 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:15:17 compute-1 nova_compute[230010]: 2025-11-24 10:15:17.108 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:15:17 compute-1 nova_compute[230010]: 2025-11-24 10:15:17.605 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:15:17 compute-1 sshd-session[255633]: Accepted publickey for zuul from 192.168.122.10 port 39564 ssh2: ECDSA SHA256:MeSde0OmmlmFVnLWx/OKNxgeUUFhxUB3MA0eUyH5QEE
Nov 24 10:15:17 compute-1 systemd-logind[823]: New session 58 of user zuul.
Nov 24 10:15:17 compute-1 systemd[1]: Started Session 58 of User zuul.
Nov 24 10:15:17 compute-1 sshd-session[255633]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 24 10:15:17 compute-1 sudo[255637]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/bash -c 'rm -rf /var/tmp/sos-osp && mkdir /var/tmp/sos-osp && sos report --batch --all-logs --tmp-dir=/var/tmp/sos-osp  -p container,openstack_edpm,system,storage,virt'
Nov 24 10:15:17 compute-1 sudo[255637]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 10:15:17 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:15:17 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:15:17 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:15:17.818 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:15:17 compute-1 ceph-mon[80009]: pgmap v1462: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Nov 24 10:15:19 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:15:19 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:15:19 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:15:19.017 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:15:19 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:15:19 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:15:19 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:15:19 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:15:19.821 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:15:19 compute-1 ceph-mon[80009]: pgmap v1463: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Nov 24 10:15:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 10:15:20.083 142336 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 10:15:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 10:15:20.085 142336 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 10:15:20 compute-1 ovn_metadata_agent[142331]: 2025-11-24 10:15:20.085 142336 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 10:15:20 compute-1 ceph-mon[80009]: from='client.28031 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:15:20 compute-1 ceph-mon[80009]: from='client.26536 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:15:20 compute-1 ceph-mon[80009]: from='client.18744 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:15:20 compute-1 ceph-mon[80009]: from='client.28043 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:15:21 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:15:21 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:15:21 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:15:21.019 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:15:21 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "status"} v 0)
Nov 24 10:15:21 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3403419153' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Nov 24 10:15:21 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "status"} v 0)
Nov 24 10:15:21 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3055159028' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Nov 24 10:15:21 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:15:21 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:15:21 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:15:21.823 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:15:21 compute-1 ceph-mon[80009]: from='client.26548 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:15:21 compute-1 ceph-mon[80009]: from='client.18756 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:15:21 compute-1 ceph-mon[80009]: from='client.? 192.168.122.101:0/3403419153' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Nov 24 10:15:21 compute-1 ceph-mon[80009]: from='client.? 192.168.122.102:0/382031394' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Nov 24 10:15:21 compute-1 ceph-mon[80009]: from='client.? 192.168.122.100:0/3055159028' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Nov 24 10:15:21 compute-1 ceph-mon[80009]: pgmap v1464: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:15:22 compute-1 nova_compute[230010]: 2025-11-24 10:15:22.110 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:15:22 compute-1 nova_compute[230010]: 2025-11-24 10:15:22.606 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:15:23 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:15:23 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:15:23 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:15:23.022 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:15:23 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:15:23 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:15:23 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:15:23.826 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:15:24 compute-1 ovs-vsctl[255970]: ovs|00001|db_ctl_base|ERR|no key "dpdk-init" in Open_vSwitch record "." column other_config
Nov 24 10:15:25 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:15:25 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:15:25 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:15:25.024 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:15:25 compute-1 virtqemud[229578]: Failed to connect socket to '/var/run/libvirt/virtnetworkd-sock-ro': No such file or directory
Nov 24 10:15:25 compute-1 virtqemud[229578]: Failed to connect socket to '/var/run/libvirt/virtnwfilterd-sock-ro': No such file or directory
Nov 24 10:15:25 compute-1 virtqemud[229578]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Nov 24 10:15:25 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:15:25 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:15:25 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:15:25.828 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:15:27 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:15:27 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:15:27 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:15:27.027 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:15:27 compute-1 nova_compute[230010]: 2025-11-24 10:15:27.112 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:15:27 compute-1 ceph-mon[80009]: pgmap v1465: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:15:27 compute-1 nova_compute[230010]: 2025-11-24 10:15:27.608 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:15:27 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:15:27 compute-1 ceph-mds[85277]: mds.cephfs.compute-1.vpamdk asok_command: cache status {prefix=cache status} (starting...)
Nov 24 10:15:27 compute-1 ceph-mds[85277]: mds.cephfs.compute-1.vpamdk Can't run that command on an inactive MDS!
Nov 24 10:15:27 compute-1 ceph-mds[85277]: mds.cephfs.compute-1.vpamdk asok_command: client ls {prefix=client ls} (starting...)
Nov 24 10:15:27 compute-1 ceph-mds[85277]: mds.cephfs.compute-1.vpamdk Can't run that command on an inactive MDS!
Nov 24 10:15:27 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:15:27 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:15:27 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:15:27.831 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:15:27 compute-1 ceph-mon[80009]: pgmap v1466: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:15:27 compute-1 ceph-mon[80009]: from='client.? 192.168.122.102:0/1857747192' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Nov 24 10:15:27 compute-1 ceph-mon[80009]: from='client.? ' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Nov 24 10:15:27 compute-1 lvm[256365]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 24 10:15:27 compute-1 lvm[256365]: VG ceph_vg0 finished
Nov 24 10:15:28 compute-1 ceph-mds[85277]: mds.cephfs.compute-1.vpamdk asok_command: damage ls {prefix=damage ls} (starting...)
Nov 24 10:15:28 compute-1 ceph-mds[85277]: mds.cephfs.compute-1.vpamdk Can't run that command on an inactive MDS!
Nov 24 10:15:28 compute-1 ceph-mds[85277]: mds.cephfs.compute-1.vpamdk asok_command: dump loads {prefix=dump loads} (starting...)
Nov 24 10:15:28 compute-1 ceph-mds[85277]: mds.cephfs.compute-1.vpamdk Can't run that command on an inactive MDS!
Nov 24 10:15:28 compute-1 ceph-mds[85277]: mds.cephfs.compute-1.vpamdk asok_command: dump tree {prefix=dump tree,root=/} (starting...)
Nov 24 10:15:28 compute-1 ceph-mds[85277]: mds.cephfs.compute-1.vpamdk Can't run that command on an inactive MDS!
Nov 24 10:15:28 compute-1 nova_compute[230010]: 2025-11-24 10:15:28.764 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 10:15:28 compute-1 ceph-mds[85277]: mds.cephfs.compute-1.vpamdk asok_command: dump_blocked_ops {prefix=dump_blocked_ops} (starting...)
Nov 24 10:15:28 compute-1 ceph-mds[85277]: mds.cephfs.compute-1.vpamdk Can't run that command on an inactive MDS!
Nov 24 10:15:28 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "report"} v 0)
Nov 24 10:15:28 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1465121536' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Nov 24 10:15:28 compute-1 ceph-mon[80009]: from='client.26563 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:15:28 compute-1 ceph-mon[80009]: from='client.26575 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:15:28 compute-1 ceph-mon[80009]: pgmap v1467: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:15:28 compute-1 ceph-mon[80009]: from='client.26581 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:15:28 compute-1 ceph-mon[80009]: from='client.18774 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:15:28 compute-1 ceph-mon[80009]: from='client.28070 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:15:28 compute-1 ceph-mon[80009]: from='client.? 192.168.122.102:0/1973484189' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 10:15:28 compute-1 ceph-mon[80009]: from='client.26599 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:15:28 compute-1 ceph-mon[80009]: from='client.18783 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:15:28 compute-1 ceph-mon[80009]: from='client.28079 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:15:28 compute-1 ceph-mon[80009]: from='client.? 192.168.122.102:0/576920121' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Nov 24 10:15:28 compute-1 ceph-mon[80009]: from='client.? 192.168.122.102:0/2643511279' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Nov 24 10:15:28 compute-1 ceph-mon[80009]: from='client.? 192.168.122.100:0/1164800559' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Nov 24 10:15:28 compute-1 ceph-mon[80009]: from='client.? 192.168.122.101:0/1465121536' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Nov 24 10:15:28 compute-1 ceph-mon[80009]: from='client.? ' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Nov 24 10:15:28 compute-1 ceph-mds[85277]: mds.cephfs.compute-1.vpamdk asok_command: dump_historic_ops {prefix=dump_historic_ops} (starting...)
Nov 24 10:15:28 compute-1 ceph-mds[85277]: mds.cephfs.compute-1.vpamdk Can't run that command on an inactive MDS!
Nov 24 10:15:29 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:15:29 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:15:29 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:15:29.030 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:15:29 compute-1 ceph-mds[85277]: mds.cephfs.compute-1.vpamdk asok_command: dump_historic_ops_by_duration {prefix=dump_historic_ops_by_duration} (starting...)
Nov 24 10:15:29 compute-1 ceph-mds[85277]: mds.cephfs.compute-1.vpamdk Can't run that command on an inactive MDS!
Nov 24 10:15:29 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 24 10:15:29 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3317630471' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 10:15:29 compute-1 ceph-mds[85277]: mds.cephfs.compute-1.vpamdk asok_command: dump_ops_in_flight {prefix=dump_ops_in_flight} (starting...)
Nov 24 10:15:29 compute-1 ceph-mds[85277]: mds.cephfs.compute-1.vpamdk Can't run that command on an inactive MDS!
Nov 24 10:15:29 compute-1 ceph-mds[85277]: mds.cephfs.compute-1.vpamdk asok_command: get subtrees {prefix=get subtrees} (starting...)
Nov 24 10:15:29 compute-1 ceph-mds[85277]: mds.cephfs.compute-1.vpamdk Can't run that command on an inactive MDS!
Nov 24 10:15:29 compute-1 ceph-mds[85277]: mds.cephfs.compute-1.vpamdk asok_command: ops {prefix=ops} (starting...)
Nov 24 10:15:29 compute-1 ceph-mds[85277]: mds.cephfs.compute-1.vpamdk Can't run that command on an inactive MDS!
Nov 24 10:15:29 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config log"} v 0)
Nov 24 10:15:29 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2434305700' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Nov 24 10:15:29 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "log last", "channel": "cephadm"} v 0)
Nov 24 10:15:29 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2452454843' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Nov 24 10:15:29 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:15:29 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:15:29 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:15:29.832 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:15:29 compute-1 ceph-mon[80009]: from='client.18813 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:15:29 compute-1 ceph-mon[80009]: from='client.28100 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:15:29 compute-1 ceph-mon[80009]: from='client.? 192.168.122.102:0/2880740983' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Nov 24 10:15:29 compute-1 ceph-mon[80009]: from='client.? 192.168.122.100:0/1884848928' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 10:15:29 compute-1 ceph-mon[80009]: from='client.? 192.168.122.102:0/2551530414' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Nov 24 10:15:29 compute-1 ceph-mon[80009]: from='client.18828 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:15:29 compute-1 ceph-mon[80009]: from='client.? 192.168.122.101:0/3317630471' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 10:15:29 compute-1 ceph-mon[80009]: from='client.28121 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:15:29 compute-1 ceph-mon[80009]: from='client.26629 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:15:29 compute-1 ceph-mon[80009]: from='client.? 192.168.122.100:0/4054669363' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Nov 24 10:15:29 compute-1 ceph-mon[80009]: from='client.? 192.168.122.102:0/4254001673' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Nov 24 10:15:29 compute-1 ceph-mon[80009]: from='client.? 192.168.122.100:0/1760821900' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Nov 24 10:15:29 compute-1 ceph-mon[80009]: from='client.26641 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:15:29 compute-1 ceph-mon[80009]: pgmap v1468: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:15:29 compute-1 ceph-mon[80009]: from='client.? 192.168.122.101:0/2434305700' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Nov 24 10:15:29 compute-1 ceph-mon[80009]: from='client.? 192.168.122.101:0/2452454843' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Nov 24 10:15:30 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config-key dump"} v 0)
Nov 24 10:15:30 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1049383097' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Nov 24 10:15:30 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr dump"} v 0)
Nov 24 10:15:30 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2514280768' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Nov 24 10:15:30 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 10:15:30 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 10:15:30 compute-1 ceph-mds[85277]: mds.cephfs.compute-1.vpamdk asok_command: session ls {prefix=session ls} (starting...)
Nov 24 10:15:30 compute-1 ceph-mds[85277]: mds.cephfs.compute-1.vpamdk Can't run that command on an inactive MDS!
Nov 24 10:15:30 compute-1 ceph-mds[85277]: mds.cephfs.compute-1.vpamdk asok_command: status {prefix=status} (starting...)
Nov 24 10:15:30 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr metadata"} v 0)
Nov 24 10:15:30 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3079237173' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Nov 24 10:15:30 compute-1 ceph-mon[80009]: from='client.? 192.168.122.100:0/2421036100' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Nov 24 10:15:30 compute-1 ceph-mon[80009]: from='client.? 192.168.122.102:0/2676757163' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Nov 24 10:15:30 compute-1 ceph-mon[80009]: from='client.? 192.168.122.100:0/1934793391' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Nov 24 10:15:30 compute-1 ceph-mon[80009]: from='client.? ' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Nov 24 10:15:30 compute-1 ceph-mon[80009]: from='client.? 192.168.122.102:0/1888070657' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Nov 24 10:15:30 compute-1 ceph-mon[80009]: from='client.? 192.168.122.101:0/1049383097' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Nov 24 10:15:30 compute-1 ceph-mon[80009]: from='client.18864 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:15:30 compute-1 ceph-mon[80009]: from='client.? 192.168.122.101:0/2514280768' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Nov 24 10:15:30 compute-1 ceph-mon[80009]: from='client.? 192.168.122.102:0/368086163' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 24 10:15:30 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 10:15:30 compute-1 ceph-mon[80009]: from='client.? 192.168.122.100:0/609377997' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Nov 24 10:15:30 compute-1 ceph-mon[80009]: from='client.28166 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:15:30 compute-1 ceph-mon[80009]: from='client.18885 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:15:30 compute-1 ceph-mon[80009]: from='client.? 192.168.122.102:0/566080117' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Nov 24 10:15:30 compute-1 ceph-mon[80009]: from='client.? 192.168.122.101:0/3079237173' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Nov 24 10:15:30 compute-1 ceph-mon[80009]: from='client.? 192.168.122.102:0/1028027921' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Nov 24 10:15:31 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:15:31 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:15:31 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:15:31.033 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:15:31 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr module ls"} v 0)
Nov 24 10:15:31 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3687570939' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Nov 24 10:15:31 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr module ls"} v 0)
Nov 24 10:15:31 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3221653421' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Nov 24 10:15:31 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr services"} v 0)
Nov 24 10:15:31 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2214528181' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 24 10:15:31 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "features"} v 0)
Nov 24 10:15:31 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/549447527' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Nov 24 10:15:31 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr services"} v 0)
Nov 24 10:15:31 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2964773270' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 24 10:15:31 compute-1 nova_compute[230010]: 2025-11-24 10:15:31.759 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 10:15:31 compute-1 nova_compute[230010]: 2025-11-24 10:15:31.764 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 10:15:31 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:15:31 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:15:31 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:15:31.835 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:15:31 compute-1 ceph-mon[80009]: from='client.? 192.168.122.100:0/3687570939' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Nov 24 10:15:31 compute-1 ceph-mon[80009]: from='client.26695 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:15:31 compute-1 ceph-mon[80009]: from='client.28187 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:15:31 compute-1 ceph-mon[80009]: from='client.? 192.168.122.100:0/1808703927' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Nov 24 10:15:31 compute-1 ceph-mon[80009]: from='client.? 192.168.122.101:0/3221653421' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Nov 24 10:15:31 compute-1 ceph-mon[80009]: from='client.? 192.168.122.102:0/3817870708' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Nov 24 10:15:31 compute-1 ceph-mon[80009]: from='client.? 192.168.122.100:0/2915968578' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 10:15:31 compute-1 ceph-mon[80009]: from='client.? 192.168.122.102:0/1357936568' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Nov 24 10:15:31 compute-1 ceph-mon[80009]: from='client.? 192.168.122.100:0/2214528181' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 24 10:15:31 compute-1 ceph-mon[80009]: from='client.? 192.168.122.101:0/549447527' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Nov 24 10:15:31 compute-1 ceph-mon[80009]: from='client.? ' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Nov 24 10:15:31 compute-1 ceph-mon[80009]: from='client.? 192.168.122.100:0/3904833836' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Nov 24 10:15:31 compute-1 ceph-mon[80009]: from='client.? 192.168.122.101:0/2964773270' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 24 10:15:31 compute-1 ceph-mon[80009]: from='client.26725 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:15:31 compute-1 ceph-mon[80009]: pgmap v1469: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:15:32 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "health", "detail": "detail"} v 0)
Nov 24 10:15:32 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/315873122' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Nov 24 10:15:32 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr stat"} v 0)
Nov 24 10:15:32 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/173568047' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Nov 24 10:15:32 compute-1 nova_compute[230010]: 2025-11-24 10:15:32.115 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:15:32 compute-1 sudo[257001]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 10:15:32 compute-1 sudo[257001]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 10:15:32 compute-1 sudo[257001]: pam_unix(sudo:session): session closed for user root
Nov 24 10:15:32 compute-1 podman[257042]: 2025-11-24 10:15:32.213676694 +0000 UTC m=+0.076172438 container health_status 16798e42088c21440960ac0ba4f86339f9edab5c078277a1dcf4fb01c220329c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, org.label-schema.license=GPLv2, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251118)
Nov 24 10:15:32 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr versions"} v 0)
Nov 24 10:15:32 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3371992669' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Nov 24 10:15:32 compute-1 nova_compute[230010]: 2025-11-24 10:15:32.610 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:15:32 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:15:32 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} v 0)
Nov 24 10:15:32 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4241696948' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Nov 24 10:15:32 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr metadata"} v 0)
Nov 24 10:15:32 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/4157263929' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Nov 24 10:15:33 compute-1 ceph-mon[80009]: from='client.? 192.168.122.100:0/349487416' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Nov 24 10:15:33 compute-1 ceph-mon[80009]: from='client.18951 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:15:33 compute-1 ceph-mon[80009]: from='client.? 192.168.122.101:0/315873122' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Nov 24 10:15:33 compute-1 ceph-mon[80009]: from='client.? 192.168.122.102:0/3861438894' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Nov 24 10:15:33 compute-1 ceph-mon[80009]: from='client.26743 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:15:33 compute-1 ceph-mon[80009]: from='client.? 192.168.122.100:0/3496901554' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 10:15:33 compute-1 ceph-mon[80009]: from='client.? 192.168.122.101:0/173568047' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Nov 24 10:15:33 compute-1 ceph-mon[80009]: from='client.? 192.168.122.100:0/1834471110' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Nov 24 10:15:33 compute-1 ceph-mon[80009]: from='client.? 192.168.122.100:0/1389903690' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Nov 24 10:15:33 compute-1 ceph-mon[80009]: from='client.28259 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:15:33 compute-1 ceph-mon[80009]: from='client.26758 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:15:33 compute-1 ceph-mon[80009]: from='client.? 192.168.122.102:0/631828033' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Nov 24 10:15:33 compute-1 ceph-mon[80009]: from='client.? 192.168.122.101:0/3371992669' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Nov 24 10:15:33 compute-1 ceph-mon[80009]: from='client.? 192.168.122.101:0/4241696948' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Nov 24 10:15:33 compute-1 ceph-mon[80009]: from='client.? 192.168.122.100:0/3912695799' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Nov 24 10:15:33 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:15:33 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 10:15:33 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:15:33.036 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 10:15:33 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} v 0)
Nov 24 10:15:33 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3889485254' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Nov 24 10:15:33 compute-1 nova_compute[230010]: 2025-11-24 10:15:33.764 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 10:15:33 compute-1 nova_compute[230010]: 2025-11-24 10:15:33.765 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 10:15:33 compute-1 nova_compute[230010]: 2025-11-24 10:15:33.765 230014 DEBUG nova.compute.manager [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 24 10:15:33 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:15:33 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:15:33 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:15:33.838 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:15:33 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr dump"} v 0)
Nov 24 10:15:33 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4268995238' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Nov 24 10:15:34 compute-1 ceph-mon[80009]: from='client.18990 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:15:34 compute-1 ceph-mon[80009]: from='client.26773 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:15:34 compute-1 ceph-mon[80009]: from='client.28289 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:15:34 compute-1 ceph-mon[80009]: from='client.? 192.168.122.102:0/4157263929' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Nov 24 10:15:34 compute-1 ceph-mon[80009]: from='client.19014 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:15:34 compute-1 ceph-mon[80009]: from='client.26788 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:15:34 compute-1 ceph-mon[80009]: from='client.28298 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:15:34 compute-1 ceph-mon[80009]: from='client.? 192.168.122.100:0/1000993820' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Nov 24 10:15:34 compute-1 ceph-mon[80009]: from='client.? 192.168.122.102:0/934442744' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Nov 24 10:15:34 compute-1 ceph-mon[80009]: from='client.? 192.168.122.101:0/3889485254' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Nov 24 10:15:34 compute-1 ceph-mon[80009]: from='client.19035 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:15:34 compute-1 ceph-mon[80009]: from='client.26809 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:15:34 compute-1 ceph-mon[80009]: pgmap v1470: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:15:34 compute-1 ceph-mon[80009]: from='client.28319 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:15:34 compute-1 ceph-mon[80009]: from='client.? 192.168.122.100:0/1674425236' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Nov 24 10:15:34 compute-1 ceph-mon[80009]: from='client.? 192.168.122.102:0/2610082563' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 24 10:15:34 compute-1 ceph-mon[80009]: from='client.? 192.168.122.101:0/4268995238' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 957883 data_alloc: 218103808 data_used: 151552
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:42:42.729468+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79273984 unmapped: 458752 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:42:43.729604+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79273984 unmapped: 458752 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:42:44.729792+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79273984 unmapped: 458752 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:42:45.729933+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79273984 unmapped: 458752 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:42:46.730050+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79273984 unmapped: 458752 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 957883 data_alloc: 218103808 data_used: 151552
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:42:47.730218+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79273984 unmapped: 458752 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:42:48.730355+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79273984 unmapped: 458752 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:42:49.730482+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79273984 unmapped: 458752 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:42:50.730647+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79273984 unmapped: 458752 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:42:51.730767+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79273984 unmapped: 458752 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 957883 data_alloc: 218103808 data_used: 151552
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:42:52.730925+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79273984 unmapped: 458752 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:42:53.731049+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79273984 unmapped: 458752 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:42:54.731205+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79273984 unmapped: 458752 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:42:55.731367+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79273984 unmapped: 458752 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:42:56.731523+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79298560 unmapped: 434176 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 957883 data_alloc: 218103808 data_used: 151552
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:42:57.731679+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79298560 unmapped: 434176 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:42:58.731841+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79298560 unmapped: 434176 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:42:59.731980+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 150 ms_handle_reset con 0x5634be107400 session 0x5634bef0c5a0
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634bd23a800
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79298560 unmapped: 434176 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:43:00.732161+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79298560 unmapped: 434176 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:43:01.732333+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79298560 unmapped: 434176 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 957883 data_alloc: 218103808 data_used: 151552
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:43:02.732455+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79298560 unmapped: 434176 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:43:03.732642+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 150 ms_handle_reset con 0x5634bee53000 session 0x5634bf225e00
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79298560 unmapped: 434176 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:43:04.732802+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79298560 unmapped: 434176 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:43:05.732986+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79306752 unmapped: 425984 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:43:06.733125+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79306752 unmapped: 425984 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 957883 data_alloc: 218103808 data_used: 151552
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:43:07.733333+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79306752 unmapped: 425984 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:43:08.733473+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79306752 unmapped: 425984 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:43:09.733602+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79306752 unmapped: 425984 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:43:10.733889+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79306752 unmapped: 425984 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:43:11.734110+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79306752 unmapped: 425984 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 957883 data_alloc: 218103808 data_used: 151552
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:43:12.734244+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79306752 unmapped: 425984 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:43:13.734486+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79306752 unmapped: 425984 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:43:14.734704+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79306752 unmapped: 425984 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:43:15.734947+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79306752 unmapped: 425984 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:43:16.735156+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79323136 unmapped: 409600 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 957883 data_alloc: 218103808 data_used: 151552
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634be106400
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 56.697479248s of 56.708225250s, submitted: 3
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:43:17.735498+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79323136 unmapped: 409600 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:43:18.735641+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79323136 unmapped: 409600 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:43:19.735796+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79323136 unmapped: 409600 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634be106800
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:43:20.735965+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79339520 unmapped: 393216 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:43:21.736106+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79339520 unmapped: 393216 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 960907 data_alloc: 218103808 data_used: 151552
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:43:22.736267+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79339520 unmapped: 393216 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:43:23.736564+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79339520 unmapped: 393216 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:43:24.736773+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79347712 unmapped: 385024 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:43:25.736911+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79347712 unmapped: 385024 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:43:26.737092+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79347712 unmapped: 385024 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 960316 data_alloc: 218103808 data_used: 151552
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:43:27.737296+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79347712 unmapped: 385024 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:43:28.737447+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79347712 unmapped: 385024 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:43:29.737629+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79347712 unmapped: 385024 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:43:30.737785+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79347712 unmapped: 385024 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:43:31.737909+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79347712 unmapped: 385024 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 960316 data_alloc: 218103808 data_used: 151552
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:43:32.738057+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79347712 unmapped: 385024 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:43:33.738195+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79347712 unmapped: 385024 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:43:34.738335+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79347712 unmapped: 385024 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:43:35.738540+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79347712 unmapped: 385024 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:43:36.738697+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79364096 unmapped: 368640 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 960316 data_alloc: 218103808 data_used: 151552
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:43:37.739533+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79364096 unmapped: 368640 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:43:38.739665+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79364096 unmapped: 368640 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:43:39.740085+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79364096 unmapped: 368640 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:43:40.742047+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79364096 unmapped: 368640 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:43:41.742262+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79364096 unmapped: 368640 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 960316 data_alloc: 218103808 data_used: 151552
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:43:42.744551+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79364096 unmapped: 368640 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:43:43.746073+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79364096 unmapped: 368640 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:43:44.746285+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79364096 unmapped: 368640 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:43:45.746458+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 150 ms_handle_reset con 0x5634be106800 session 0x5634bfe22d20
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79364096 unmapped: 368640 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:43:46.746592+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79364096 unmapped: 368640 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 960316 data_alloc: 218103808 data_used: 151552
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:43:47.746779+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79364096 unmapped: 368640 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:43:48.746908+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79364096 unmapped: 368640 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:43:49.747125+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79364096 unmapped: 368640 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:43:50.747474+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79364096 unmapped: 368640 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:43:51.747632+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79364096 unmapped: 368640 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 960316 data_alloc: 218103808 data_used: 151552
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:43:52.747818+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79372288 unmapped: 360448 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:43:53.748225+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79372288 unmapped: 360448 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:43:54.748416+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79372288 unmapped: 360448 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:43:55.748596+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79372288 unmapped: 360448 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:43:56.748751+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79388672 unmapped: 344064 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 960316 data_alloc: 218103808 data_used: 151552
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:43:57.749038+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79388672 unmapped: 344064 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:43:58.749442+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79388672 unmapped: 344064 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:43:59.749596+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79388672 unmapped: 344064 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:44:00.749760+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79388672 unmapped: 344064 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:44:01.749876+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79388672 unmapped: 344064 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634be107400
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 960316 data_alloc: 218103808 data_used: 151552
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:44:02.750109+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79388672 unmapped: 344064 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:44:03.750309+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79388672 unmapped: 344064 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:44:04.750449+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79388672 unmapped: 344064 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 48.040000916s of 48.052692413s, submitted: 3
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:44:05.750594+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79388672 unmapped: 344064 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:44:06.750741+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79388672 unmapped: 344064 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 959725 data_alloc: 218103808 data_used: 151552
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:44:07.750947+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79388672 unmapped: 344064 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:44:08.751139+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79405056 unmapped: 327680 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:44:09.751242+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79405056 unmapped: 327680 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:44:10.751380+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79405056 unmapped: 327680 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:44:11.751517+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79405056 unmapped: 327680 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 959134 data_alloc: 218103808 data_used: 151552
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:44:12.751638+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79405056 unmapped: 327680 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:44:13.751784+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79405056 unmapped: 327680 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:44:14.751969+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79405056 unmapped: 327680 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:44:15.752189+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79421440 unmapped: 311296 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:44:16.752366+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79421440 unmapped: 311296 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 959134 data_alloc: 218103808 data_used: 151552
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:44:17.752639+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 150 ms_handle_reset con 0x5634be106400 session 0x5634bef0cb40
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79421440 unmapped: 311296 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:44:18.752818+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79421440 unmapped: 311296 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:44:19.752986+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79421440 unmapped: 311296 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:44:20.753122+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79421440 unmapped: 311296 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:44:21.753292+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79421440 unmapped: 311296 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:44:22.753526+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 959134 data_alloc: 218103808 data_used: 151552
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79421440 unmapped: 311296 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:44:23.753713+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79421440 unmapped: 311296 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:44:24.753899+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79421440 unmapped: 311296 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:44:25.754016+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79421440 unmapped: 311296 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:44:26.754177+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79421440 unmapped: 311296 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:44:27.754359+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 959134 data_alloc: 218103808 data_used: 151552
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79421440 unmapped: 311296 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:44:28.754561+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79421440 unmapped: 311296 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:44:29.754720+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79421440 unmapped: 311296 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:44:30.754853+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79421440 unmapped: 311296 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:44:31.754957+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79421440 unmapped: 311296 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:44:32.755081+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 959134 data_alloc: 218103808 data_used: 151552
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79421440 unmapped: 311296 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:44:33.755217+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79421440 unmapped: 311296 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:44:34.755350+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79421440 unmapped: 311296 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:44:35.755500+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79437824 unmapped: 294912 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:44:36.755667+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79437824 unmapped: 294912 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:44:37.755898+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 959134 data_alloc: 218103808 data_used: 151552
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79437824 unmapped: 294912 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:44:38.756025+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79437824 unmapped: 294912 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:44:39.756177+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79437824 unmapped: 294912 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:44:40.756327+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79437824 unmapped: 294912 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:44:41.756468+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79446016 unmapped: 286720 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:44:42.756608+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 959134 data_alloc: 218103808 data_used: 151552
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79446016 unmapped: 286720 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:44:43.756741+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79446016 unmapped: 286720 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:44:44.756862+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79454208 unmapped: 278528 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:44:45.757025+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79454208 unmapped: 278528 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:44:46.757234+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79454208 unmapped: 278528 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:44:47.757451+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 959134 data_alloc: 218103808 data_used: 151552
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79454208 unmapped: 278528 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:44:48.757646+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79454208 unmapped: 278528 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:44:49.757799+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79454208 unmapped: 278528 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:44:50.757911+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79454208 unmapped: 278528 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:44:51.758023+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79454208 unmapped: 278528 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:44:52.758141+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 959134 data_alloc: 218103808 data_used: 151552
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79454208 unmapped: 278528 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:44:53.758267+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79454208 unmapped: 278528 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:44:54.758390+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79454208 unmapped: 278528 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:44:55.758582+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79470592 unmapped: 262144 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:44:56.758705+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79470592 unmapped: 262144 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:44:57.758898+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 959134 data_alloc: 218103808 data_used: 151552
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79470592 unmapped: 262144 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:44:58.759024+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79470592 unmapped: 262144 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:44:59.759173+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79470592 unmapped: 262144 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:45:00.759313+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79470592 unmapped: 262144 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:45:01.759450+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 56.977294922s of 56.983356476s, submitted: 2
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79470592 unmapped: 262144 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:45:02.759580+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 960646 data_alloc: 218103808 data_used: 151552
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79470592 unmapped: 262144 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:45:03.759708+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79470592 unmapped: 262144 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:45:04.759899+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79470592 unmapped: 262144 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:45:05.760067+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79470592 unmapped: 262144 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:45:06.760267+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79470592 unmapped: 262144 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:45:07.760471+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 960646 data_alloc: 218103808 data_used: 151552
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79470592 unmapped: 262144 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:45:08.760605+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79470592 unmapped: 262144 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:45:09.760780+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79470592 unmapped: 262144 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:45:10.760944+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79470592 unmapped: 262144 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:45:11.761058+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79470592 unmapped: 262144 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:45:12.761205+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 960646 data_alloc: 218103808 data_used: 151552
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79470592 unmapped: 262144 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:45:13.761352+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79470592 unmapped: 262144 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:45:14.761472+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79470592 unmapped: 262144 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:45:15.761619+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79486976 unmapped: 245760 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:45:16.761760+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79486976 unmapped: 245760 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:45:17.761908+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 960646 data_alloc: 218103808 data_used: 151552
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79486976 unmapped: 245760 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:45:18.762243+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79486976 unmapped: 245760 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:45:19.762413+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79486976 unmapped: 245760 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:45:20.762533+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79486976 unmapped: 245760 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:45:21.762671+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79486976 unmapped: 245760 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:45:22.762829+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 960646 data_alloc: 218103808 data_used: 151552
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79486976 unmapped: 245760 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:45:23.762979+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79486976 unmapped: 245760 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:45:24.763128+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79486976 unmapped: 245760 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:45:25.763246+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 150 ms_handle_reset con 0x5634be107400 session 0x5634c029ba40
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79486976 unmapped: 245760 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:45:26.763357+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79486976 unmapped: 245760 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:45:27.763524+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 960646 data_alloc: 218103808 data_used: 151552
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79486976 unmapped: 245760 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:45:28.763707+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:45:29.763864+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79486976 unmapped: 245760 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79486976 unmapped: 245760 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:45:30.784740+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79486976 unmapped: 245760 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:45:31.784855+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79486976 unmapped: 245760 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:45:32.784985+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 960646 data_alloc: 218103808 data_used: 151552
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79486976 unmapped: 245760 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:45:33.785118+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79486976 unmapped: 245760 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:45:34.785235+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79486976 unmapped: 245760 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:45:35.785336+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:45:36.785482+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79503360 unmapped: 229376 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:45:37.785712+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79503360 unmapped: 229376 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 960646 data_alloc: 218103808 data_used: 151552
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:45:38.785869+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79503360 unmapped: 229376 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634bf0fc000
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 36.226074219s of 36.229869843s, submitted: 1
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:45:39.786055+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79519744 unmapped: 212992 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:45:40.786199+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79519744 unmapped: 212992 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:45:41.786460+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79519744 unmapped: 212992 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634bf0fc400
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:45:42.786607+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79560704 unmapped: 172032 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634bf727000
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 963670 data_alloc: 218103808 data_used: 151552
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:45:43.786718+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79560704 unmapped: 172032 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:45:44.786836+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79560704 unmapped: 172032 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:45:45.787016+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79560704 unmapped: 172032 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 150 ms_handle_reset con 0x5634bf0fc000 session 0x5634bd20be00
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:45:46.787157+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79560704 unmapped: 172032 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:45:47.787299+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79560704 unmapped: 172032 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 963670 data_alloc: 218103808 data_used: 151552
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:45:48.787439+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79560704 unmapped: 172032 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.313743591s of 10.320786476s, submitted: 2
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:45:49.787562+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79568896 unmapped: 163840 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:45:50.787679+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79568896 unmapped: 163840 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:45:51.787841+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79568896 unmapped: 163840 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:45:52.787976+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79568896 unmapped: 163840 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 963079 data_alloc: 218103808 data_used: 151552
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:45:53.788102+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79568896 unmapped: 163840 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:45:54.788237+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79568896 unmapped: 163840 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:45:55.788376+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79568896 unmapped: 163840 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 150 ms_handle_reset con 0x5634bf727000 session 0x5634c0304f00
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:45:56.788547+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79585280 unmapped: 147456 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:45:57.788700+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79585280 unmapped: 147456 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 963079 data_alloc: 218103808 data_used: 151552
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:45:58.788811+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79585280 unmapped: 147456 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:45:59.788953+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79585280 unmapped: 147456 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.823747635s of 10.826331139s, submitted: 1
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:46:00.789060+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79585280 unmapped: 147456 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:46:01.789172+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79585280 unmapped: 147456 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:46:02.789273+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79585280 unmapped: 147456 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634be106400
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 964591 data_alloc: 218103808 data_used: 151552
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Cumulative writes: 6909 writes, 27K keys, 6909 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
                                           Cumulative WAL: 6909 writes, 1355 syncs, 5.10 writes per sync, written: 0.02 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 485 writes, 766 keys, 485 commit groups, 1.0 writes per commit group, ingest: 0.25 MB, 0.00 MB/s
                                           Interval WAL: 485 writes, 231 syncs, 2.10 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5634bb9db350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 2.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5634bb9db350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 2.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5634bb9db350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 2.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5634bb9db350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 2.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.6      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.6      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.6      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5634bb9db350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 2.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5634bb9db350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 2.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5634bb9db350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 2.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5634bb9da9b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5634bb9da9b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.6      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.6      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.6      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5634bb9da9b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5634bb9db350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 2.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5634bb9db350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 2.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:46:03.789393+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79585280 unmapped: 147456 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:46:04.789520+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79585280 unmapped: 147456 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:46:05.789646+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79585280 unmapped: 147456 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:46:06.789807+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79585280 unmapped: 147456 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:46:07.789936+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79585280 unmapped: 147456 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 964591 data_alloc: 218103808 data_used: 151552
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:46:08.790055+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79585280 unmapped: 147456 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:46:09.790148+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79585280 unmapped: 147456 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.365579605s of 10.368579865s, submitted: 1
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:46:10.790262+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79585280 unmapped: 147456 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:46:11.790426+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79585280 unmapped: 147456 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:46:12.790574+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79585280 unmapped: 147456 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 964000 data_alloc: 218103808 data_used: 151552
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:46:13.790686+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79585280 unmapped: 147456 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634be106800
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:46:14.790792+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79585280 unmapped: 147456 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:46:15.790898+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79585280 unmapped: 147456 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 150 ms_handle_reset con 0x5634bf0fc400 session 0x5634bd030780
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:46:16.791015+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79609856 unmapped: 122880 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:46:17.791170+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79609856 unmapped: 122880 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 964921 data_alloc: 218103808 data_used: 151552
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:46:18.791332+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79609856 unmapped: 122880 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:46:19.791694+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79609856 unmapped: 122880 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:46:20.791815+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79609856 unmapped: 122880 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:46:21.791996+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79609856 unmapped: 122880 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:46:22.792142+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79609856 unmapped: 122880 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 964921 data_alloc: 218103808 data_used: 151552
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:46:23.792345+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79609856 unmapped: 122880 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:46:24.792715+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79609856 unmapped: 122880 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:46:25.793621+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79609856 unmapped: 122880 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:46:26.793770+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79609856 unmapped: 122880 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:46:27.794103+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79609856 unmapped: 122880 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 964921 data_alloc: 218103808 data_used: 151552
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:46:28.794621+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79609856 unmapped: 122880 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:46:29.795583+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79609856 unmapped: 122880 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:46:30.795712+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79609856 unmapped: 122880 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:46:31.795860+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79609856 unmapped: 122880 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:46:32.795977+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79609856 unmapped: 122880 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 964921 data_alloc: 218103808 data_used: 151552
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:46:33.796106+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79609856 unmapped: 122880 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 22.830619812s of 23.935253143s, submitted: 3
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:46:34.796439+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79609856 unmapped: 122880 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:46:35.796557+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79609856 unmapped: 122880 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:46:36.796714+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79626240 unmapped: 106496 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:46:37.796905+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79626240 unmapped: 106496 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 966433 data_alloc: 218103808 data_used: 151552
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:46:38.797037+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79626240 unmapped: 106496 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:46:39.797218+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79626240 unmapped: 106496 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:46:40.797550+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79626240 unmapped: 106496 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:46:41.797753+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79626240 unmapped: 106496 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:46:42.797916+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79626240 unmapped: 106496 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 965842 data_alloc: 218103808 data_used: 151552
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:46:43.798116+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79626240 unmapped: 106496 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:46:44.798326+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79626240 unmapped: 106496 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:46:45.798508+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79626240 unmapped: 106496 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:46:46.798692+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79626240 unmapped: 106496 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:46:47.798923+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79626240 unmapped: 106496 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 965842 data_alloc: 218103808 data_used: 151552
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:46:48.799087+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79626240 unmapped: 106496 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:46:49.799461+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79626240 unmapped: 106496 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:46:50.799623+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79626240 unmapped: 106496 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:46:51.799743+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79626240 unmapped: 106496 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:46:52.799891+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79626240 unmapped: 106496 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 965842 data_alloc: 218103808 data_used: 151552
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:46:53.800063+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79626240 unmapped: 106496 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:46:54.800303+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79626240 unmapped: 106496 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:46:55.800450+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79626240 unmapped: 106496 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:46:56.800599+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79642624 unmapped: 90112 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:46:57.800758+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79642624 unmapped: 90112 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 965842 data_alloc: 218103808 data_used: 151552
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:46:58.800955+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79642624 unmapped: 90112 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:46:59.801076+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79642624 unmapped: 90112 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:47:00.801237+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79642624 unmapped: 90112 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:47:01.801330+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79642624 unmapped: 90112 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:47:02.801454+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79642624 unmapped: 90112 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 965842 data_alloc: 218103808 data_used: 151552
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:47:03.801580+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79642624 unmapped: 90112 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:47:04.801712+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79642624 unmapped: 90112 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:47:05.801809+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79642624 unmapped: 90112 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread fragmentation_score=0.000026 took=0.000038s
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:47:06.801945+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79642624 unmapped: 90112 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:47:07.802092+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79642624 unmapped: 90112 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 965842 data_alloc: 218103808 data_used: 151552
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:47:08.802252+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79642624 unmapped: 90112 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:47:09.802385+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79642624 unmapped: 90112 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:47:10.802512+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79642624 unmapped: 90112 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:47:11.802665+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79642624 unmapped: 90112 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:47:12.802820+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79642624 unmapped: 90112 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 965842 data_alloc: 218103808 data_used: 151552
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:47:13.802950+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79642624 unmapped: 90112 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:47:14.803073+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79642624 unmapped: 90112 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:47:15.803225+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79642624 unmapped: 90112 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:47:16.803393+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79659008 unmapped: 73728 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:47:17.803585+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79659008 unmapped: 73728 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 965842 data_alloc: 218103808 data_used: 151552
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:47:18.803729+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79659008 unmapped: 73728 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:47:19.803896+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79659008 unmapped: 73728 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:47:20.804026+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79659008 unmapped: 73728 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 47.325916290s of 47.341407776s, submitted: 2
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:47:21.804150+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79683584 unmapped: 49152 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:47:22.804313+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [0,0,1])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79986688 unmapped: 1843200 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 965842 data_alloc: 218103808 data_used: 151552
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:47:23.804551+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80158720 unmapped: 1671168 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:47:24.825851+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80158720 unmapped: 1671168 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:47:25.826918+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80166912 unmapped: 1662976 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:47:26.827865+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80166912 unmapped: 1662976 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:47:27.828557+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80166912 unmapped: 1662976 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 965842 data_alloc: 218103808 data_used: 151552
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:47:28.829220+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80166912 unmapped: 1662976 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:47:29.829595+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80166912 unmapped: 1662976 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:47:30.829826+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80166912 unmapped: 1662976 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:47:31.830003+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80166912 unmapped: 1662976 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:47:32.830193+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80166912 unmapped: 1662976 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 965842 data_alloc: 218103808 data_used: 151552
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:47:33.830511+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80175104 unmapped: 1654784 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:47:34.830705+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80175104 unmapped: 1654784 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:47:35.831384+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80175104 unmapped: 1654784 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:47:36.831928+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80183296 unmapped: 1646592 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:47:37.832273+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80183296 unmapped: 1646592 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 965842 data_alloc: 218103808 data_used: 151552
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:47:38.832522+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80183296 unmapped: 1646592 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:47:39.832991+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80183296 unmapped: 1646592 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:47:40.833470+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80183296 unmapped: 1646592 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:47:41.833833+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80183296 unmapped: 1646592 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:47:42.834108+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80183296 unmapped: 1646592 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 965842 data_alloc: 218103808 data_used: 151552
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:47:43.834251+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80191488 unmapped: 1638400 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:47:44.834525+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80191488 unmapped: 1638400 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:47:45.834686+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80191488 unmapped: 1638400 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:47:46.834984+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80191488 unmapped: 1638400 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:47:47.835247+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80191488 unmapped: 1638400 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 965842 data_alloc: 218103808 data_used: 151552
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:47:48.835423+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80191488 unmapped: 1638400 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:47:49.835646+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80199680 unmapped: 1630208 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:47:50.835851+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80199680 unmapped: 1630208 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:47:51.836043+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80199680 unmapped: 1630208 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:47:52.836225+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80207872 unmapped: 1622016 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 965842 data_alloc: 218103808 data_used: 151552
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:47:53.836386+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80207872 unmapped: 1622016 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:47:54.836556+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 32.360378265s of 33.329280853s, submitted: 257
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80207872 unmapped: 1622016 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:47:55.836973+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80216064 unmapped: 1613824 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:47:56.837435+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80216064 unmapped: 1613824 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:47:57.837820+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80216064 unmapped: 1613824 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 965251 data_alloc: 218103808 data_used: 151552
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:47:58.838172+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80216064 unmapped: 1613824 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:47:59.838502+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80216064 unmapped: 1613824 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:48:00.838785+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80216064 unmapped: 1613824 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:48:01.839051+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80232448 unmapped: 1597440 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:48:02.839333+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80232448 unmapped: 1597440 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 965251 data_alloc: 218103808 data_used: 151552
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:48:03.839572+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80232448 unmapped: 1597440 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:48:04.839736+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80232448 unmapped: 1597440 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:48:05.839984+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80232448 unmapped: 1597440 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:48:06.840214+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80232448 unmapped: 1597440 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:48:07.840470+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80232448 unmapped: 1597440 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 965251 data_alloc: 218103808 data_used: 151552
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 150 ms_handle_reset con 0x5634be106800 session 0x5634c0305c20
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:48:08.840669+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80232448 unmapped: 1597440 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:48:09.840822+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80232448 unmapped: 1597440 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:48:10.841230+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80232448 unmapped: 1597440 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:48:11.841421+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80232448 unmapped: 1597440 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:48:12.841599+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80232448 unmapped: 1597440 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 965251 data_alloc: 218103808 data_used: 151552
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:48:13.841834+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80232448 unmapped: 1597440 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:48:14.841998+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80232448 unmapped: 1597440 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:48:15.842214+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80232448 unmapped: 1597440 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:48:16.842447+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80232448 unmapped: 1597440 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:48:17.842632+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80232448 unmapped: 1597440 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 965251 data_alloc: 218103808 data_used: 151552
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:48:18.842786+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80232448 unmapped: 1597440 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:48:19.842941+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80232448 unmapped: 1597440 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:48:20.843129+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80248832 unmapped: 1581056 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:48:21.843281+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80248832 unmapped: 1581056 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:48:22.843426+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80248832 unmapped: 1581056 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 965251 data_alloc: 218103808 data_used: 151552
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:48:23.843631+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80248832 unmapped: 1581056 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:48:24.843844+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80248832 unmapped: 1581056 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634be107400
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 30.545049667s of 30.567733765s, submitted: 1
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:48:25.844061+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80265216 unmapped: 1564672 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:48:26.844281+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80265216 unmapped: 1564672 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:48:27.844396+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80265216 unmapped: 1564672 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 968275 data_alloc: 218103808 data_used: 151552
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:48:28.844568+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80265216 unmapped: 1564672 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:48:29.844729+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80265216 unmapped: 1564672 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:48:30.844858+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80265216 unmapped: 1564672 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:48:31.844999+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80265216 unmapped: 1564672 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:48:32.845147+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80265216 unmapped: 1564672 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 969787 data_alloc: 218103808 data_used: 151552
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:48:33.845289+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80265216 unmapped: 1564672 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:48:34.845468+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80265216 unmapped: 1564672 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:48:35.845614+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80265216 unmapped: 1564672 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:48:36.845752+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80265216 unmapped: 1564672 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.074189186s of 12.084068298s, submitted: 3
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:48:37.845919+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80265216 unmapped: 1564672 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 969196 data_alloc: 218103808 data_used: 151552
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:48:38.846046+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80273408 unmapped: 1556480 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:48:39.846151+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80273408 unmapped: 1556480 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:48:40.846283+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80289792 unmapped: 1540096 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:48:41.846436+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80289792 unmapped: 1540096 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:48:42.846556+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80289792 unmapped: 1540096 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 969196 data_alloc: 218103808 data_used: 151552
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:48:43.846686+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80289792 unmapped: 1540096 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:48:44.846834+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80289792 unmapped: 1540096 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:48:45.847005+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80297984 unmapped: 1531904 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:48:46.847142+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80297984 unmapped: 1531904 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:48:47.847501+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80297984 unmapped: 1531904 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 969196 data_alloc: 218103808 data_used: 151552
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:48:48.847897+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80306176 unmapped: 1523712 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:48:49.848133+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80314368 unmapped: 1515520 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:48:50.848295+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80314368 unmapped: 1515520 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:48:51.848477+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80314368 unmapped: 1515520 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:48:52.848604+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80314368 unmapped: 1515520 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 969196 data_alloc: 218103808 data_used: 151552
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:48:53.848758+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80314368 unmapped: 1515520 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:48:54.848928+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80314368 unmapped: 1515520 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:48:55.849142+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80314368 unmapped: 1515520 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:48:56.849305+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80314368 unmapped: 1515520 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:48:57.849471+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80314368 unmapped: 1515520 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 969196 data_alloc: 218103808 data_used: 151552
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:48:58.849606+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80314368 unmapped: 1515520 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:48:59.849971+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80314368 unmapped: 1515520 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:49:00.850604+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80314368 unmapped: 1515520 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:49:01.851208+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80314368 unmapped: 1515520 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:49:02.851650+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80314368 unmapped: 1515520 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 969196 data_alloc: 218103808 data_used: 151552
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:49:03.852087+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80314368 unmapped: 1515520 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 27.175689697s of 27.178615570s, submitted: 1
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:49:04.852455+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80314368 unmapped: 1515520 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:49:05.852748+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80314368 unmapped: 1515520 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:49:06.852993+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80314368 unmapped: 1515520 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:49:07.853300+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80314368 unmapped: 1515520 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:49:08.853513+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 970708 data_alloc: 218103808 data_used: 151552
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80314368 unmapped: 1515520 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:49:09.853726+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80322560 unmapped: 1507328 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:49:10.853866+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80322560 unmapped: 1507328 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:49:11.854048+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80322560 unmapped: 1507328 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:49:12.854223+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80322560 unmapped: 1507328 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:49:13.854382+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 969526 data_alloc: 218103808 data_used: 151552
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80322560 unmapped: 1507328 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:49:14.854565+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80322560 unmapped: 1507328 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:49:15.854704+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80322560 unmapped: 1507328 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:49:16.854930+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80322560 unmapped: 1507328 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:49:17.855195+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 150 ms_handle_reset con 0x5634be106400 session 0x5634c029b0e0
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80322560 unmapped: 1507328 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:49:18.855384+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 969526 data_alloc: 218103808 data_used: 151552
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80322560 unmapped: 1507328 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:49:19.855883+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80322560 unmapped: 1507328 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:49:20.856304+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80322560 unmapped: 1507328 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:49:21.856662+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80322560 unmapped: 1507328 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:49:22.856878+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80322560 unmapped: 1507328 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:49:23.857053+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 969526 data_alloc: 218103808 data_used: 151552
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80322560 unmapped: 1507328 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:49:24.857337+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:49:25.857661+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80322560 unmapped: 1507328 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:49:26.857913+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80322560 unmapped: 1507328 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:49:27.858166+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80322560 unmapped: 1507328 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 969526 data_alloc: 218103808 data_used: 151552
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:49:29.570987+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80322560 unmapped: 1507328 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:49:30.571220+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80322560 unmapped: 1507328 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:49:31.571502+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80322560 unmapped: 1507328 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:49:32.572674+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80322560 unmapped: 1507328 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:49:33.572833+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80322560 unmapped: 1507328 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 969526 data_alloc: 218103808 data_used: 151552
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:49:34.572952+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80322560 unmapped: 1507328 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634bee53000
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 30.582208633s of 30.591884613s, submitted: 3
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:49:35.573085+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80322560 unmapped: 1507328 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:49:36.573253+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80322560 unmapped: 1507328 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:49:37.573363+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80322560 unmapped: 1507328 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:49:38.573563+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80322560 unmapped: 1507328 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 971038 data_alloc: 218103808 data_used: 151552
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:49:39.573693+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80322560 unmapped: 1507328 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:49:40.573814+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80322560 unmapped: 1507328 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:49:41.573959+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80338944 unmapped: 1490944 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:49:42.574079+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80338944 unmapped: 1490944 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:49:43.574232+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80338944 unmapped: 1490944 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 970447 data_alloc: 218103808 data_used: 151552
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:49:44.574350+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80338944 unmapped: 1490944 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:49:45.574442+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80338944 unmapped: 1490944 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:49:46.574575+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80338944 unmapped: 1490944 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:49:47.574864+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80338944 unmapped: 1490944 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:49:48.575099+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80338944 unmapped: 1490944 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 970447 data_alloc: 218103808 data_used: 151552
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:49:49.575250+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80338944 unmapped: 1490944 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:49:50.575553+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80338944 unmapped: 1490944 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:49:51.575734+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80338944 unmapped: 1490944 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:49:52.575904+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80338944 unmapped: 1490944 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:49:53.576039+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80338944 unmapped: 1490944 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 970447 data_alloc: 218103808 data_used: 151552
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:49:54.576154+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80338944 unmapped: 1490944 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:49:55.576332+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80338944 unmapped: 1490944 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:49:56.576507+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80338944 unmapped: 1490944 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:49:57.576626+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80338944 unmapped: 1490944 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:49:58.576802+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80338944 unmapped: 1490944 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 970447 data_alloc: 218103808 data_used: 151552
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:49:59.576933+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80338944 unmapped: 1490944 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:50:00.577125+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80338944 unmapped: 1490944 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:50:01.577251+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80355328 unmapped: 1474560 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:50:02.577473+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80355328 unmapped: 1474560 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:50:03.577631+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80355328 unmapped: 1474560 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 970447 data_alloc: 218103808 data_used: 151552
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:50:04.577770+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80355328 unmapped: 1474560 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:50:05.577916+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80355328 unmapped: 1474560 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:50:06.578107+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80355328 unmapped: 1474560 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:50:07.578237+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80355328 unmapped: 1474560 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:50:08.578369+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80355328 unmapped: 1474560 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 970447 data_alloc: 218103808 data_used: 151552
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:50:09.578498+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80355328 unmapped: 1474560 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:50:10.578633+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80355328 unmapped: 1474560 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:50:11.578759+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80355328 unmapped: 1474560 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:50:12.578936+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80355328 unmapped: 1474560 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:50:13.579117+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80355328 unmapped: 1474560 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 970447 data_alloc: 218103808 data_used: 151552
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:50:14.579262+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80355328 unmapped: 1474560 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 39.724964142s of 39.735435486s, submitted: 2
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:50:15.579390+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80355328 unmapped: 1474560 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:50:16.579534+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80355328 unmapped: 1474560 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:50:17.579651+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80355328 unmapped: 1474560 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:50:18.579869+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80355328 unmapped: 1474560 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 969856 data_alloc: 218103808 data_used: 151552
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:50:19.580061+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80355328 unmapped: 1474560 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:50:20.580204+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80355328 unmapped: 1474560 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:50:21.580388+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80371712 unmapped: 1458176 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:50:22.580555+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80371712 unmapped: 1458176 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:50:23.580702+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80371712 unmapped: 1458176 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 969856 data_alloc: 218103808 data_used: 151552
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:50:24.580824+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80371712 unmapped: 1458176 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:50:25.580944+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80379904 unmapped: 1449984 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:50:26.581099+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80379904 unmapped: 1449984 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:50:27.581229+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80379904 unmapped: 1449984 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:50:28.581461+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80379904 unmapped: 1449984 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 969856 data_alloc: 218103808 data_used: 151552
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:50:29.581583+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80379904 unmapped: 1449984 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:50:30.581700+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80379904 unmapped: 1449984 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:50:31.581888+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80379904 unmapped: 1449984 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:50:32.582053+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80379904 unmapped: 1449984 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:50:33.582207+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80379904 unmapped: 1449984 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 969856 data_alloc: 218103808 data_used: 151552
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:50:34.582366+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80379904 unmapped: 1449984 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:50:35.582507+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80379904 unmapped: 1449984 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:50:36.582658+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80379904 unmapped: 1449984 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:50:37.582796+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80379904 unmapped: 1449984 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:50:38.582965+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80379904 unmapped: 1449984 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 969856 data_alloc: 218103808 data_used: 151552
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:50:39.583084+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80379904 unmapped: 1449984 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:50:40.583243+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80379904 unmapped: 1449984 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:50:41.583502+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80396288 unmapped: 1433600 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:50:42.583642+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80396288 unmapped: 1433600 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:50:43.583836+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80396288 unmapped: 1433600 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 969856 data_alloc: 218103808 data_used: 151552
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:50:44.583959+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80396288 unmapped: 1433600 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:50:45.584117+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80396288 unmapped: 1433600 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:50:46.584244+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80396288 unmapped: 1433600 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:50:47.584391+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80396288 unmapped: 1433600 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:50:48.584791+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80396288 unmapped: 1433600 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 969856 data_alloc: 218103808 data_used: 151552
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:50:49.584913+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80396288 unmapped: 1433600 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:50:50.585087+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80396288 unmapped: 1433600 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:50:51.585233+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80396288 unmapped: 1433600 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:50:52.585478+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80396288 unmapped: 1433600 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:50:53.585650+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80396288 unmapped: 1433600 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 969856 data_alloc: 218103808 data_used: 151552
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:50:54.585790+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80396288 unmapped: 1433600 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:50:55.585951+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80396288 unmapped: 1433600 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:50:56.586142+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80396288 unmapped: 1433600 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:50:57.586301+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80396288 unmapped: 1433600 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:50:58.586474+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80396288 unmapped: 1433600 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 969856 data_alloc: 218103808 data_used: 151552
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:50:59.586591+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80396288 unmapped: 1433600 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:51:00.586719+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80396288 unmapped: 1433600 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:51:01.586960+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80412672 unmapped: 1417216 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:51:02.587131+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80412672 unmapped: 1417216 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:51:03.587287+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80412672 unmapped: 1417216 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 969856 data_alloc: 218103808 data_used: 151552
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:51:04.587485+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80412672 unmapped: 1417216 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:51:05.587705+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80412672 unmapped: 1417216 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:51:06.587830+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80412672 unmapped: 1417216 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:51:07.587984+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80412672 unmapped: 1417216 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:51:08.588165+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80412672 unmapped: 1417216 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 969856 data_alloc: 218103808 data_used: 151552
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:51:09.588448+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80412672 unmapped: 1417216 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:51:10.589303+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80412672 unmapped: 1417216 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:51:11.589818+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80412672 unmapped: 1417216 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:51:12.591571+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80412672 unmapped: 1417216 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:51:13.591843+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80412672 unmapped: 1417216 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 969856 data_alloc: 218103808 data_used: 151552
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:51:14.592494+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80412672 unmapped: 1417216 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:51:15.592636+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80412672 unmapped: 1417216 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:51:16.593130+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80412672 unmapped: 1417216 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:51:17.593318+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80412672 unmapped: 1417216 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:51:18.593509+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80412672 unmapped: 1417216 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 969856 data_alloc: 218103808 data_used: 151552
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:51:19.593693+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80412672 unmapped: 1417216 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:51:20.593826+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80412672 unmapped: 1417216 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:51:21.593970+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80429056 unmapped: 1400832 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:51:22.594539+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80429056 unmapped: 1400832 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:51:23.594679+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80429056 unmapped: 1400832 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 969856 data_alloc: 218103808 data_used: 151552
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:51:24.594889+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80429056 unmapped: 1400832 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:51:25.595156+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80429056 unmapped: 1400832 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:51:26.595421+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80429056 unmapped: 1400832 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:51:27.595575+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80429056 unmapped: 1400832 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:51:28.595759+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80429056 unmapped: 1400832 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 969856 data_alloc: 218103808 data_used: 151552
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:51:29.595920+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 150 ms_handle_reset con 0x5634bee53000 session 0x5634c055af00
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80429056 unmapped: 1400832 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:51:30.596113+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80429056 unmapped: 1400832 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:51:31.596243+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80429056 unmapped: 1400832 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:51:32.596365+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80429056 unmapped: 1400832 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:51:33.596493+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80429056 unmapped: 1400832 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 969856 data_alloc: 218103808 data_used: 151552
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:51:34.596630+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80429056 unmapped: 1400832 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:51:35.596890+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80429056 unmapped: 1400832 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:51:36.597140+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80429056 unmapped: 1400832 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:51:37.597261+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80429056 unmapped: 1400832 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:51:38.597452+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80429056 unmapped: 1400832 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 969856 data_alloc: 218103808 data_used: 151552
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:51:39.597575+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80429056 unmapped: 1400832 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:51:40.597713+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80429056 unmapped: 1400832 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:51:41.597791+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80445440 unmapped: 1384448 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:51:42.597926+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80445440 unmapped: 1384448 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:51:43.598055+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 88.969955444s of 88.974121094s, submitted: 1
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80445440 unmapped: 1384448 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 971368 data_alloc: 218103808 data_used: 151552
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:51:44.598162+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80445440 unmapped: 1384448 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:51:45.598361+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80445440 unmapped: 1384448 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:51:46.598504+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80445440 unmapped: 1384448 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:51:47.598608+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80445440 unmapped: 1384448 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:51:48.598800+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80445440 unmapped: 1384448 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 971368 data_alloc: 218103808 data_used: 151552
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:51:49.598871+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80445440 unmapped: 1384448 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:51:50.598991+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80445440 unmapped: 1384448 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:51:51.599106+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80445440 unmapped: 1384448 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:51:52.599245+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80445440 unmapped: 1384448 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:51:53.599394+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80445440 unmapped: 1384448 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:51:54.599578+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 971368 data_alloc: 218103808 data_used: 151552
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80445440 unmapped: 1384448 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:51:55.599693+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80445440 unmapped: 1384448 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:51:56.599825+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80445440 unmapped: 1384448 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:51:57.599950+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80445440 unmapped: 1384448 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:51:58.600126+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80445440 unmapped: 1384448 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:51:59.600193+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 971368 data_alloc: 218103808 data_used: 151552
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80445440 unmapped: 1384448 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:52:00.600332+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80461824 unmapped: 1368064 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:52:01.600453+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80461824 unmapped: 1368064 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:52:02.600563+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80461824 unmapped: 1368064 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:52:03.600673+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80461824 unmapped: 1368064 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:52:04.600769+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 971368 data_alloc: 218103808 data_used: 151552
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80461824 unmapped: 1368064 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:52:05.600897+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80461824 unmapped: 1368064 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:52:06.600973+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80461824 unmapped: 1368064 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:52:07.601096+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634be106400
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 150 ms_handle_reset con 0x5634be107400 session 0x5634c06b1680
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80486400 unmapped: 1343488 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:52:08.601255+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 150 handle_osd_map epochs [150,151], i have 150, src has [1,151]
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 24.761932373s of 24.764957428s, submitted: 1
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 81534976 unmapped: 294912 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:52:09.601367+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 975134 data_alloc: 218103808 data_used: 151552
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fc9e9000/0x0/0x4ffc00000, data 0x179901/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 151 handle_osd_map epochs [151,152], i have 151, src has [1,152]
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:52:10.601498+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 81616896 unmapped: 16998400 heap: 98615296 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 152 heartbeat osd_stat(store_statfs(0x4fc1e4000/0x0/0x4ffc00000, data 0x97ba51/0xa36000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 152 handle_osd_map epochs [152,153], i have 152, src has [1,153]
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 153 ms_handle_reset con 0x5634be106400 session 0x5634bf7bed20
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:52:11.601627+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 81747968 unmapped: 16867328 heap: 98615296 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634be106800
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:52:12.601784+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 81780736 unmapped: 16834560 heap: 98615296 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _renew_subs
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 153 handle_osd_map epochs [154,154], i have 153, src has [1,154]
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 154 ms_handle_reset con 0x5634be106800 session 0x5634c06e63c0
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 154 handle_osd_map epochs [155,155], i have 154, src has [1,155]
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:52:13.601907+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 81821696 unmapped: 16793600 heap: 98615296 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 155 heartbeat osd_stat(store_statfs(0x4fbd68000/0x0/0x4ffc00000, data 0xdf1c79/0xeb1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:52:14.602027+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 81821696 unmapped: 16793600 heap: 98615296 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1077664 data_alloc: 218103808 data_used: 151552
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 155 heartbeat osd_stat(store_statfs(0x4fbd68000/0x0/0x4ffc00000, data 0xdf1c79/0xeb1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:52:15.602169+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 81821696 unmapped: 16793600 heap: 98615296 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:52:16.602314+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 81821696 unmapped: 16793600 heap: 98615296 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:52:17.602475+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 81821696 unmapped: 16793600 heap: 98615296 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:52:18.602632+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 81821696 unmapped: 16793600 heap: 98615296 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:52:19.602755+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 81821696 unmapped: 16793600 heap: 98615296 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1077816 data_alloc: 218103808 data_used: 155648
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:52:20.602887+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 81821696 unmapped: 16793600 heap: 98615296 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 155 heartbeat osd_stat(store_statfs(0x4fbd68000/0x0/0x4ffc00000, data 0xdf1c79/0xeb1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:52:21.603035+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 81838080 unmapped: 16777216 heap: 98615296 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 155 heartbeat osd_stat(store_statfs(0x4fbd68000/0x0/0x4ffc00000, data 0xdf1c79/0xeb1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:52:22.603185+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 81838080 unmapped: 16777216 heap: 98615296 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:52:23.603312+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 81838080 unmapped: 16777216 heap: 98615296 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:52:24.603469+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 81838080 unmapped: 16777216 heap: 98615296 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1077816 data_alloc: 218103808 data_used: 155648
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634bf0fc400
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 16.009191513s of 16.191659927s, submitted: 44
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:52:25.603589+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 81838080 unmapped: 16777216 heap: 98615296 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 155 heartbeat osd_stat(store_statfs(0x4fbd68000/0x0/0x4ffc00000, data 0xdf1c79/0xeb1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:52:26.603732+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 81838080 unmapped: 16777216 heap: 98615296 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:52:27.603867+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 81838080 unmapped: 16777216 heap: 98615296 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:52:28.604008+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 81838080 unmapped: 16777216 heap: 98615296 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 155 heartbeat osd_stat(store_statfs(0x4fbd68000/0x0/0x4ffc00000, data 0xdf1c79/0xeb1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:52:29.604139+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 81838080 unmapped: 16777216 heap: 98615296 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1077816 data_alloc: 218103808 data_used: 155648
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:52:30.604254+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 81838080 unmapped: 16777216 heap: 98615296 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 155 heartbeat osd_stat(store_statfs(0x4fbd68000/0x0/0x4ffc00000, data 0xdf1c79/0xeb1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:52:31.604450+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 81838080 unmapped: 16777216 heap: 98615296 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:52:32.604597+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 81838080 unmapped: 16777216 heap: 98615296 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:52:33.604733+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 81838080 unmapped: 16777216 heap: 98615296 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:52:34.604890+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 81838080 unmapped: 16777216 heap: 98615296 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1077816 data_alloc: 218103808 data_used: 155648
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 155 heartbeat osd_stat(store_statfs(0x4fbd68000/0x0/0x4ffc00000, data 0xdf1c79/0xeb1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:52:35.605028+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 81838080 unmapped: 16777216 heap: 98615296 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:52:36.605162+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 81838080 unmapped: 16777216 heap: 98615296 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:52:37.605305+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 81838080 unmapped: 16777216 heap: 98615296 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:52:38.605625+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 81838080 unmapped: 16777216 heap: 98615296 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 155 heartbeat osd_stat(store_statfs(0x4fbd68000/0x0/0x4ffc00000, data 0xdf1c79/0xeb1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:52:39.605827+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 81838080 unmapped: 16777216 heap: 98615296 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1077816 data_alloc: 218103808 data_used: 155648
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:52:40.605960+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 81838080 unmapped: 16777216 heap: 98615296 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:52:41.606104+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 81838080 unmapped: 16777216 heap: 98615296 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:52:42.606247+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 81838080 unmapped: 16777216 heap: 98615296 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:52:43.606490+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 81838080 unmapped: 16777216 heap: 98615296 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:52:44.606620+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 81838080 unmapped: 16777216 heap: 98615296 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1077816 data_alloc: 218103808 data_used: 155648
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 155 heartbeat osd_stat(store_statfs(0x4fbd68000/0x0/0x4ffc00000, data 0xdf1c79/0xeb1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:52:45.606738+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 81838080 unmapped: 16777216 heap: 98615296 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:52:46.608450+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 81838080 unmapped: 16777216 heap: 98615296 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:52:47.608711+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 81838080 unmapped: 16777216 heap: 98615296 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:52:48.609258+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 81838080 unmapped: 16777216 heap: 98615296 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:52:49.610199+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1077816 data_alloc: 218103808 data_used: 155648
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 81838080 unmapped: 16777216 heap: 98615296 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:52:50.611051+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 155 heartbeat osd_stat(store_statfs(0x4fbd68000/0x0/0x4ffc00000, data 0xdf1c79/0xeb1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 81838080 unmapped: 16777216 heap: 98615296 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:52:51.611615+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 81838080 unmapped: 16777216 heap: 98615296 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:52:52.612251+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 81838080 unmapped: 16777216 heap: 98615296 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:52:53.612925+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 81838080 unmapped: 16777216 heap: 98615296 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:52:54.613498+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1077816 data_alloc: 218103808 data_used: 155648
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 81838080 unmapped: 16777216 heap: 98615296 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 155 heartbeat osd_stat(store_statfs(0x4fbd68000/0x0/0x4ffc00000, data 0xdf1c79/0xeb1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:52:55.613858+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 81838080 unmapped: 16777216 heap: 98615296 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:52:56.614160+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 81838080 unmapped: 16777216 heap: 98615296 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634bf727000
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 155 ms_handle_reset con 0x5634bf727000 session 0x5634c0304b40
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634bf539c00
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 155 ms_handle_reset con 0x5634bf539c00 session 0x5634bf53bc20
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634be106400
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 155 ms_handle_reset con 0x5634be106400 session 0x5634c06b2780
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:52:57.614356+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 155 heartbeat osd_stat(store_statfs(0x4fbd68000/0x0/0x4ffc00000, data 0xdf1c79/0xeb1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 81829888 unmapped: 16785408 heap: 98615296 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634be106800
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 155 ms_handle_reset con 0x5634be106800 session 0x5634bfe22000
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634be107400
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:52:58.614675+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 155 ms_handle_reset con 0x5634be107400 session 0x5634c06b0960
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 93536256 unmapped: 5079040 heap: 98615296 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634bf727000
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 155 ms_handle_reset con 0x5634bf727000 session 0x5634c06b01e0
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634bf034400
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:52:59.614889+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1108216 data_alloc: 234881024 data_used: 11628544
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 93528064 unmapped: 5087232 heap: 98615296 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 155 heartbeat osd_stat(store_statfs(0x4fbd68000/0x0/0x4ffc00000, data 0xdf1c79/0xeb1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _renew_subs
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 155 handle_osd_map epochs [156,156], i have 155, src has [1,156]
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 35.016056061s of 35.019523621s, submitted: 1
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:53:00.615117+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 93552640 unmapped: 5062656 heap: 98615296 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 156 handle_osd_map epochs [156,157], i have 156, src has [1,157]
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _renew_subs
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 156 handle_osd_map epochs [157,157], i have 157, src has [1,157]
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 157 ms_handle_reset con 0x5634bf034400 session 0x5634bfdc25a0
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634bf034400
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 157 ms_handle_reset con 0x5634bf034400 session 0x5634bfd5ed20
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634be106400
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 157 ms_handle_reset con 0x5634be106400 session 0x5634bf53ba40
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634be106800
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 157 ms_handle_reset con 0x5634be106800 session 0x5634bf08a3c0
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634be107400
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 157 ms_handle_reset con 0x5634be107400 session 0x5634c06cd860
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:53:01.615301+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 95297536 unmapped: 5488640 heap: 100786176 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:53:02.615627+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 95297536 unmapped: 5488640 heap: 100786176 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 157 heartbeat osd_stat(store_statfs(0x4fb8f3000/0x0/0x4ffc00000, data 0x1264eb8/0x1327000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:53:03.615806+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 95297536 unmapped: 5488640 heap: 100786176 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:53:04.615982+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1151181 data_alloc: 234881024 data_used: 11628544
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 95297536 unmapped: 5488640 heap: 100786176 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _renew_subs
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 157 handle_osd_map epochs [158,158], i have 157, src has [1,158]
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:53:05.616111+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 95313920 unmapped: 5472256 heap: 100786176 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fb8f3000/0x0/0x4ffc00000, data 0x1264eb8/0x1327000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:53:06.616460+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 95313920 unmapped: 5472256 heap: 100786176 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634bf727000
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bf727000 session 0x5634c06e6000
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:53:07.616630+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 95305728 unmapped: 5480448 heap: 100786176 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634bf727000
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634be106400
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:53:08.617006+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fb8f1000/0x0/0x4ffc00000, data 0x1266e8a/0x132a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 95313920 unmapped: 5472256 heap: 100786176 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:53:09.617175+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1172443 data_alloc: 234881024 data_used: 14356480
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 97861632 unmapped: 2924544 heap: 100786176 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:53:10.617463+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 99491840 unmapped: 1294336 heap: 100786176 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:53:11.617633+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 99491840 unmapped: 1294336 heap: 100786176 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fb8f1000/0x0/0x4ffc00000, data 0x1266e8a/0x132a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:53:12.617976+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 99491840 unmapped: 1294336 heap: 100786176 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:53:13.618262+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 99491840 unmapped: 1294336 heap: 100786176 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fb8f1000/0x0/0x4ffc00000, data 0x1266e8a/0x132a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:53:14.618441+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1184603 data_alloc: 234881024 data_used: 16195584
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fb8f1000/0x0/0x4ffc00000, data 0x1266e8a/0x132a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 99491840 unmapped: 1294336 heap: 100786176 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:53:15.618598+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 99491840 unmapped: 1294336 heap: 100786176 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:53:16.618849+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 99491840 unmapped: 1294336 heap: 100786176 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fb8f1000/0x0/0x4ffc00000, data 0x1266e8a/0x132a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:53:17.619022+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 99491840 unmapped: 1294336 heap: 100786176 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:53:18.619227+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 99491840 unmapped: 1294336 heap: 100786176 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:53:19.619379+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1184755 data_alloc: 234881024 data_used: 16199680
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 99491840 unmapped: 1294336 heap: 100786176 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fb8f1000/0x0/0x4ffc00000, data 0x1266e8a/0x132a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 20.278177261s of 20.407505035s, submitted: 45
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:53:20.619495+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 107520000 unmapped: 2703360 heap: 110223360 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:53:21.619648+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 108699648 unmapped: 1523712 heap: 110223360 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:53:22.619778+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 108699648 unmapped: 1523712 heap: 110223360 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:53:23.619907+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 108699648 unmapped: 1523712 heap: 110223360 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:53:24.619996+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1263495 data_alloc: 234881024 data_used: 18071552
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 108699648 unmapped: 1523712 heap: 110223360 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:53:25.620108+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 108699648 unmapped: 1523712 heap: 110223360 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f9fd3000/0x0/0x4ffc00000, data 0x19e5e8a/0x1aa9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x417f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:53:26.620269+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 108699648 unmapped: 1523712 heap: 110223360 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:53:27.620428+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 107077632 unmapped: 3145728 heap: 110223360 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:53:28.620567+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 107077632 unmapped: 3145728 heap: 110223360 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:53:29.620735+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1256551 data_alloc: 234881024 data_used: 18071552
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 107085824 unmapped: 3137536 heap: 110223360 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f9fd0000/0x0/0x4ffc00000, data 0x19e8e8a/0x1aac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x417f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:53:30.620833+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f9fd0000/0x0/0x4ffc00000, data 0x19e8e8a/0x1aac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x417f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 107085824 unmapped: 3137536 heap: 110223360 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:53:31.620948+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 107126784 unmapped: 3096576 heap: 110223360 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:53:32.621109+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 107126784 unmapped: 3096576 heap: 110223360 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f9fd0000/0x0/0x4ffc00000, data 0x19e8e8a/0x1aac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x417f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:53:33.621212+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 107126784 unmapped: 3096576 heap: 110223360 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:53:34.621389+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f9fd0000/0x0/0x4ffc00000, data 0x19e8e8a/0x1aac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x417f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1257615 data_alloc: 234881024 data_used: 18145280
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 107126784 unmapped: 3096576 heap: 110223360 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 14.662779808s of 14.812747955s, submitted: 82
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:53:35.621586+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 107126784 unmapped: 3096576 heap: 110223360 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:53:36.621699+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 107126784 unmapped: 3096576 heap: 110223360 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:53:37.621812+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 107126784 unmapped: 3096576 heap: 110223360 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:53:38.621958+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 107126784 unmapped: 3096576 heap: 110223360 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:53:39.622103+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1257839 data_alloc: 234881024 data_used: 18145280
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 107126784 unmapped: 3096576 heap: 110223360 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f9fcf000/0x0/0x4ffc00000, data 0x19e9e8a/0x1aad000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x417f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:53:40.622203+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 107126784 unmapped: 3096576 heap: 110223360 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:53:41.622333+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 107126784 unmapped: 3096576 heap: 110223360 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:53:42.622501+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 107134976 unmapped: 3088384 heap: 110223360 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:53:43.622652+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634bf038400
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bf038400 session 0x5634c06b2f00
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634bf032800
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bf032800 session 0x5634bf224d20
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634be0fb800
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634be0fb800 session 0x5634bfcb32c0
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 107118592 unmapped: 3104768 heap: 110223360 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634bfa26c00
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bfa26c00 session 0x5634c06b23c0
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634c0688000
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:53:44.622793+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634c0688000 session 0x5634bf53ab40
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1257991 data_alloc: 234881024 data_used: 18673664
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 108404736 unmapped: 1818624 heap: 110223360 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634be0fb800
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634be0fb800 session 0x5634be148780
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634bf032800
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.993793488s of 10.001093864s, submitted: 2
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bf032800 session 0x5634bcfceb40
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634bf038400
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bf038400 session 0x5634bcbf9c20
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634bfa26c00
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bfa26c00 session 0x5634bd20a780
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634c0688400
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:53:45.622904+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634c0688400 session 0x5634be1adc20
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634be0fb800
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634be0fb800 session 0x5634bfd5f0e0
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 108380160 unmapped: 8273920 heap: 116654080 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f977a000/0x0/0x4ffc00000, data 0x1e2ee8a/0x1ef2000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:53:46.623016+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 108380160 unmapped: 8273920 heap: 116654080 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:53:47.623110+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 108380160 unmapped: 8273920 heap: 116654080 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634bf032800
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bf032800 session 0x5634bf7c41e0
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:53:48.623250+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 108380160 unmapped: 8273920 heap: 116654080 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:53:49.623394+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634bf038400
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bf038400 session 0x5634bd4cfe00
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1298504 data_alloc: 234881024 data_used: 18673664
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 108363776 unmapped: 8290304 heap: 116654080 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634bfa26c00
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bfa26c00 session 0x5634bf7be1e0
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:53:50.623565+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634c0688800
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634c0688800 session 0x5634bf4421e0
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 107683840 unmapped: 8970240 heap: 116654080 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634be0fb800
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634bf032800
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:53:51.623716+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 107945984 unmapped: 8708096 heap: 116654080 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f9779000/0x0/0x4ffc00000, data 0x1e2ee99/0x1ef3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:53:52.623887+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 110411776 unmapped: 6242304 heap: 116654080 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f9779000/0x0/0x4ffc00000, data 0x1e2ee99/0x1ef3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:53:53.624378+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 110411776 unmapped: 6242304 heap: 116654080 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:53:54.624519+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1323629 data_alloc: 234881024 data_used: 21921792
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 110411776 unmapped: 6242304 heap: 116654080 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:53:55.624676+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 110411776 unmapped: 6242304 heap: 116654080 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:53:56.624859+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 110411776 unmapped: 6242304 heap: 116654080 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:53:57.625004+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 110411776 unmapped: 6242304 heap: 116654080 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:53:58.625255+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f9779000/0x0/0x4ffc00000, data 0x1e2ee99/0x1ef3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 110419968 unmapped: 6234112 heap: 116654080 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:53:59.625382+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1323629 data_alloc: 234881024 data_used: 21921792
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 110419968 unmapped: 6234112 heap: 116654080 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f9779000/0x0/0x4ffc00000, data 0x1e2ee99/0x1ef3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:54:00.630225+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 110452736 unmapped: 6201344 heap: 116654080 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:54:01.630455+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 110452736 unmapped: 6201344 heap: 116654080 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:54:02.630653+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 110452736 unmapped: 6201344 heap: 116654080 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:54:03.630775+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 18.279289246s of 18.403636932s, submitted: 31
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 113623040 unmapped: 4358144 heap: 117981184 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:54:04.630943+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1389915 data_alloc: 234881024 data_used: 22224896
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f8f55000/0x0/0x4ffc00000, data 0x264ae99/0x270f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [0,0,1,1])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 113950720 unmapped: 4030464 heap: 117981184 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:54:05.631125+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 114032640 unmapped: 3948544 heap: 117981184 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:54:06.631303+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f8f4a000/0x0/0x4ffc00000, data 0x2654e99/0x2719000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 114040832 unmapped: 3940352 heap: 117981184 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:54:07.631450+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 112484352 unmapped: 5496832 heap: 117981184 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:54:08.631621+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 112484352 unmapped: 5496832 heap: 117981184 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:54:09.631790+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1390001 data_alloc: 234881024 data_used: 22290432
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 112484352 unmapped: 5496832 heap: 117981184 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f8f52000/0x0/0x4ffc00000, data 0x2654e99/0x2719000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:54:10.631985+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 112484352 unmapped: 5496832 heap: 117981184 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:54:11.632138+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634be0fb800 session 0x5634c02cbe00
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bf032800 session 0x5634bf08a1e0
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634bf038400
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 110690304 unmapped: 7290880 heap: 117981184 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bf038400 session 0x5634bf18c780
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:54:12.632290+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 110452736 unmapped: 7528448 heap: 117981184 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:54:13.632474+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 110452736 unmapped: 7528448 heap: 117981184 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:54:14.632622+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1270311 data_alloc: 234881024 data_used: 18673664
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 110452736 unmapped: 7528448 heap: 117981184 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:54:15.632768+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 110452736 unmapped: 7528448 heap: 117981184 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f9bbf000/0x0/0x4ffc00000, data 0x19e9e8a/0x1aad000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:54:16.633166+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 110452736 unmapped: 7528448 heap: 117981184 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:54:17.633387+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.905948639s of 14.233164787s, submitted: 109
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bf727000 session 0x5634c06b3e00
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634be106400 session 0x5634bcfe1c20
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 110460928 unmapped: 7520256 heap: 117981184 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634be0fb800
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:54:18.633732+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634be0fb800 session 0x5634bfe2f680
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa7b1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 107241472 unmapped: 10739712 heap: 117981184 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:54:19.633884+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1136979 data_alloc: 234881024 data_used: 12165120
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 107241472 unmapped: 10739712 heap: 117981184 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:54:20.634106+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 106823680 unmapped: 11157504 heap: 117981184 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:54:21.634346+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 106823680 unmapped: 11157504 heap: 117981184 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:54:22.634507+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 106823680 unmapped: 11157504 heap: 117981184 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:54:23.634654+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 106823680 unmapped: 11157504 heap: 117981184 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa7b1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:54:24.634805+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1136979 data_alloc: 234881024 data_used: 12165120
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 106823680 unmapped: 11157504 heap: 117981184 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:54:25.634973+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa7b1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 106823680 unmapped: 11157504 heap: 117981184 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:54:26.635099+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 106823680 unmapped: 11157504 heap: 117981184 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:54:27.635224+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 106823680 unmapped: 11157504 heap: 117981184 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:54:28.635453+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 106823680 unmapped: 11157504 heap: 117981184 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:54:29.818128+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1136979 data_alloc: 234881024 data_used: 12165120
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 106823680 unmapped: 11157504 heap: 117981184 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:54:30.818270+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 106823680 unmapped: 11157504 heap: 117981184 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:54:31.818443+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa7b1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 106823680 unmapped: 11157504 heap: 117981184 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:54:32.818616+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 106823680 unmapped: 11157504 heap: 117981184 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:54:33.818732+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 106823680 unmapped: 11157504 heap: 117981184 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:54:34.818845+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1136979 data_alloc: 234881024 data_used: 12165120
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 106823680 unmapped: 11157504 heap: 117981184 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa7b1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:54:35.818983+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 106823680 unmapped: 11157504 heap: 117981184 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:54:36.819112+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa7b1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 106823680 unmapped: 11157504 heap: 117981184 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa7b1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:54:37.819246+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 106823680 unmapped: 11157504 heap: 117981184 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:54:38.819495+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 106823680 unmapped: 11157504 heap: 117981184 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:54:39.819621+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa7b1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1136979 data_alloc: 234881024 data_used: 12165120
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 106086400 unmapped: 11894784 heap: 117981184 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:54:40.819764+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 106086400 unmapped: 11894784 heap: 117981184 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:54:41.819890+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 106086400 unmapped: 11894784 heap: 117981184 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:54:42.819989+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa7b1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 106086400 unmapped: 11894784 heap: 117981184 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:54:43.820127+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 106086400 unmapped: 11894784 heap: 117981184 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:54:44.820241+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1136979 data_alloc: 234881024 data_used: 12165120
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 106086400 unmapped: 11894784 heap: 117981184 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:54:45.820363+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa7b1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 106086400 unmapped: 11894784 heap: 117981184 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:54:46.820515+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 106086400 unmapped: 11894784 heap: 117981184 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:54:47.820694+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 106086400 unmapped: 11894784 heap: 117981184 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:54:48.820900+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 106086400 unmapped: 11894784 heap: 117981184 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:54:49.821038+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1136979 data_alloc: 234881024 data_used: 12165120
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 106086400 unmapped: 11894784 heap: 117981184 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa7b1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:54:50.821151+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 106086400 unmapped: 11894784 heap: 117981184 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:54:51.821321+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634bf032800
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bf032800 session 0x5634bf443860
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634bf038400
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bf038400 session 0x5634bf443a40
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634bf727000
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bf727000 session 0x5634bf442d20
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634bfa26c00
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bfa26c00 session 0x5634bd462f00
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634bfa26c00
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 34.204719543s of 34.300907135s, submitted: 31
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bfa26c00 session 0x5634bd462b40
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 107069440 unmapped: 23027712 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634be0fb800
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634be0fb800 session 0x5634bcfce5a0
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634bf032800
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bf032800 session 0x5634bcfcf4a0
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634bf038400
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bf038400 session 0x5634bcfce780
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634bf727000
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bf727000 session 0x5634c06d7e00
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:54:52.821509+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 106790912 unmapped: 23306240 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:54:53.821651+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 106790912 unmapped: 23306240 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:54:54.821874+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634bf727000
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bf727000 session 0x5634bf7c0d20
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225465 data_alloc: 234881024 data_used: 12165120
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 106790912 unmapped: 23306240 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634be0fb800
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634be0fb800 session 0x5634bf7c1680
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:54:55.822023+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634bf032800
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bf032800 session 0x5634bf7c0000
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 106790912 unmapped: 23306240 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634bf038400
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f9d28000/0x0/0x4ffc00000, data 0x187eefc/0x1944000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bf038400 session 0x5634bf7c03c0
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:54:56.822228+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 106807296 unmapped: 23289856 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634bfa26c00
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634c0688c00
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:54:57.822423+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 106414080 unmapped: 23683072 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:54:58.822635+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 111280128 unmapped: 18817024 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:54:59.822769+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1289986 data_alloc: 234881024 data_used: 21778432
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 111288320 unmapped: 18808832 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:55:00.822957+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 111288320 unmapped: 18808832 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:55:01.823115+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f9d28000/0x0/0x4ffc00000, data 0x187eefc/0x1944000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 111288320 unmapped: 18808832 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:55:02.823236+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 111288320 unmapped: 18808832 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:55:03.823366+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 111288320 unmapped: 18808832 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:55:04.823471+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1289986 data_alloc: 234881024 data_used: 21778432
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 111288320 unmapped: 18808832 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:55:05.823614+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 111296512 unmapped: 18800640 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:55:06.823881+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 111296512 unmapped: 18800640 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:55:07.824025+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f9d28000/0x0/0x4ffc00000, data 0x187eefc/0x1944000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 111296512 unmapped: 18800640 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:55:08.824202+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 16.452789307s of 16.602790833s, submitted: 56
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 116801536 unmapped: 13295616 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:55:09.824360+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1373192 data_alloc: 234881024 data_used: 22908928
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 119529472 unmapped: 10567680 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:55:10.824533+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f949c000/0x0/0x4ffc00000, data 0x2101efc/0x21c7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 119529472 unmapped: 10567680 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:55:11.824701+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 119529472 unmapped: 10567680 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:55:12.824907+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 119529472 unmapped: 10567680 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:55:13.825035+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f949c000/0x0/0x4ffc00000, data 0x2101efc/0x21c7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 119529472 unmapped: 10567680 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f949c000/0x0/0x4ffc00000, data 0x2101efc/0x21c7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:55:14.825178+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1373648 data_alloc: 234881024 data_used: 22921216
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 119537664 unmapped: 10559488 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:55:15.825326+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 117694464 unmapped: 12402688 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:55:16.825471+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f9484000/0x0/0x4ffc00000, data 0x2122efc/0x21e8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 117694464 unmapped: 12402688 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:55:17.825598+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 117694464 unmapped: 12402688 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:55:18.825775+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 117694464 unmapped: 12402688 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f9484000/0x0/0x4ffc00000, data 0x2122efc/0x21e8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:55:19.825935+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1367704 data_alloc: 234881024 data_used: 22933504
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 117694464 unmapped: 12402688 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:55:20.826104+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 117694464 unmapped: 12402688 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:55:21.826230+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 117702656 unmapped: 12394496 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:55:22.826349+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 117702656 unmapped: 12394496 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:55:23.826507+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 14.744583130s of 15.027527809s, submitted: 96
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f9484000/0x0/0x4ffc00000, data 0x2122efc/0x21e8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 117702656 unmapped: 12394496 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:55:24.826658+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1367784 data_alloc: 234881024 data_used: 22933504
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 117702656 unmapped: 12394496 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:55:25.826820+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 117702656 unmapped: 12394496 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f947e000/0x0/0x4ffc00000, data 0x2128efc/0x21ee000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:55:26.826985+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 117702656 unmapped: 12394496 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:55:27.827119+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 117702656 unmapped: 12394496 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:55:28.827330+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 117710848 unmapped: 12386304 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:55:29.827474+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f947e000/0x0/0x4ffc00000, data 0x2128efc/0x21ee000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1367784 data_alloc: 234881024 data_used: 22933504
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 117710848 unmapped: 12386304 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:55:30.827623+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 117710848 unmapped: 12386304 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:55:31.827752+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 117727232 unmapped: 12369920 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:55:32.827896+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 117727232 unmapped: 12369920 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f947b000/0x0/0x4ffc00000, data 0x212befc/0x21f1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:55:33.828035+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 117727232 unmapped: 12369920 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:55:34.828189+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1369392 data_alloc: 234881024 data_used: 23019520
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 117727232 unmapped: 12369920 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:55:35.828564+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 117727232 unmapped: 12369920 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f947b000/0x0/0x4ffc00000, data 0x212befc/0x21f1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:55:36.828690+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.042746544s of 13.056042671s, submitted: 4
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 117751808 unmapped: 12345344 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:55:37.828823+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 117751808 unmapped: 12345344 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:55:38.828969+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bfa26c00 session 0x5634bf1da1e0
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634c0688c00 session 0x5634bd4cef00
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634bfa26c00
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bfa26c00 session 0x5634bf4c7680
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 109109248 unmapped: 20987904 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:55:39.829100+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1149888 data_alloc: 234881024 data_used: 12165120
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 109109248 unmapped: 20987904 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:55:40.829266+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa7b0000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 109109248 unmapped: 20987904 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:55:41.829486+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 109109248 unmapped: 20987904 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:55:42.829720+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 109109248 unmapped: 20987904 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:55:43.829866+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 109109248 unmapped: 20987904 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:55:44.830057+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa7b0000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1149888 data_alloc: 234881024 data_used: 12165120
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 109109248 unmapped: 20987904 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:55:45.830201+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 109109248 unmapped: 20987904 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:55:46.830328+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 109109248 unmapped: 20987904 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:55:47.830468+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 109109248 unmapped: 20987904 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:55:48.830693+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 109109248 unmapped: 20987904 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:55:49.830850+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa7b0000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1149888 data_alloc: 234881024 data_used: 12165120
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 109109248 unmapped: 20987904 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:55:50.831034+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 109109248 unmapped: 20987904 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:55:51.831164+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 109109248 unmapped: 20987904 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:55:52.831359+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 109109248 unmapped: 20987904 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:55:53.831548+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa7b0000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 109109248 unmapped: 20987904 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:55:54.831703+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1149888 data_alloc: 234881024 data_used: 12165120
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 109109248 unmapped: 20987904 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:55:55.831854+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 109109248 unmapped: 20987904 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:55:56.832018+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634bf032800
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 20.141063690s of 20.330921173s, submitted: 61
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bf032800 session 0x5634c06b2b40
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634bf038400
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bf038400 session 0x5634c06b23c0
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634bf727000
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bf727000 session 0x5634c06b2d20
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634bf032800
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bf032800 session 0x5634c06b3680
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634bfa26c00
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bfa26c00 session 0x5634bcfcfe00
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 108716032 unmapped: 21381120 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:55:57.832177+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 108716032 unmapped: 21381120 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:55:58.832452+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 108716032 unmapped: 21381120 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:55:59.832611+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa72f000/0x0/0x4ffc00000, data 0xe79e8a/0xf3d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1157691 data_alloc: 234881024 data_used: 12165120
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 108716032 unmapped: 21381120 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:56:00.832884+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634c0688c00
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634c0688c00 session 0x5634bf7be5a0
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 108716032 unmapped: 21381120 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:56:01.833045+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa72f000/0x0/0x4ffc00000, data 0xe79e8a/0xf3d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634c0689800
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634c0689c00
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 108724224 unmapped: 21372928 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:56:02.833222+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1800.1 total, 600.0 interval
                                           Cumulative writes: 8425 writes, 31K keys, 8425 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                           Cumulative WAL: 8425 writes, 2022 syncs, 4.17 writes per sync, written: 0.02 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1516 writes, 4428 keys, 1516 commit groups, 1.0 writes per commit group, ingest: 4.17 MB, 0.01 MB/s
                                           Interval WAL: 1516 writes, 667 syncs, 2.27 writes per sync, written: 0.00 GB, 0.01 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 108732416 unmapped: 21364736 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:56:03.833370+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 108732416 unmapped: 21364736 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:56:04.833624+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1161623 data_alloc: 234881024 data_used: 12693504
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 108732416 unmapped: 21364736 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:56:05.833809+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa72f000/0x0/0x4ffc00000, data 0xe79e8a/0xf3d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 108732416 unmapped: 21364736 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:56:06.834003+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 108732416 unmapped: 21364736 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:56:07.834181+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa72f000/0x0/0x4ffc00000, data 0xe79e8a/0xf3d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 108732416 unmapped: 21364736 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:56:08.834482+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 108732416 unmapped: 21364736 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:56:09.834697+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa72f000/0x0/0x4ffc00000, data 0xe79e8a/0xf3d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1161623 data_alloc: 234881024 data_used: 12693504
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 108732416 unmapped: 21364736 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:56:10.834881+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa72f000/0x0/0x4ffc00000, data 0xe79e8a/0xf3d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 108732416 unmapped: 21364736 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:56:11.835059+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 108732416 unmapped: 21364736 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:56:12.835461+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 108732416 unmapped: 21364736 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:56:13.835722+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa72f000/0x0/0x4ffc00000, data 0xe79e8a/0xf3d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 17.318338394s of 17.361791611s, submitted: 13
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:56:14.835901+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 109363200 unmapped: 20733952 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1213389 data_alloc: 234881024 data_used: 12693504
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:56:15.836033+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 108945408 unmapped: 21151744 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:56:16.836180+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 109002752 unmapped: 21094400 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:56:17.836336+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 109338624 unmapped: 20758528 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:56:18.836530+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 109346816 unmapped: 20750336 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:56:19.836671+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 109346816 unmapped: 20750336 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa051000/0x0/0x4ffc00000, data 0x1557e8a/0x161b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1220401 data_alloc: 234881024 data_used: 12890112
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:56:20.836826+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 109330432 unmapped: 20766720 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:56:21.836985+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 109330432 unmapped: 20766720 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:56:22.837199+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 109338624 unmapped: 20758528 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa04e000/0x0/0x4ffc00000, data 0x155ae8a/0x161e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:56:23.837335+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 109273088 unmapped: 20824064 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:56:24.837502+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 109273088 unmapped: 20824064 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1218617 data_alloc: 234881024 data_used: 12890112
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:56:25.837651+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 109273088 unmapped: 20824064 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:56:26.837813+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 109273088 unmapped: 20824064 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa04e000/0x0/0x4ffc00000, data 0x155ae8a/0x161e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:56:27.837965+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 109273088 unmapped: 20824064 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:56:28.838191+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 109273088 unmapped: 20824064 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 14.672043800s of 14.786133766s, submitted: 55
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa04e000/0x0/0x4ffc00000, data 0x155ae8a/0x161e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:56:29.838333+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 109273088 unmapped: 20824064 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1218857 data_alloc: 234881024 data_used: 12890112
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:56:30.838468+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 109281280 unmapped: 20815872 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:56:31.839035+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 109281280 unmapped: 20815872 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:56:32.839989+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 109281280 unmapped: 20815872 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634beabcc00
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634beabcc00 session 0x5634bf08b860
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634c23a3400
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634c23a3400 session 0x5634bf08a780
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634beabcc00
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634beabcc00 session 0x5634bf08ab40
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634bfa26c00
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bfa26c00 session 0x5634bf08bc20
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634bf032800
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bf032800 session 0x5634bf08ba40
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634c0689000
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634c0689000 session 0x5634bf53b4a0
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634c0688c00
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634c0688c00 session 0x5634bf53ab40
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634beabcc00
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:56:33.840153+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 109297664 unmapped: 20799488 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634beabcc00 session 0x5634bf53a1e0
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634bf032800
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bf032800 session 0x5634bf53ad20
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:56:34.840953+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 109297664 unmapped: 20799488 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f9bc9000/0x0/0x4ffc00000, data 0x19dee9a/0x1aa3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1253835 data_alloc: 234881024 data_used: 12890112
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:56:35.841168+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 109297664 unmapped: 20799488 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f9bc9000/0x0/0x4ffc00000, data 0x19dee9a/0x1aa3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:56:36.841467+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 109297664 unmapped: 20799488 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:56:37.841629+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 109297664 unmapped: 20799488 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:56:38.841849+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 109297664 unmapped: 20799488 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f9bc9000/0x0/0x4ffc00000, data 0x19dee9a/0x1aa3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:56:39.841994+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 109297664 unmapped: 20799488 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634bfa26c00
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bfa26c00 session 0x5634bf53af00
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1253835 data_alloc: 234881024 data_used: 12890112
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634c0689000
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634c0689000 session 0x5634bf53bc20
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:56:40.842115+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 109297664 unmapped: 20799488 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634c23a3c00
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634c23a3c00 session 0x5634bf53ba40
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634beabcc00
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.216509819s of 12.251233101s, submitted: 6
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634beabcc00 session 0x5634c06b0f00
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:56:41.842323+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 109617152 unmapped: 20480000 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634bf032800
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634bfa26c00
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:56:42.843022+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 109625344 unmapped: 20471808 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:56:43.843161+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 111108096 unmapped: 18989056 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:56:44.843515+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f9ba4000/0x0/0x4ffc00000, data 0x1a02eaa/0x1ac8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 111697920 unmapped: 18399232 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1291410 data_alloc: 234881024 data_used: 17616896
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:56:45.843652+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 111697920 unmapped: 18399232 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f9ba4000/0x0/0x4ffc00000, data 0x1a02eaa/0x1ac8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:56:46.843819+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 111697920 unmapped: 18399232 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f9ba4000/0x0/0x4ffc00000, data 0x1a02eaa/0x1ac8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:56:47.843935+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 111697920 unmapped: 18399232 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:56:48.844328+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 111697920 unmapped: 18399232 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:56:49.844504+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 111697920 unmapped: 18399232 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f9ba3000/0x0/0x4ffc00000, data 0x1a02eaa/0x1ac8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:56:50.844690+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1291578 data_alloc: 234881024 data_used: 17616896
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 111697920 unmapped: 18399232 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:56:51.844822+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 111697920 unmapped: 18399232 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:56:52.845102+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 111697920 unmapped: 18399232 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:56:53.845229+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 111697920 unmapped: 18399232 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.640722275s of 12.708586693s, submitted: 10
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f9ba3000/0x0/0x4ffc00000, data 0x1a02eaa/0x1ac8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:56:54.845352+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 111271936 unmapped: 18825216 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:56:55.845463+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1298216 data_alloc: 234881024 data_used: 17735680
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 111198208 unmapped: 18898944 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:56:56.845649+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 111198208 unmapped: 18898944 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f9b5e000/0x0/0x4ffc00000, data 0x1a48eaa/0x1b0e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:56:57.845772+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 111198208 unmapped: 18898944 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:56:58.846030+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 111198208 unmapped: 18898944 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:56:59.846160+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 111198208 unmapped: 18898944 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:57:00.846283+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1300012 data_alloc: 234881024 data_used: 17735680
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 111198208 unmapped: 18898944 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f9b5e000/0x0/0x4ffc00000, data 0x1a48eaa/0x1b0e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:57:01.846551+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 111198208 unmapped: 18898944 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bf032800 session 0x5634bfe221e0
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bfa26c00 session 0x5634bdc7c000
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:57:02.846675+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634c0689000
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 109428736 unmapped: 20668416 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634c0689000 session 0x5634c029ad20
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:57:03.846816+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 109436928 unmapped: 20660224 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa04d000/0x0/0x4ffc00000, data 0x155be8a/0x161f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:57:04.847049+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 109436928 unmapped: 20660224 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa04d000/0x0/0x4ffc00000, data 0x155be8a/0x161f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:57:05.847363+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1224813 data_alloc: 234881024 data_used: 12890112
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 109436928 unmapped: 20660224 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:57:06.847609+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.472726822s of 12.585700035s, submitted: 33
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634c0689800 session 0x5634c06b0b40
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634c0689c00 session 0x5634bf7c1680
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 109436928 unmapped: 20660224 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634c0689800
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634c0689800 session 0x5634bd20b2c0
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:57:07.847810+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa04d000/0x0/0x4ffc00000, data 0x155be8a/0x161f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 108650496 unmapped: 21446656 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:57:08.848042+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 108650496 unmapped: 21446656 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:57:09.848234+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 108650496 unmapped: 21446656 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:57:10.848438+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1162905 data_alloc: 234881024 data_used: 12165120
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 107560960 unmapped: 22536192 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:57:11.848601+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 107560960 unmapped: 22536192 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:57:12.848729+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 107560960 unmapped: 22536192 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:57:13.848944+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa7b1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 107560960 unmapped: 22536192 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:57:14.849227+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 107560960 unmapped: 22536192 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:57:15.849474+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1162905 data_alloc: 234881024 data_used: 12165120
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 107560960 unmapped: 22536192 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:57:16.850009+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 107560960 unmapped: 22536192 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:57:17.850238+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 107560960 unmapped: 22536192 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:57:18.850447+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 107560960 unmapped: 22536192 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:57:19.850719+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa7b1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 107560960 unmapped: 22536192 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:57:20.850934+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1162905 data_alloc: 234881024 data_used: 12165120
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 107560960 unmapped: 22536192 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 14.771172523s of 14.867496490s, submitted: 30
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:57:21.851078+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 107601920 unmapped: 22495232 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:57:22.851248+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 108675072 unmapped: 21422080 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:57:23.851483+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 108904448 unmapped: 21192704 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:57:24.851699+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 109068288 unmapped: 21028864 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa7b1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:57:25.851914+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1162905 data_alloc: 234881024 data_used: 12165120
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 109076480 unmapped: 21020672 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:57:26.852088+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 109076480 unmapped: 21020672 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:57:27.852204+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 109076480 unmapped: 21020672 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634beabcc00
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634beabcc00 session 0x5634bf08b2c0
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634bf032800
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bf032800 session 0x5634bf7c05a0
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634bfa26c00
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bfa26c00 session 0x5634bd463e00
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634bfa26c00
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bfa26c00 session 0x5634c06b3680
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634beabcc00
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634beabcc00 session 0x5634bef0d680
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:57:28.852375+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 109731840 unmapped: 24567808 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:57:29.852577+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa14c000/0x0/0x4ffc00000, data 0x145ce8a/0x1520000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 109740032 unmapped: 24559616 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:57:30.852724+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1212735 data_alloc: 234881024 data_used: 12165120
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 109740032 unmapped: 24559616 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:57:31.852889+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 109740032 unmapped: 24559616 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:57:32.853044+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 109740032 unmapped: 24559616 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa14c000/0x0/0x4ffc00000, data 0x145ce8a/0x1520000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:57:33.853232+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634bf032800
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 109764608 unmapped: 24535040 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bf032800 session 0x5634bcfe34a0
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:57:34.853416+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634c0689800
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634c0689c00
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 109780992 unmapped: 24518656 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:57:35.853677+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1212735 data_alloc: 234881024 data_used: 12165120
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 110157824 unmapped: 24141824 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:57:36.853808+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 112607232 unmapped: 21692416 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:57:37.853937+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa14c000/0x0/0x4ffc00000, data 0x145ce8a/0x1520000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 112640000 unmapped: 21659648 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:57:38.854107+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 112640000 unmapped: 21659648 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa14c000/0x0/0x4ffc00000, data 0x145ce8a/0x1520000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:57:39.854226+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 112672768 unmapped: 21626880 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:57:40.854370+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1257575 data_alloc: 234881024 data_used: 18759680
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 112672768 unmapped: 21626880 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:57:41.854459+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 112672768 unmapped: 21626880 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:57:42.854601+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 112672768 unmapped: 21626880 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:57:43.854719+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa14c000/0x0/0x4ffc00000, data 0x145ce8a/0x1520000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 112672768 unmapped: 21626880 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:57:44.855065+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 112672768 unmapped: 21626880 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa14c000/0x0/0x4ffc00000, data 0x145ce8a/0x1520000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:57:45.855181+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa14c000/0x0/0x4ffc00000, data 0x145ce8a/0x1520000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1257575 data_alloc: 234881024 data_used: 18759680
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 112680960 unmapped: 21618688 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa14c000/0x0/0x4ffc00000, data 0x145ce8a/0x1520000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:57:46.855427+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 24.294692993s of 25.236562729s, submitted: 260
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 119414784 unmapped: 14884864 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:57:47.855616+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121348096 unmapped: 12951552 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:57:48.855785+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121348096 unmapped: 12951552 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:57:49.855978+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f9abe000/0x0/0x4ffc00000, data 0x1adce8a/0x1ba0000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121380864 unmapped: 12918784 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:57:50.856133+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1329075 data_alloc: 234881024 data_used: 20541440
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121380864 unmapped: 12918784 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f9abe000/0x0/0x4ffc00000, data 0x1adce8a/0x1ba0000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:57:51.856276+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121421824 unmapped: 12877824 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:57:52.856432+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121421824 unmapped: 12877824 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:57:53.856669+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121421824 unmapped: 12877824 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:57:54.856806+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f9abe000/0x0/0x4ffc00000, data 0x1adce8a/0x1ba0000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121454592 unmapped: 12845056 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:57:55.856954+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1329075 data_alloc: 234881024 data_used: 20541440
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121454592 unmapped: 12845056 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:57:56.857090+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121454592 unmapped: 12845056 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:57:57.857254+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121462784 unmapped: 12836864 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:57:58.857444+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f9abe000/0x0/0x4ffc00000, data 0x1adce8a/0x1ba0000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121462784 unmapped: 12836864 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:57:59.857569+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bd23a800 session 0x5634bcfe05a0
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634c0689000
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121462784 unmapped: 12836864 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:58:00.857703+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1329379 data_alloc: 234881024 data_used: 20549632
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121462784 unmapped: 12836864 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:58:01.857906+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121470976 unmapped: 12828672 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:58:02.858097+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121470976 unmapped: 12828672 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f9abe000/0x0/0x4ffc00000, data 0x1adce8a/0x1ba0000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:58:03.858262+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f9abe000/0x0/0x4ffc00000, data 0x1adce8a/0x1ba0000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121479168 unmapped: 12820480 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:58:04.858470+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f9abe000/0x0/0x4ffc00000, data 0x1adce8a/0x1ba0000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121479168 unmapped: 12820480 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:58:05.858607+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1329379 data_alloc: 234881024 data_used: 20549632
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121479168 unmapped: 12820480 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:58:06.858780+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121479168 unmapped: 12820480 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:58:07.858974+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121479168 unmapped: 12820480 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:58:08.859166+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f9abe000/0x0/0x4ffc00000, data 0x1adce8a/0x1ba0000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121479168 unmapped: 12820480 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:58:09.859344+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121495552 unmapped: 12804096 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:58:10.859484+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f9abe000/0x0/0x4ffc00000, data 0x1adce8a/0x1ba0000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1330139 data_alloc: 234881024 data_used: 20570112
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121495552 unmapped: 12804096 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:58:11.859622+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121495552 unmapped: 12804096 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:58:12.859788+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121495552 unmapped: 12804096 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634c23a3800
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 26.572525024s of 26.743309021s, submitted: 82
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f9abe000/0x0/0x4ffc00000, data 0x1adce8a/0x1ba0000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:58:13.859932+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634c23a3800 session 0x5634c06e6960
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634c23a3000
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634c23a3000 session 0x5634c06b05a0
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634beabcc00
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634beabcc00 session 0x5634bf08a1e0
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634bf032800
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bf032800 session 0x5634bcfcf860
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634bfa26c00
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bfa26c00 session 0x5634bfe23c20
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 119259136 unmapped: 15040512 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:58:14.860061+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 119259136 unmapped: 15040512 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:58:15.860184+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1350053 data_alloc: 234881024 data_used: 20570112
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 119259136 unmapped: 15040512 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:58:16.860507+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f972c000/0x0/0x4ffc00000, data 0x1e7ce8a/0x1f40000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 119259136 unmapped: 15040512 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:58:17.860704+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 119267328 unmapped: 15032320 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:58:18.860877+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 119267328 unmapped: 15032320 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:58:19.861022+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f972a000/0x0/0x4ffc00000, data 0x1e7de8a/0x1f41000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 119267328 unmapped: 15032320 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634c23a3800
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634c23a3800 session 0x5634bfd5e780
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:58:20.861183+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1350693 data_alloc: 234881024 data_used: 20570112
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634c23a3400
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634bf038400
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 119267328 unmapped: 15032320 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:58:21.861336+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 119422976 unmapped: 14876672 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:58:22.861520+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122093568 unmapped: 12206080 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:58:23.861627+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122159104 unmapped: 12140544 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:58:24.861817+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122159104 unmapped: 12140544 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:58:25.861993+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f972a000/0x0/0x4ffc00000, data 0x1e7de8a/0x1f41000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1376837 data_alloc: 234881024 data_used: 24346624
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122159104 unmapped: 12140544 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:58:26.862146+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122159104 unmapped: 12140544 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:58:27.862825+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122159104 unmapped: 12140544 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:58:28.862990+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122159104 unmapped: 12140544 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:58:29.863136+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122159104 unmapped: 12140544 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:58:30.863234+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1376837 data_alloc: 234881024 data_used: 24346624
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f972a000/0x0/0x4ffc00000, data 0x1e7de8a/0x1f41000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122159104 unmapped: 12140544 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:58:31.863385+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122159104 unmapped: 12140544 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 18.929925919s of 18.974597931s, submitted: 10
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:58:32.863606+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 125714432 unmapped: 8585216 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f9393000/0x0/0x4ffc00000, data 0x220de8a/0x22d1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:58:33.863778+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 125214720 unmapped: 9084928 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:58:34.863870+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 125214720 unmapped: 9084928 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:58:35.864070+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1412031 data_alloc: 234881024 data_used: 24842240
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 125214720 unmapped: 9084928 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:58:36.864214+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f9390000/0x0/0x4ffc00000, data 0x2218e8a/0x22dc000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 125247488 unmapped: 9052160 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:58:37.864355+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 125247488 unmapped: 9052160 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:58:38.864613+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 125247488 unmapped: 9052160 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:58:39.864765+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 125247488 unmapped: 9052160 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:58:40.864903+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1412047 data_alloc: 234881024 data_used: 24842240
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 125255680 unmapped: 9043968 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:58:41.865090+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f9390000/0x0/0x4ffc00000, data 0x2218e8a/0x22dc000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 125255680 unmapped: 9043968 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:58:42.865260+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 125255680 unmapped: 9043968 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:58:43.865457+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 125255680 unmapped: 9043968 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:58:44.865594+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 125255680 unmapped: 9043968 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:58:45.865740+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1412047 data_alloc: 234881024 data_used: 24842240
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 125263872 unmapped: 9035776 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f9390000/0x0/0x4ffc00000, data 0x2218e8a/0x22dc000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:58:46.865892+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 125263872 unmapped: 9035776 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:58:47.866111+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 125263872 unmapped: 9035776 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:58:48.866479+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 16.664098740s of 16.812852859s, submitted: 44
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 125419520 unmapped: 8880128 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:58:49.866626+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 125419520 unmapped: 8880128 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:58:50.866787+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1413159 data_alloc: 234881024 data_used: 24825856
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 125427712 unmapped: 8871936 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:58:51.866982+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f9390000/0x0/0x4ffc00000, data 0x2218e8a/0x22dc000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 125427712 unmapped: 8871936 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:58:52.867108+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 125427712 unmapped: 8871936 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f9390000/0x0/0x4ffc00000, data 0x2218e8a/0x22dc000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:58:53.867297+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 125427712 unmapped: 8871936 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:58:54.867465+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 125427712 unmapped: 8871936 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:58:55.867604+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1413159 data_alloc: 234881024 data_used: 24825856
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 125427712 unmapped: 8871936 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:58:56.867734+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 125435904 unmapped: 8863744 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f9390000/0x0/0x4ffc00000, data 0x2218e8a/0x22dc000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:58:57.867930+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 125444096 unmapped: 8855552 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:58:58.868127+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 125444096 unmapped: 8855552 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:58:59.868338+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 125444096 unmapped: 8855552 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:59:00.868470+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1413159 data_alloc: 234881024 data_used: 24825856
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 125444096 unmapped: 8855552 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:59:01.868627+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 125452288 unmapped: 8847360 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:59:02.868775+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f9390000/0x0/0x4ffc00000, data 0x2218e8a/0x22dc000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 125452288 unmapped: 8847360 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:59:03.868921+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 125452288 unmapped: 8847360 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f9390000/0x0/0x4ffc00000, data 0x2218e8a/0x22dc000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:59:04.869121+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 15.351060867s of 15.361025810s, submitted: 14
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 125509632 unmapped: 8790016 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:59:05.869313+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1411143 data_alloc: 234881024 data_used: 24825856
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 125509632 unmapped: 8790016 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:59:06.869500+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f9390000/0x0/0x4ffc00000, data 0x2218e8a/0x22dc000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 125517824 unmapped: 8781824 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:59:07.869700+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 125517824 unmapped: 8781824 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:59:08.869916+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 125517824 unmapped: 8781824 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:59:09.870099+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 125517824 unmapped: 8781824 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:59:10.870301+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1411311 data_alloc: 234881024 data_used: 24825856
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 125517824 unmapped: 8781824 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f9390000/0x0/0x4ffc00000, data 0x2218e8a/0x22dc000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:59:11.870480+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f9390000/0x0/0x4ffc00000, data 0x2218e8a/0x22dc000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 125517824 unmapped: 8781824 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:59:12.870626+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 125526016 unmapped: 8773632 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:59:13.870789+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 125526016 unmapped: 8773632 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:59:14.870923+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 125526016 unmapped: 8773632 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:59:15.871059+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: mgrc ms_handle_reset ms_handle_reset con 0x5634bddcfc00
Nov 24 10:15:34 compute-1 ceph-osd[77497]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/3769522832
Nov 24 10:15:34 compute-1 ceph-osd[77497]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/3769522832,v1:192.168.122.100:6801/3769522832]
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: get_auth_request con 0x5634c0688c00 auth_method 0
Nov 24 10:15:34 compute-1 ceph-osd[77497]: mgrc handle_mgr_configure stats_period=5
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1411311 data_alloc: 234881024 data_used: 24825856
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 125607936 unmapped: 8691712 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:59:16.871225+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bd1e9000 session 0x5634bf4570e0
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634bf538400
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bf538800 session 0x5634bfa2bc20
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634bd1e9000
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f9390000/0x0/0x4ffc00000, data 0x2218e8a/0x22dc000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 125607936 unmapped: 8691712 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:59:17.871390+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 125607936 unmapped: 8691712 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:59:18.871554+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 125607936 unmapped: 8691712 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f9390000/0x0/0x4ffc00000, data 0x2218e8a/0x22dc000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:59:19.871724+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 125607936 unmapped: 8691712 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:59:20.871873+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1411311 data_alloc: 234881024 data_used: 24825856
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f9390000/0x0/0x4ffc00000, data 0x2218e8a/0x22dc000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 125616128 unmapped: 8683520 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:59:21.872068+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 125616128 unmapped: 8683520 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:59:22.872227+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 125616128 unmapped: 8683520 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:59:23.872366+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 125624320 unmapped: 8675328 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:59:24.873109+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634c23a3400 session 0x5634c06e6d20
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bf038400 session 0x5634bf225680
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 125624320 unmapped: 8675328 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:59:25.873632+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634bf032800
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 21.096628189s of 21.114942551s, submitted: 5
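
The _kv_sync_thread line is a utilization report: over a 21.1 s window the sync thread sat idle for 21.097 s and committed 5 transactions, so it was busy well under 0.1% of the time. The arithmetic, straight from the line:

    idle, window, submitted = 21.096628189, 21.114942551, 5
    print(f"busy {(window - idle) / window:.3%}, "
          f"{submitted / window:.2f} commits/s")  # ~0.087% busy, ~0.24 commits/s
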
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1327619 data_alloc: 234881024 data_used: 20619264
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122904576 unmapped: 11395072 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f9390000/0x0/0x4ffc00000, data 0x2218e8a/0x22dc000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:59:26.873825+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bf032800 session 0x5634bf82ed20
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122904576 unmapped: 11395072 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:59:27.873977+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122904576 unmapped: 11395072 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:59:28.874258+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f9acb000/0x0/0x4ffc00000, data 0x1adde8a/0x1ba1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122904576 unmapped: 11395072 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:59:29.874426+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122904576 unmapped: 11395072 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:59:30.874579+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1327619 data_alloc: 234881024 data_used: 20619264
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122904576 unmapped: 11395072 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:59:31.874821+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122904576 unmapped: 11395072 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:59:32.874983+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122904576 unmapped: 11395072 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:59:33.875139+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634c0689800 session 0x5634bcfe32c0
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634c0689c00 session 0x5634be1ad680
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634bf032800
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bf032800 session 0x5634bd20b680
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 117669888 unmapped: 16629760 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:59:34.875335+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f9acb000/0x0/0x4ffc00000, data 0x1adde8a/0x1ba1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 117669888 unmapped: 16629760 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:59:35.875483+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1176425 data_alloc: 234881024 data_used: 12165120
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa7b1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
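
Across successive heartbeats the statfs numbers move: available grows from 0x4f9390000 through 0x4f9acb000 to 0x4fa7b1000 while data stored falls from 0x2218e8a to 0xdf7e8a, consistent with object data being removed between reports. The two deltas agree exactly:

    before = {"avail": 0x4f9390000, "stored": 0x2218e8a}  # earlier heartbeat
    after  = {"avail": 0x4fa7b1000, "stored": 0xdf7e8a}   # this heartbeat
    freed   = after["avail"] - before["avail"]
    removed = before["stored"] - after["stored"]
    assert freed == removed == 0x1421000                  # 21106688 bytes
    print(f"{freed / 2**20:.1f} MiB freed and removed")   # ~20.1 MiB
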
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 117669888 unmapped: 16629760 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:59:36.875644+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 117669888 unmapped: 16629760 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:59:37.875814+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 117669888 unmapped: 16629760 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:59:38.876033+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 117669888 unmapped: 16629760 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:59:39.876181+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 117669888 unmapped: 16629760 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:59:40.876335+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1176425 data_alloc: 234881024 data_used: 12165120
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 117669888 unmapped: 16629760 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:59:41.876604+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa7b1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 117669888 unmapped: 16629760 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:59:42.876815+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 117669888 unmapped: 16629760 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:59:43.876982+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 117669888 unmapped: 16629760 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:59:44.877143+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 117669888 unmapped: 16629760 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:59:45.877327+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa7b1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1176425 data_alloc: 234881024 data_used: 12165120
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa7b1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 117669888 unmapped: 16629760 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:59:46.877543+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa7b1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 117669888 unmapped: 16629760 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:59:47.877783+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 117669888 unmapped: 16629760 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:59:48.877973+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:59:49.878133+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 117669888 unmapped: 16629760 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:59:50.878315+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 117669888 unmapped: 16629760 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1176425 data_alloc: 234881024 data_used: 12165120
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:59:51.878466+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 117669888 unmapped: 16629760 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:59:52.878613+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 117669888 unmapped: 16629760 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa7b1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:59:53.878791+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 117669888 unmapped: 16629760 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:59:54.878949+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 117669888 unmapped: 16629760 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:59:55.879087+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 117669888 unmapped: 16629760 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1176425 data_alloc: 234881024 data_used: 12165120
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:59:56.879284+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 117669888 unmapped: 16629760 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa7b1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:59:57.879494+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 117669888 unmapped: 16629760 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:59:58.879675+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 117669888 unmapped: 16629760 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa7b1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:59:59.879841+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634bf038400
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 117669888 unmapped: 16629760 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 33.783638000s of 33.895526886s, submitted: 33
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bf038400 session 0x5634c06b2780
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634c0689800
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634c0689800 session 0x5634bf7c4b40
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634c23a3400
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634c23a3400 session 0x5634bf7c41e0
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634bfa26c00
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bfa26c00 session 0x5634bf7c52c0
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634bfa26c00
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bfa26c00 session 0x5634c02e0b40
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:00:00.879960+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 116334592 unmapped: 17965056 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1220693 data_alloc: 234881024 data_used: 12165120
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa218000/0x0/0x4ffc00000, data 0x1390e8a/0x1454000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:00:01.880106+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 116334592 unmapped: 17965056 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa218000/0x0/0x4ffc00000, data 0x1390e8a/0x1454000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:00:02.880271+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 116334592 unmapped: 17965056 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:00:03.880453+0000)
Nov 24 10:15:34 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr metadata"} v 0)
Nov 24 10:15:34 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1486480020' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
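
Amid the OSD chatter, the co-located mon (rank 2, a peon, monmap e3) dispatches a mon_command from client.admin at 192.168.122.101, which matches someone running "ceph mgr metadata"; the audit channel records who issued what at dispatch time. A sketch that pulls the actor and command out of such an audit line:

    import json
    import re

    audit = ("from='client.? 192.168.122.101:0/1486480020' entity='client.admin' "
             "cmd=[{\"prefix\": \"mgr metadata\"}]: dispatch")
    src, entity, cmd_json, stage = re.search(
        r"from='([^']*)' entity='([^']*)' cmd=(\[.*?\]): (\w+)", audit).groups()
    prefix = json.loads(cmd_json)[0]["prefix"]
    print(f"{entity} ({src}) -> '{prefix}' [{stage}]")
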
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 116334592 unmapped: 17965056 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:00:04.880787+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 116334592 unmapped: 17965056 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634bf032800
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bf032800 session 0x5634c02e01e0
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634bf038400
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bf038400 session 0x5634bf7be960
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:00:05.880924+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 116350976 unmapped: 17948672 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1220693 data_alloc: 234881024 data_used: 12165120
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634c0689800
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634c0689800 session 0x5634bf7bfc20
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634c23a3400
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:00:06.881061+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634c23a3400 session 0x5634bf7bf4a0
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 116350976 unmapped: 17948672 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634c23a3400
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634bf032800
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:00:07.881228+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 116350976 unmapped: 17948672 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa218000/0x0/0x4ffc00000, data 0x1390e8a/0x1454000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:00:08.881437+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 116940800 unmapped: 17358848 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:00:09.881595+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 116940800 unmapped: 17358848 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:00:10.881762+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 116940800 unmapped: 17358848 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa218000/0x0/0x4ffc00000, data 0x1390e8a/0x1454000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1260866 data_alloc: 234881024 data_used: 18022400
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:00:11.881986+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 116940800 unmapped: 17358848 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634c23a3400 session 0x5634bf82eb40
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.973909378s of 12.041707993s, submitted: 15
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bf032800 session 0x5634bf82fc20
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634bf038400
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:00:12.882126+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bf038400 session 0x5634bf7c52c0
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 114720768 unmapped: 19578880 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa7b1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:00:13.882264+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 114720768 unmapped: 19578880 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:00:14.882432+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 114720768 unmapped: 19578880 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa7b1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa7b1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:00:15.882586+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 114720768 unmapped: 19578880 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa7b1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1181186 data_alloc: 234881024 data_used: 12165120
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:00:16.882760+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 114720768 unmapped: 19578880 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa7b1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:00:17.882892+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 114720768 unmapped: 19578880 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa7b1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:00:18.883015+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 114720768 unmapped: 19578880 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:00:19.883145+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 114720768 unmapped: 19578880 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa7b1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:00:20.883279+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 114720768 unmapped: 19578880 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1181186 data_alloc: 234881024 data_used: 12165120
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:00:21.883384+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 114720768 unmapped: 19578880 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:00:22.883510+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 114720768 unmapped: 19578880 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:00:23.883601+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 114720768 unmapped: 19578880 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:00:24.883712+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 114720768 unmapped: 19578880 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa7b1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:00:25.883815+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 114720768 unmapped: 19578880 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1181186 data_alloc: 234881024 data_used: 12165120
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:00:26.883946+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 114720768 unmapped: 19578880 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:00:27.884111+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 114720768 unmapped: 19578880 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:00:28.884344+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 114720768 unmapped: 19578880 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634bfa26c00
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bfa26c00 session 0x5634bf53a000
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634c0689800
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634c0689800 session 0x5634bcfcf860
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634bf032800
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bf032800 session 0x5634bf08a1e0
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634bf038400
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bf038400 session 0x5634c06b05a0
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634bfa26c00
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 17.199029922s of 17.423206329s, submitted: 27
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bfa26c00 session 0x5634bfcb32c0
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634c23a3400
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634c23a3400 session 0x5634bcfe14a0
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634c23a3800
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634c23a3800 session 0x5634c06e72c0
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634c23a3800
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634c23a3800 session 0x5634c06e6f00
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634bf032800
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bf032800 session 0x5634c029a5a0
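
Bursts like the one above, alternating handle_auth_request challenges with ms_handle_reset, read as short-lived authenticated connections being accepted and promptly torn down; the same con pointer (e.g. 0x5634bf032800) recurs with different session pointers, so the con address identifies a reused connection slot rather than a distinct peer. A tally sketch over lines in this format:

    import re
    from collections import Counter

    lines = [
        "osd.1 158 ms_handle_reset con 0x5634bfa26c00 session 0x5634bf53a000",
        "osd.1 158 ms_handle_reset con 0x5634bf032800 session 0x5634bf08a1e0",
        "osd.1 158 ms_handle_reset con 0x5634bfa26c00 session 0x5634bfcb32c0",
        "osd.1 158 ms_handle_reset con 0x5634bf032800 session 0x5634c029a5a0",
    ]
    resets = Counter(re.search(r"con (0x[0-9a-f]+)", l).group(1) for l in lines)
    for con, count in resets.most_common():
        print(f"{con}: {count} resets")
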
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:00:29.884484+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 114753536 unmapped: 23748608 heap: 138502144 old mem: 2845415833 new mem: 2845415833
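
By this point the allocator heap has grown from 134299648 to 138502144 bytes (about 4 MiB) with more of it unmapped; in every tune_memory line in this window the heap figure equals mapped + unmapped exactly, a handy invariant when eyeballing these numbers:

    # (mapped, unmapped, heap) from two tune_memory lines in this window.
    samples = [
        (125526016, 8773632, 134299648),
        (114753536, 23748608, 138502144),
    ]
    for mapped, unmapped, heap in samples:
        assert mapped + unmapped == heap
    print("heap == mapped + unmapped holds for all samples")
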
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:00:30.884558+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 114753536 unmapped: 23748608 heap: 138502144 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa1de000/0x0/0x4ffc00000, data 0x13c9e9a/0x148e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1226040 data_alloc: 234881024 data_used: 12165120
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:00:31.884635+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 114753536 unmapped: 23748608 heap: 138502144 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:00:32.884796+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 114753536 unmapped: 23748608 heap: 138502144 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634bf038400
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bf038400 session 0x5634bf18de00
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:00:33.885019+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634bfa26c00
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bfa26c00 session 0x5634bdc7c780
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 114769920 unmapped: 23732224 heap: 138502144 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634c23a3400
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634c23a3400 session 0x5634bf444000
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:00:34.885169+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634bf032800
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bf032800 session 0x5634c02e1860
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 114778112 unmapped: 23724032 heap: 138502144 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa1de000/0x0/0x4ffc00000, data 0x13c9e9a/0x148e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:00:35.885337+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 114778112 unmapped: 23724032 heap: 138502144 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634bf038400
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634bfa26c00
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1227854 data_alloc: 234881024 data_used: 12165120
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:00:36.885558+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 114786304 unmapped: 23715840 heap: 138502144 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa1dd000/0x0/0x4ffc00000, data 0x13c9eaa/0x148f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:00:37.885737+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 116465664 unmapped: 22036480 heap: 138502144 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:00:38.886015+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 116465664 unmapped: 22036480 heap: 138502144 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:00:39.886231+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 116465664 unmapped: 22036480 heap: 138502144 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bf038400 session 0x5634bf82f0e0
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bfa26c00 session 0x5634bf53bc20
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634bf726800
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.103581429s of 11.147413254s, submitted: 8
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bf726800 session 0x5634bf7c52c0
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:00:40.886452+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 113246208 unmapped: 25255936 heap: 138502144 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa7b1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1186069 data_alloc: 234881024 data_used: 12165120
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:00:41.886591+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 113246208 unmapped: 25255936 heap: 138502144 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:00:42.886726+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 113246208 unmapped: 25255936 heap: 138502144 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:00:43.886879+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 113246208 unmapped: 25255936 heap: 138502144 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:00:44.887006+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 113246208 unmapped: 25255936 heap: 138502144 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa7b1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:00:45.887178+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 113246208 unmapped: 25255936 heap: 138502144 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1186069 data_alloc: 234881024 data_used: 12165120
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:00:46.887324+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 113246208 unmapped: 25255936 heap: 138502144 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa7b1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:00:47.887506+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 113246208 unmapped: 25255936 heap: 138502144 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:00:48.887739+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 113246208 unmapped: 25255936 heap: 138502144 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:00:49.887865+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 113246208 unmapped: 25255936 heap: 138502144 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:00:50.887993+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 113246208 unmapped: 25255936 heap: 138502144 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1186069 data_alloc: 234881024 data_used: 12165120
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:00:51.888178+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 113246208 unmapped: 25255936 heap: 138502144 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:00:52.888305+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 113246208 unmapped: 25255936 heap: 138502144 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa7b1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:00:53.888467+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 113246208 unmapped: 25255936 heap: 138502144 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:00:54.888611+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 113246208 unmapped: 25255936 heap: 138502144 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:00:55.888745+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 113246208 unmapped: 25255936 heap: 138502144 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1186069 data_alloc: 234881024 data_used: 12165120
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:00:56.888894+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 113246208 unmapped: 25255936 heap: 138502144 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:00:57.889044+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 113246208 unmapped: 25255936 heap: 138502144 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa7b1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:00:58.889223+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 113246208 unmapped: 25255936 heap: 138502144 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:00:59.889364+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 113246208 unmapped: 25255936 heap: 138502144 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa7b1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:01:00.889502+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 113246208 unmapped: 25255936 heap: 138502144 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1186069 data_alloc: 234881024 data_used: 12165120
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:01:01.889642+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 113246208 unmapped: 25255936 heap: 138502144 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:01:02.889797+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 113246208 unmapped: 25255936 heap: 138502144 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa7b1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:01:03.890017+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 113246208 unmapped: 25255936 heap: 138502144 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:01:04.890170+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 113246208 unmapped: 25255936 heap: 138502144 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:01:05.890312+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 113246208 unmapped: 25255936 heap: 138502144 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:01:06.890493+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1186069 data_alloc: 234881024 data_used: 12165120
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 113246208 unmapped: 25255936 heap: 138502144 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634c025e400
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634c025e400 session 0x5634bf7bf4a0
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634c025e400
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634c025e400 session 0x5634bfa2a1e0
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634bf032800
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bf032800 session 0x5634bfa2bc20
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:01:07.890597+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634bf038400
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bf038400 session 0x5634bfa2a3c0
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634bf726800
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 113246208 unmapped: 25255936 heap: 138502144 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 27.712280273s of 27.752235413s, submitted: 13
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:01:08.890788+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 113459200 unmapped: 29245440 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa296000/0x0/0x4ffc00000, data 0x1311e9a/0x13d6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bf726800 session 0x5634bfa2a000
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634bfa26c00
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bfa26c00 session 0x5634bf2252c0
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634bf032800
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bf032800 session 0x5634bf224960
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634bf038400
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bf038400 session 0x5634bf08a1e0
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634bf726800
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bf726800 session 0x5634bf7bef00
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:01:09.890921+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 113139712 unmapped: 29564928 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:01:10.891065+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 113139712 unmapped: 29564928 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:01:11.891190+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1228500 data_alloc: 234881024 data_used: 12165120
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 113139712 unmapped: 29564928 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa296000/0x0/0x4ffc00000, data 0x1311e9a/0x13d6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:01:12.891311+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 113139712 unmapped: 29564928 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:01:13.891436+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 113139712 unmapped: 29564928 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:01:14.891588+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 113139712 unmapped: 29564928 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa296000/0x0/0x4ffc00000, data 0x1311e9a/0x13d6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:01:15.891740+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 113139712 unmapped: 29564928 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:01:16.891876+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1228500 data_alloc: 234881024 data_used: 12165120
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 113139712 unmapped: 29564928 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:01:17.892061+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 113139712 unmapped: 29564928 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634c025e000
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:01:18.892287+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.442607880s of 10.648617744s, submitted: 19
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634c025e000 session 0x5634bdc7c780
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa296000/0x0/0x4ffc00000, data 0x1311e9a/0x13d6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 113442816 unmapped: 29261824 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634bfc0a800
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634bfc0b800
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:01:19.892577+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 113500160 unmapped: 29204480 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:01:20.892783+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 114450432 unmapped: 28254208 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:01:21.893213+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1259524 data_alloc: 234881024 data_used: 16396288
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 114810880 unmapped: 27893760 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa272000/0x0/0x4ffc00000, data 0x1335e9a/0x13fa000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:01:22.893475+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 114810880 unmapped: 27893760 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:01:23.893851+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 114810880 unmapped: 27893760 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:01:24.894131+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 114810880 unmapped: 27893760 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa272000/0x0/0x4ffc00000, data 0x1335e9a/0x13fa000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:01:25.894429+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 114810880 unmapped: 27893760 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:01:26.894601+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1259524 data_alloc: 234881024 data_used: 16396288
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 114810880 unmapped: 27893760 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:01:27.894925+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 114810880 unmapped: 27893760 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa272000/0x0/0x4ffc00000, data 0x1335e9a/0x13fa000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:01:28.895244+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 114810880 unmapped: 27893760 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:01:29.895505+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa272000/0x0/0x4ffc00000, data 0x1335e9a/0x13fa000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 114819072 unmapped: 27885568 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:01:30.895777+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 114819072 unmapped: 27885568 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:01:31.895924+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.894562721s of 12.903012276s, submitted: 2
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1261912 data_alloc: 234881024 data_used: 16449536
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 115113984 unmapped: 27590656 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:01:32.896144+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 115122176 unmapped: 27582464 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:01:33.896358+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 116736000 unmapped: 25968640 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:01:34.896583+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 116736000 unmapped: 25968640 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:01:35.896750+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa218000/0x0/0x4ffc00000, data 0x1380e9a/0x1445000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 116736000 unmapped: 25968640 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa218000/0x0/0x4ffc00000, data 0x1380e9a/0x1445000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:01:36.896886+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1274846 data_alloc: 234881024 data_used: 16560128
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 116736000 unmapped: 25968640 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:01:37.897009+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 116736000 unmapped: 25968640 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:01:38.897187+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 116736000 unmapped: 25968640 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:01:39.897470+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa218000/0x0/0x4ffc00000, data 0x1380e9a/0x1445000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 116736000 unmapped: 25968640 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:01:40.897910+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 116736000 unmapped: 25968640 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:01:41.898163+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1274846 data_alloc: 234881024 data_used: 16560128
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 116736000 unmapped: 25968640 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:01:42.898432+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 116736000 unmapped: 25968640 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:01:43.898630+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 116736000 unmapped: 25968640 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:01:44.898934+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa218000/0x0/0x4ffc00000, data 0x1380e9a/0x1445000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 116736000 unmapped: 25968640 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:01:45.899121+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 116736000 unmapped: 25968640 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 14.734338760s of 14.816822052s, submitted: 27
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bfc0a800 session 0x5634c02e01e0
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bfc0b800 session 0x5634bcfe1a40
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:01:46.899457+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634bfc0a800
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1270054 data_alloc: 234881024 data_used: 16560128
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bfc0a800 session 0x5634bf08be00
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 114679808 unmapped: 28024832 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:01:47.899649+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 114679808 unmapped: 28024832 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:01:48.899864+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa7b1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 114679808 unmapped: 28024832 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:01:49.900009+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 114679808 unmapped: 28024832 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:01:50.900228+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 114679808 unmapped: 28024832 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:01:51.900455+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1193664 data_alloc: 234881024 data_used: 12165120
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 114679808 unmapped: 28024832 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:01:52.900674+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 114679808 unmapped: 28024832 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:01:53.900861+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa7b1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 114679808 unmapped: 28024832 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:01:54.901032+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 114679808 unmapped: 28024832 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:01:55.901161+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa7b1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 114679808 unmapped: 28024832 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:01:56.901320+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1193664 data_alloc: 234881024 data_used: 12165120
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 114679808 unmapped: 28024832 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:01:57.901442+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 114679808 unmapped: 28024832 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:01:58.901614+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 114679808 unmapped: 28024832 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:01:59.902015+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 114679808 unmapped: 28024832 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:02:00.902197+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 114679808 unmapped: 28024832 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa7b1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:02:01.902360+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1193664 data_alloc: 234881024 data_used: 12165120
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 114679808 unmapped: 28024832 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:02:02.902520+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 114679808 unmapped: 28024832 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:02:03.902636+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 114679808 unmapped: 28024832 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:02:04.902800+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 114679808 unmapped: 28024832 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:02:05.902913+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 114679808 unmapped: 28024832 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:02:06.903078+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa7b1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1193664 data_alloc: 234881024 data_used: 12165120
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 114679808 unmapped: 28024832 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:02:07.903280+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634bf032800
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 21.250144958s of 21.353006363s, submitted: 31
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bf032800 session 0x5634bcfe2b40
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634bf038400
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bf038400 session 0x5634bf1da5a0
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634bf726800
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bf726800 session 0x5634bfcb30e0
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634bf032800
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bf032800 session 0x5634bf82eb40
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634bf038400
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bf038400 session 0x5634c029af00
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 115089408 unmapped: 27615232 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:02:08.903484+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 115089408 unmapped: 27615232 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa2aa000/0x0/0x4ffc00000, data 0x12fee8a/0x13c2000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:02:09.903661+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 115089408 unmapped: 27615232 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:02:10.903894+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 115089408 unmapped: 27615232 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:02:11.904020+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1237405 data_alloc: 234881024 data_used: 12165120
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 115089408 unmapped: 27615232 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:02:12.904267+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 115089408 unmapped: 27615232 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:02:13.904491+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 115089408 unmapped: 27615232 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634bfc0a800
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:02:14.904658+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bfc0a800 session 0x5634bfa2ab40
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa2aa000/0x0/0x4ffc00000, data 0x12fee8a/0x13c2000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634bfc0b800
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634c025e000
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 115171328 unmapped: 27533312 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:02:15.904831+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa286000/0x0/0x4ffc00000, data 0x1322e8a/0x13e6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 115040256 unmapped: 27664384 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:02:16.905021+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1269042 data_alloc: 234881024 data_used: 16138240
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 115777536 unmapped: 26927104 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:02:17.905216+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 115777536 unmapped: 26927104 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:02:18.905512+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa286000/0x0/0x4ffc00000, data 0x1322e8a/0x13e6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 115777536 unmapped: 26927104 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa286000/0x0/0x4ffc00000, data 0x1322e8a/0x13e6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:02:19.905686+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 115777536 unmapped: 26927104 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:02:20.905842+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 115785728 unmapped: 26918912 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:02:21.906026+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1269042 data_alloc: 234881024 data_used: 16138240
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 115785728 unmapped: 26918912 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:02:22.906167+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 115785728 unmapped: 26918912 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634c08ea000
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634c08ea000 session 0x5634bf445a40
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:02:23.906294+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634c08ea400
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634c08ea400 session 0x5634bf7bfe00
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634bf032800
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bf032800 session 0x5634bd4614a0
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634bf038400
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bf038400 session 0x5634bf029e00
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634bfc0a800
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 15.969479561s of 16.080394745s, submitted: 32
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bfc0a800 session 0x5634bfe225a0
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634c08ea000
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634c08ea000 session 0x5634c06cd680
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa286000/0x0/0x4ffc00000, data 0x1322e8a/0x13e6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634c08ea800
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634c08ea800 session 0x5634bf08a960
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634bf032800
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bf032800 session 0x5634c02e14a0
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634bf038400
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bf038400 session 0x5634c06e65a0
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 115941376 unmapped: 26763264 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:02:24.906435+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 115941376 unmapped: 26763264 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:02:25.906604+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 115941376 unmapped: 26763264 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:02:26.906809+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1303056 data_alloc: 234881024 data_used: 16138240
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 115941376 unmapped: 26763264 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:02:27.907035+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121266176 unmapped: 21438464 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:02:28.907899+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f9af9000/0x0/0x4ffc00000, data 0x1aa6e9a/0x1b6b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 120143872 unmapped: 22560768 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:02:29.908084+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 120143872 unmapped: 22560768 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:02:30.908289+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 120143872 unmapped: 22560768 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634bfc0a800
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bfc0a800 session 0x5634bfcb25a0
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:02:31.908478+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1336043 data_alloc: 234881024 data_used: 17305600
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 120160256 unmapped: 22544384 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:02:32.908642+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634c08ea000
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634c08eac00
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 120168448 unmapped: 22536192 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:02:33.908782+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f9af7000/0x0/0x4ffc00000, data 0x1ab0e9a/0x1b75000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121495552 unmapped: 21209088 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:02:34.908911+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.558697701s of 10.774977684s, submitted: 67
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121954304 unmapped: 20750336 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:02:35.909039+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121954304 unmapped: 20750336 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:02:36.909229+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1365375 data_alloc: 234881024 data_used: 21561344
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121954304 unmapped: 20750336 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f9af7000/0x0/0x4ffc00000, data 0x1ab0e9a/0x1b75000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:02:37.909384+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121954304 unmapped: 20750336 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:02:38.909603+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121954304 unmapped: 20750336 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:02:39.909735+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121954304 unmapped: 20750336 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:02:40.909876+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121954304 unmapped: 20750336 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:02:41.910013+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1365375 data_alloc: 234881024 data_used: 21561344
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121954304 unmapped: 20750336 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:02:42.910151+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121954304 unmapped: 20750336 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f9af7000/0x0/0x4ffc00000, data 0x1ab0e9a/0x1b75000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:02:43.910296+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121954304 unmapped: 20750336 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:02:44.910490+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.470090866s of 10.473713875s, submitted: 1
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122912768 unmapped: 19791872 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:02:45.910636+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 123043840 unmapped: 19660800 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:02:46.910801+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1374765 data_alloc: 234881024 data_used: 21581824
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 124116992 unmapped: 18587648 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:02:47.911461+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 124116992 unmapped: 18587648 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:02:48.911648+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 124116992 unmapped: 18587648 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f9a0f000/0x0/0x4ffc00000, data 0x1b90e9a/0x1c55000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:02:49.911780+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 124149760 unmapped: 18554880 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:02:50.911948+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 124149760 unmapped: 18554880 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:02:51.912097+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f9a0f000/0x0/0x4ffc00000, data 0x1b90e9a/0x1c55000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1382269 data_alloc: 234881024 data_used: 21577728
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 124149760 unmapped: 18554880 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:02:52.912234+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f9a0f000/0x0/0x4ffc00000, data 0x1b90e9a/0x1c55000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 124149760 unmapped: 18554880 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:02:53.912384+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 124149760 unmapped: 18554880 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:02:54.912557+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 124149760 unmapped: 18554880 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:02:55.912682+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f9a0f000/0x0/0x4ffc00000, data 0x1b90e9a/0x1c55000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 124182528 unmapped: 18522112 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:02:56.912805+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634c08ea000 session 0x5634bcbf9e00
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.591684341s of 11.705703735s, submitted: 37
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634c08eac00 session 0x5634c06b03c0
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1377601 data_alloc: 234881024 data_used: 21577728
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634c08eac00
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122281984 unmapped: 20422656 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634c08eac00 session 0x5634bf7be1e0
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:02:57.912927+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122314752 unmapped: 20389888 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:02:58.914917+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f9f1c000/0x0/0x4ffc00000, data 0x168ce8a/0x1750000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122314752 unmapped: 20389888 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:02:59.917494+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122314752 unmapped: 20389888 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:03:00.917705+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bfc0b800 session 0x5634bf4450e0
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634c025e000 session 0x5634c06e72c0
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634bf032800
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 119160832 unmapped: 23543808 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bf032800 session 0x5634bf2245a0
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:03:01.919595+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1207973 data_alloc: 234881024 data_used: 12165120
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 119144448 unmapped: 23560192 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:03:02.920917+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 119144448 unmapped: 23560192 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:03:03.922353+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 119144448 unmapped: 23560192 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:03:04.923563+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 119144448 unmapped: 23560192 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:03:05.924390+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 119144448 unmapped: 23560192 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:03:06.924840+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1207973 data_alloc: 234881024 data_used: 12165120
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 119144448 unmapped: 23560192 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:03:07.925048+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 119144448 unmapped: 23560192 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:03:08.925877+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 119144448 unmapped: 23560192 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:03:09.926483+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 119144448 unmapped: 23560192 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:03:10.926699+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 119144448 unmapped: 23560192 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:03:11.926892+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1207973 data_alloc: 234881024 data_used: 12165120
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 119144448 unmapped: 23560192 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:03:12.927235+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 119144448 unmapped: 23560192 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:03:13.927499+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 119144448 unmapped: 23560192 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:03:14.927768+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 119144448 unmapped: 23560192 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:03:15.927943+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 119144448 unmapped: 23560192 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:03:16.928165+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1207973 data_alloc: 234881024 data_used: 12165120
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 119144448 unmapped: 23560192 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:03:17.928353+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 119144448 unmapped: 23560192 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:03:18.928827+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 119144448 unmapped: 23560192 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:03:19.929125+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 119144448 unmapped: 23560192 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:03:20.929561+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 119144448 unmapped: 23560192 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:03:21.929858+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634bf038400
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bf038400 session 0x5634bcfce780
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634bf032800
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bf032800 session 0x5634bd463e00
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634bfc0b800
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bfc0b800 session 0x5634bd4632c0
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1207973 data_alloc: 234881024 data_used: 12165120
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634c025e000
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634c025e000 session 0x5634bf443c20
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634c08eac00
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 25.324642181s of 25.657859802s, submitted: 76
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [0,0,0,0,0,2])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 124928000 unmapped: 17776640 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:03:22.930030+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634c08eac00 session 0x5634be1ada40
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634bfc0a800
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bfc0a800 session 0x5634bf7c10e0
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634bfc0a800
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bfc0a800 session 0x5634bf53a3c0
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634bf032800
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bf032800 session 0x5634bf53b4a0
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634bfc0b800
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bfc0b800 session 0x5634bd031680
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 118726656 unmapped: 23977984 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:03:23.930289+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 118726656 unmapped: 23977984 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:03:24.930546+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 118726656 unmapped: 23977984 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:03:25.930753+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 118726656 unmapped: 23977984 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:03:26.930951+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1285564 data_alloc: 234881024 data_used: 12169216
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 118726656 unmapped: 23977984 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:03:27.931219+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f9a11000/0x0/0x4ffc00000, data 0x1786e9a/0x184b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:03:28.931377+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 118726656 unmapped: 23977984 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634c025e000
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634c025e000 session 0x5634bd20b680
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634c08eac00
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634c08eac00 session 0x5634bd20bc20
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:03:29.931629+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 118726656 unmapped: 23977984 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f9a11000/0x0/0x4ffc00000, data 0x1786e9a/0x184b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:03:30.931898+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 118726656 unmapped: 23977984 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634c08eac00
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634c08eac00 session 0x5634bd20b860
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634bf032800
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bf032800 session 0x5634c029b2c0
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:03:31.932035+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 118710272 unmapped: 23994368 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634bfc0a800
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634bfc0b800
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1287325 data_alloc: 234881024 data_used: 12169216
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:03:32.932230+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 118726656 unmapped: 23977984 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:03:33.932369+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122437632 unmapped: 20267008 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:03:34.932541+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122437632 unmapped: 20267008 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:03:35.932743+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122437632 unmapped: 20267008 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f9a11000/0x0/0x4ffc00000, data 0x1786e9a/0x184b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:03:36.932973+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122437632 unmapped: 20267008 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1353141 data_alloc: 234881024 data_used: 21815296
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:03:37.933145+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122437632 unmapped: 20267008 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:03:38.933447+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122437632 unmapped: 20267008 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f9a11000/0x0/0x4ffc00000, data 0x1786e9a/0x184b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:03:39.933692+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122437632 unmapped: 20267008 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f9a11000/0x0/0x4ffc00000, data 0x1786e9a/0x184b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:03:40.933956+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122437632 unmapped: 20267008 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:03:41.934357+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122437632 unmapped: 20267008 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1353141 data_alloc: 234881024 data_used: 21815296
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:03:42.934696+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122437632 unmapped: 20267008 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 20.633270264s of 20.794521332s, submitted: 35
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:03:43.934851+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 129015808 unmapped: 13688832 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:03:44.934949+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 128851968 unmapped: 13852672 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f8f68000/0x0/0x4ffc00000, data 0x2227e9a/0x22ec000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:03:45.935092+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 128991232 unmapped: 13713408 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:03:46.935346+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 128991232 unmapped: 13713408 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f8ebf000/0x0/0x4ffc00000, data 0x22d8e9a/0x239d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1457277 data_alloc: 234881024 data_used: 22888448
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:03:47.935487+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 129024000 unmapped: 13680640 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:03:48.935671+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 129024000 unmapped: 13680640 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:03:49.935780+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 129032192 unmapped: 13672448 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:03:50.935998+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 129105920 unmapped: 13598720 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f8e9b000/0x0/0x4ffc00000, data 0x22fce9a/0x23c1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:03:51.936167+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 129105920 unmapped: 13598720 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1454133 data_alloc: 234881024 data_used: 22888448
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:03:52.936298+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 129105920 unmapped: 13598720 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:03:53.936464+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 129114112 unmapped: 13590528 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:03:54.936593+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 129114112 unmapped: 13590528 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.613185883s of 11.909707069s, submitted: 129
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f8e9b000/0x0/0x4ffc00000, data 0x22fce9a/0x23c1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:03:55.936751+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 129187840 unmapped: 13516800 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:03:56.936868+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 129187840 unmapped: 13516800 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f8e94000/0x0/0x4ffc00000, data 0x2303e9a/0x23c8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1453885 data_alloc: 234881024 data_used: 22888448
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:03:57.936981+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 129187840 unmapped: 13516800 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:03:58.937215+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 129187840 unmapped: 13516800 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f8e94000/0x0/0x4ffc00000, data 0x2303e9a/0x23c8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:03:59.937473+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 129187840 unmapped: 13516800 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bfc0a800 session 0x5634bf82f0e0
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bfc0b800 session 0x5634bfcb3860
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:04:00.965445+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 129187840 unmapped: 13516800 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634c025e000
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634c025e000 session 0x5634bd20b2c0
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:04:01.966190+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122085376 unmapped: 20619264 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225505 data_alloc: 234881024 data_used: 12165120
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:04:02.966380+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122085376 unmapped: 20619264 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:04:03.966521+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122085376 unmapped: 20619264 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:04:04.966659+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122085376 unmapped: 20619264 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:04:05.966832+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122085376 unmapped: 20619264 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:04:06.966967+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122085376 unmapped: 20619264 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225505 data_alloc: 234881024 data_used: 12165120
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:04:07.967254+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122085376 unmapped: 20619264 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:04:08.967384+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122085376 unmapped: 20619264 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:04:09.967547+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122085376 unmapped: 20619264 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:04:10.967696+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122085376 unmapped: 20619264 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:04:11.967815+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122085376 unmapped: 20619264 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225505 data_alloc: 234881024 data_used: 12165120
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:04:12.967953+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122085376 unmapped: 20619264 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:04:13.968097+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122085376 unmapped: 20619264 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:04:14.968270+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122085376 unmapped: 20619264 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:04:15.969081+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122085376 unmapped: 20619264 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:04:16.969265+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122085376 unmapped: 20619264 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225505 data_alloc: 234881024 data_used: 12165120
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:04:17.969415+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122085376 unmapped: 20619264 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:04:18.969569+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122085376 unmapped: 20619264 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:04:19.969708+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122085376 unmapped: 20619264 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:04:20.969843+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122085376 unmapped: 20619264 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:04:21.970017+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122085376 unmapped: 20619264 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225505 data_alloc: 234881024 data_used: 12165120
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:04:22.970145+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122085376 unmapped: 20619264 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:04:23.970327+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122085376 unmapped: 20619264 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:04:24.970527+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122085376 unmapped: 20619264 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:04:25.971157+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122085376 unmapped: 20619264 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:04:26.971336+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122085376 unmapped: 20619264 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225505 data_alloc: 234881024 data_used: 12165120
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:04:27.971464+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122085376 unmapped: 20619264 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:04:28.971632+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122085376 unmapped: 20619264 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:04:29.971773+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122093568 unmapped: 20611072 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:04:30.971994+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122093568 unmapped: 20611072 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:04:31.972205+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122093568 unmapped: 20611072 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225505 data_alloc: 234881024 data_used: 12165120
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:04:32.972379+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122093568 unmapped: 20611072 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:04:33.972575+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122093568 unmapped: 20611072 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:04:34.972732+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122093568 unmapped: 20611072 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:04:35.973006+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122093568 unmapped: 20611072 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:04:36.973172+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122093568 unmapped: 20611072 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225505 data_alloc: 234881024 data_used: 12165120
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:04:37.973301+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122101760 unmapped: 20602880 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:04:38.974007+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122101760 unmapped: 20602880 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:04:39.974156+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122101760 unmapped: 20602880 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:04:40.974385+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122101760 unmapped: 20602880 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:04:41.974583+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122109952 unmapped: 20594688 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225505 data_alloc: 234881024 data_used: 12165120
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:04:42.974693+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122109952 unmapped: 20594688 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:04:43.974803+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122109952 unmapped: 20594688 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:04:44.974925+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122109952 unmapped: 20594688 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:04:45.975089+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122109952 unmapped: 20594688 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:04:46.975332+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122109952 unmapped: 20594688 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225505 data_alloc: 234881024 data_used: 12165120
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:04:47.975459+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122109952 unmapped: 20594688 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:04:48.975627+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122109952 unmapped: 20594688 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:04:49.975823+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122109952 unmapped: 20594688 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:04:50.975980+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122118144 unmapped: 20586496 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:04:51.976127+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122118144 unmapped: 20586496 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225505 data_alloc: 234881024 data_used: 12165120
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:04:52.976252+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122118144 unmapped: 20586496 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:04:53.976378+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122118144 unmapped: 20586496 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:04:54.976504+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122118144 unmapped: 20586496 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:04:55.976667+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122118144 unmapped: 20586496 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:04:56.976863+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122118144 unmapped: 20586496 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225505 data_alloc: 234881024 data_used: 12165120
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:04:57.977009+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122118144 unmapped: 20586496 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:04:58.977160+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122118144 unmapped: 20586496 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:04:59.977297+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122118144 unmapped: 20586496 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:05:00.977481+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122118144 unmapped: 20586496 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:05:01.977627+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122134528 unmapped: 20570112 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225505 data_alloc: 234881024 data_used: 12165120
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:05:02.977737+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122134528 unmapped: 20570112 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:05:03.977844+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122134528 unmapped: 20570112 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:05:04.978047+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122134528 unmapped: 20570112 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:05:05.978217+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122134528 unmapped: 20570112 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:05:06.978334+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122134528 unmapped: 20570112 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225505 data_alloc: 234881024 data_used: 12165120
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:05:07.978477+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122134528 unmapped: 20570112 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:05:08.979464+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122134528 unmapped: 20570112 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:05:09.979584+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122134528 unmapped: 20570112 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:05:10.980043+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122134528 unmapped: 20570112 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:05:11.980190+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122134528 unmapped: 20570112 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225505 data_alloc: 234881024 data_used: 12165120
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:05:12.980302+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122134528 unmapped: 20570112 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:05:13.980578+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122142720 unmapped: 20561920 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:05:14.980962+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122142720 unmapped: 20561920 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:05:15.981216+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122142720 unmapped: 20561920 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:05:16.981363+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122142720 unmapped: 20561920 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225505 data_alloc: 234881024 data_used: 12165120
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:05:17.981500+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122142720 unmapped: 20561920 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:05:18.981788+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122142720 unmapped: 20561920 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:05:19.981909+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122142720 unmapped: 20561920 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:05:20.982027+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122142720 unmapped: 20561920 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:05:21.982139+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122150912 unmapped: 20553728 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225505 data_alloc: 234881024 data_used: 12165120
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:05:22.982264+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122134528 unmapped: 20570112 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: do_command 'config diff' '{prefix=config diff}'
Nov 24 10:15:34 compute-1 ceph-osd[77497]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:05:23.982367+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: do_command 'config show' '{prefix=config show}'
Nov 24 10:15:34 compute-1 ceph-osd[77497]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Nov 24 10:15:34 compute-1 ceph-osd[77497]: do_command 'counter dump' '{prefix=counter dump}'
Nov 24 10:15:34 compute-1 ceph-osd[77497]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Nov 24 10:15:34 compute-1 ceph-osd[77497]: do_command 'counter schema' '{prefix=counter schema}'
Nov 24 10:15:34 compute-1 ceph-osd[77497]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122167296 unmapped: 20537344 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:05:24.982827+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121839616 unmapped: 20865024 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:05:25.982990+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: do_command 'log dump' '{prefix=log dump}'
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 132923392 unmapped: 20824064 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: do_command 'log dump' '{prefix=log dump}' result is 0 bytes
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:05:26.983120+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: do_command 'perf dump' '{prefix=perf dump}'
Nov 24 10:15:34 compute-1 ceph-osd[77497]: do_command 'perf dump' '{prefix=perf dump}' result is 0 bytes
Nov 24 10:15:34 compute-1 ceph-osd[77497]: do_command 'perf histogram dump' '{prefix=perf histogram dump}'
Nov 24 10:15:34 compute-1 ceph-osd[77497]: do_command 'perf histogram dump' '{prefix=perf histogram dump}' result is 0 bytes
Nov 24 10:15:34 compute-1 ceph-osd[77497]: do_command 'perf schema' '{prefix=perf schema}'
Nov 24 10:15:34 compute-1 ceph-osd[77497]: do_command 'perf schema' '{prefix=perf schema}' result is 0 bytes
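The do_command pairs are admin-socket requests hitting the OSD, each logged once on entry and once on completion with a result size. The same endpoints can be exercised from the node with the stock ceph CLI; a minimal sketch, assuming the daemon name osd.1 seen in this log and the default asok location:

    import json
    import subprocess

    def osd_asok(*cmd, daemon="osd.1"):
        """Run an admin-socket command through `ceph daemon` and parse the JSON reply."""
        out = subprocess.run(["ceph", "daemon", daemon, *cmd],
                             check=True, capture_output=True, text=True).stdout
        return json.loads(out)

    perf = osd_asok("perf", "dump")      # same endpoint as the 'perf dump' line above
    cfg = osd_asok("config", "show")
    print(len(perf), "perf sections,", len(cfg), "config options")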
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121241600 unmapped: 32505856 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:05:27.983254+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225505 data_alloc: 234881024 data_used: 12165120
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121282560 unmapped: 32464896 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:05:28.983455+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121282560 unmapped: 32464896 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:05:29.983599+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121290752 unmapped: 32456704 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:05:30.983757+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121290752 unmapped: 32456704 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:05:31.983901+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121290752 unmapped: 32456704 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:05:32.984068+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225505 data_alloc: 234881024 data_used: 12165120
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121290752 unmapped: 32456704 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:05:33.984231+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121290752 unmapped: 32456704 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:05:34.984391+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121290752 unmapped: 32456704 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:05:35.984557+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121290752 unmapped: 32456704 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:05:36.984688+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121290752 unmapped: 32456704 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:05:37.984821+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225505 data_alloc: 234881024 data_used: 12165120
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121290752 unmapped: 32456704 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:05:38.985005+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121290752 unmapped: 32456704 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:05:39.985123+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121290752 unmapped: 32456704 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:05:40.985240+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121298944 unmapped: 32448512 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:05:41.985372+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121298944 unmapped: 32448512 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:05:42.985455+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225505 data_alloc: 234881024 data_used: 12165120
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121298944 unmapped: 32448512 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:05:43.985582+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121298944 unmapped: 32448512 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:05:44.985696+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121298944 unmapped: 32448512 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:05:45.985846+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121298944 unmapped: 32448512 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:05:46.985992+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121298944 unmapped: 32448512 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:05:47.986462+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225505 data_alloc: 218103808 data_used: 12165120
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121298944 unmapped: 32448512 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:05:48.986615+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121298944 unmapped: 32448512 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:05:49.986745+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121298944 unmapped: 32448512 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:05:50.986881+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121298944 unmapped: 32448512 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:05:51.987053+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121307136 unmapped: 32440320 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:05:52.987192+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225505 data_alloc: 218103808 data_used: 12165120
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121307136 unmapped: 32440320 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:05:53.987343+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121307136 unmapped: 32440320 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:05:54.987506+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121307136 unmapped: 32440320 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:05:55.987647+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121315328 unmapped: 32432128 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:05:56.987836+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121315328 unmapped: 32432128 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:05:57.987952+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225505 data_alloc: 218103808 data_used: 12165120
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121315328 unmapped: 32432128 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:05:58.988087+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121315328 unmapped: 32432128 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:05:59.988205+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121315328 unmapped: 32432128 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:06:00.988366+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121315328 unmapped: 32432128 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:06:01.988506+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121315328 unmapped: 32432128 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:06:02.988643+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225505 data_alloc: 218103808 data_used: 12165120
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 2400.1 total, 600.0 interval
                                           Cumulative writes: 10K writes, 38K keys, 10K commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.01 MB/s
                                           Cumulative WAL: 10K writes, 2914 syncs, 3.59 writes per sync, written: 0.03 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 2050 writes, 6533 keys, 2050 commit groups, 1.0 writes per commit group, ingest: 7.27 MB, 0.01 MB/s
                                           Interval WAL: 2050 writes, 892 syncs, 2.30 writes per sync, written: 0.01 GB, 0.01 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
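The figures in that stats dump are internally consistent and easy to reproduce: over the 600-second interval, 2050 WAL writes across 892 syncs gives the reported 2.30 writes per sync, and 7.27 MB ingested works out to the rounded 0.01 MB/s. A quick sanity check:

    interval_secs = 600.0
    wal_writes = 2050
    wal_syncs = 892
    ingest_mb = 7.27

    print(f"writes per sync: {wal_writes / wal_syncs:.2f}")          # 2.30, as reported
    print(f"write rate     : {wal_writes / interval_secs:.2f}/s")    # ~3.42/s
    print(f"ingest rate    : {ingest_mb / interval_secs:.4f} MB/s")  # 0.0121 -> "0.01"

At roughly three small writes per second with zero stall time, this OSD is essentially idle, which agrees with the empty op histograms in the heartbeat lines.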
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121323520 unmapped: 32423936 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:06:03.990341+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121323520 unmapped: 32423936 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:06:04.990476+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121323520 unmapped: 32423936 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:06:05.990607+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121323520 unmapped: 32423936 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:06:06.990705+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121323520 unmapped: 32423936 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:06:07.990855+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225505 data_alloc: 218103808 data_used: 12165120
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121323520 unmapped: 32423936 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:06:08.991060+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121331712 unmapped: 32415744 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:06:09.991264+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121331712 unmapped: 32415744 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:06:10.991485+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121331712 unmapped: 32415744 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:06:11.991635+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121331712 unmapped: 32415744 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:06:12.991808+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225505 data_alloc: 218103808 data_used: 12165120
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121331712 unmapped: 32415744 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:06:13.991965+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121331712 unmapped: 32415744 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:06:14.992090+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121331712 unmapped: 32415744 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:06:15.992236+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121331712 unmapped: 32415744 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:06:16.992451+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121331712 unmapped: 32415744 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:06:17.992640+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225505 data_alloc: 218103808 data_used: 12165120
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121331712 unmapped: 32415744 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:06:18.992805+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121331712 unmapped: 32415744 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:06:19.992949+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121331712 unmapped: 32415744 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:06:20.993225+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121331712 unmapped: 32415744 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:06:21.993393+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121339904 unmapped: 32407552 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:06:22.993594+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225505 data_alloc: 218103808 data_used: 12165120
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121339904 unmapped: 32407552 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:06:23.993733+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121339904 unmapped: 32407552 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:06:24.993882+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121339904 unmapped: 32407552 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:06:25.994692+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121339904 unmapped: 32407552 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:06:26.994847+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121339904 unmapped: 32407552 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:06:27.994982+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225505 data_alloc: 218103808 data_used: 12165120
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121339904 unmapped: 32407552 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:06:28.995132+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121339904 unmapped: 32407552 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:06:29.995306+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121339904 unmapped: 32407552 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:06:30.995475+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121339904 unmapped: 32407552 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:06:31.995635+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121339904 unmapped: 32407552 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:06:32.995770+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225505 data_alloc: 218103808 data_used: 12165120
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121339904 unmapped: 32407552 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:06:33.995994+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121339904 unmapped: 32407552 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:06:34.996152+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121339904 unmapped: 32407552 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:06:35.996295+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121339904 unmapped: 32407552 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:06:36.996454+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121339904 unmapped: 32407552 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:06:37.996617+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225505 data_alloc: 218103808 data_used: 12165120
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121339904 unmapped: 32407552 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:06:38.996808+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121339904 unmapped: 32407552 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:06:39.996977+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121339904 unmapped: 32407552 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:06:40.997485+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121339904 unmapped: 32407552 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:06:41.997645+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121348096 unmapped: 32399360 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:06:42.997832+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225505 data_alloc: 218103808 data_used: 12165120
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121348096 unmapped: 32399360 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:06:43.998065+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121348096 unmapped: 32399360 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:06:44.998692+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121348096 unmapped: 32399360 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:06:45.999075+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121348096 unmapped: 32399360 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:06:46.999327+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121348096 unmapped: 32399360 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:06:47.999538+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225505 data_alloc: 218103808 data_used: 12165120
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121348096 unmapped: 32399360 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:06:48.999936+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121348096 unmapped: 32399360 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:06:50.000100+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121348096 unmapped: 32399360 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:06:51.000325+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121348096 unmapped: 32399360 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:06:52.000533+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121348096 unmapped: 32399360 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:06:53.000713+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225505 data_alloc: 218103808 data_used: 12165120
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121348096 unmapped: 32399360 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:06:54.000997+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121356288 unmapped: 32391168 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:06:55.001359+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121356288 unmapped: 32391168 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:06:56.001634+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121356288 unmapped: 32391168 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:06:57.001953+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121356288 unmapped: 32391168 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:06:58.002199+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225505 data_alloc: 218103808 data_used: 12165120
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121356288 unmapped: 32391168 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:06:59.002501+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121356288 unmapped: 32391168 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:07:00.002746+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121356288 unmapped: 32391168 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:07:01.002981+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121356288 unmapped: 32391168 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:07:02.003237+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121356288 unmapped: 32391168 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:07:03.003424+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225505 data_alloc: 218103808 data_used: 12165120
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121356288 unmapped: 32391168 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:07:04.003605+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121364480 unmapped: 32382976 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:07:05.003771+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121364480 unmapped: 32382976 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:07:06.003954+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121364480 unmapped: 32382976 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:07:07.004207+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121364480 unmapped: 32382976 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:07:08.004377+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225505 data_alloc: 218103808 data_used: 12165120
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121364480 unmapped: 32382976 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:07:09.004641+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121364480 unmapped: 32382976 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:07:10.004801+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121364480 unmapped: 32382976 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:07:11.005071+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121364480 unmapped: 32382976 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:07:12.005239+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121364480 unmapped: 32382976 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:07:13.005499+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225505 data_alloc: 218103808 data_used: 12165120
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121364480 unmapped: 32382976 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:07:14.005660+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121364480 unmapped: 32382976 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:07:15.005826+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:07:16.006010+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121364480 unmapped: 32382976 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:07:17.007026+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121364480 unmapped: 32382976 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:07:18.007529+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 32374784 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225505 data_alloc: 218103808 data_used: 12165120
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:07:19.008564+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 32374784 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:07:20.009181+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 32374784 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:07:21.009933+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 32374784 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:07:22.010149+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 206.406784058s of 206.570480347s, submitted: 57
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 32374784 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:07:23.010776+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 32374784 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225505 data_alloc: 218103808 data_used: 12165120
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:07:24.010986+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121511936 unmapped: 32235520 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:07:25.011493+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bf0fc400 session 0x5634c06e61e0
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634c025e000
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121782272 unmapped: 31965184 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:07:26.011711+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121782272 unmapped: 31965184 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:07:27.011900+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121782272 unmapped: 31965184 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:07:28.012168+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121782272 unmapped: 31965184 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225505 data_alloc: 218103808 data_used: 12165120
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:07:29.012430+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121782272 unmapped: 31965184 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:07:30.012604+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121790464 unmapped: 31956992 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:07:31.012741+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121790464 unmapped: 31956992 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:07:32.012918+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121790464 unmapped: 31956992 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:07:33.013215+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121790464 unmapped: 31956992 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225505 data_alloc: 218103808 data_used: 12165120
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:07:34.013420+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121790464 unmapped: 31956992 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:07:35.013731+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121798656 unmapped: 31948800 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:07:36.014011+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121798656 unmapped: 31948800 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:07:37.014204+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121798656 unmapped: 31948800 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:07:38.014468+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121798656 unmapped: 31948800 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225505 data_alloc: 218103808 data_used: 12165120
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:07:39.014763+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121798656 unmapped: 31948800 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:07:40.014924+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121798656 unmapped: 31948800 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:07:41.015128+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121798656 unmapped: 31948800 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:07:42.015305+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121806848 unmapped: 31940608 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:07:43.015620+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121806848 unmapped: 31940608 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225505 data_alloc: 218103808 data_used: 12165120
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:07:44.015849+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121806848 unmapped: 31940608 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:07:45.016014+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121806848 unmapped: 31940608 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:07:46.016219+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121815040 unmapped: 31932416 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:07:47.016447+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121815040 unmapped: 31932416 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:07:48.016658+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121815040 unmapped: 31932416 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225505 data_alloc: 218103808 data_used: 12165120
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:07:49.016933+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121815040 unmapped: 31932416 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:07:50.017080+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121823232 unmapped: 31924224 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:07:51.017839+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121823232 unmapped: 31924224 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:07:52.018229+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121823232 unmapped: 31924224 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:07:53.018426+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121823232 unmapped: 31924224 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225505 data_alloc: 218103808 data_used: 12165120
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:07:54.018959+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121831424 unmapped: 31916032 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:07:55.019485+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121839616 unmapped: 31907840 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:07:56.019840+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121839616 unmapped: 31907840 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:07:57.019999+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121839616 unmapped: 31907840 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:07:58.020160+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121839616 unmapped: 31907840 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225505 data_alloc: 218103808 data_used: 12165120
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:07:59.020431+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121839616 unmapped: 31907840 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:08:00.020602+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121839616 unmapped: 31907840 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:08:01.020907+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121839616 unmapped: 31907840 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:08:02.021161+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121839616 unmapped: 31907840 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:08:03.021447+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121839616 unmapped: 31907840 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225505 data_alloc: 218103808 data_used: 12165120
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:08:04.021754+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121839616 unmapped: 31907840 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:08:05.022020+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121839616 unmapped: 31907840 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:08:06.022265+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121839616 unmapped: 31907840 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:08:07.022540+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121839616 unmapped: 31907840 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:08:08.022764+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121839616 unmapped: 31907840 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225505 data_alloc: 218103808 data_used: 12165120
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:08:09.023084+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121839616 unmapped: 31907840 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:08:10.023393+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121839616 unmapped: 31907840 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:08:11.023684+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121839616 unmapped: 31907840 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:08:12.023919+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121839616 unmapped: 31907840 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:08:13.024163+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121839616 unmapped: 31907840 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225505 data_alloc: 218103808 data_used: 12165120
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:08:14.024360+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121847808 unmapped: 31899648 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:08:15.024590+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121847808 unmapped: 31899648 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:08:16.024830+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121847808 unmapped: 31899648 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:08:17.025078+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121847808 unmapped: 31899648 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:08:18.025275+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121847808 unmapped: 31899648 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225505 data_alloc: 218103808 data_used: 12165120
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:08:19.025506+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121847808 unmapped: 31899648 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:08:20.025685+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121847808 unmapped: 31899648 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:08:21.025913+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121847808 unmapped: 31899648 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:08:22.026071+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121847808 unmapped: 31899648 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:08:23.026465+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121847808 unmapped: 31899648 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225505 data_alloc: 218103808 data_used: 12165120
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:08:24.026735+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121847808 unmapped: 31899648 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:08:25.026888+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121847808 unmapped: 31899648 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:08:26.027478+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121847808 unmapped: 31899648 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:08:27.028025+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121847808 unmapped: 31899648 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:08:28.028505+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121847808 unmapped: 31899648 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225505 data_alloc: 218103808 data_used: 12165120
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:08:29.028946+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121847808 unmapped: 31899648 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:08:30.029343+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121847808 unmapped: 31899648 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:08:31.029671+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121847808 unmapped: 31899648 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:08:32.029890+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121847808 unmapped: 31899648 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:08:33.030052+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121847808 unmapped: 31899648 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225505 data_alloc: 218103808 data_used: 12165120
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:08:34.030363+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121847808 unmapped: 31899648 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:08:35.030670+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121847808 unmapped: 31899648 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:08:36.030974+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121847808 unmapped: 31899648 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:08:37.031241+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121847808 unmapped: 31899648 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:08:38.031489+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121856000 unmapped: 31891456 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225505 data_alloc: 218103808 data_used: 12165120
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:08:39.031788+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121856000 unmapped: 31891456 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:08:40.032004+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121856000 unmapped: 31891456 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:08:41.032212+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121856000 unmapped: 31891456 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:08:42.032359+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121856000 unmapped: 31891456 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:08:43.032543+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121856000 unmapped: 31891456 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225505 data_alloc: 218103808 data_used: 12165120
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:08:44.032762+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121856000 unmapped: 31891456 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:08:45.032941+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121856000 unmapped: 31891456 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:08:46.033116+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121856000 unmapped: 31891456 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:08:47.033374+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121856000 unmapped: 31891456 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:08:48.033703+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121864192 unmapped: 31883264 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225505 data_alloc: 218103808 data_used: 12165120
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:08:49.033917+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121864192 unmapped: 31883264 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:08:50.034111+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121864192 unmapped: 31883264 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:08:51.034271+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121864192 unmapped: 31883264 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:08:52.034465+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121864192 unmapped: 31883264 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:08:53.034689+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121864192 unmapped: 31883264 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225505 data_alloc: 218103808 data_used: 12165120
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:08:54.034880+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121864192 unmapped: 31883264 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:08:55.035097+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121864192 unmapped: 31883264 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:08:56.035330+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121864192 unmapped: 31883264 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:08:57.035551+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121880576 unmapped: 31866880 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:08:58.035732+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121880576 unmapped: 31866880 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225505 data_alloc: 218103808 data_used: 12165120
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:08:59.035993+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121880576 unmapped: 31866880 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:09:00.036222+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121880576 unmapped: 31866880 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:09:01.036388+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121880576 unmapped: 31866880 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:09:02.036575+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121880576 unmapped: 31866880 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:09:03.036701+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121880576 unmapped: 31866880 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225505 data_alloc: 218103808 data_used: 12165120
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:09:04.036906+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121880576 unmapped: 31866880 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:09:05.037083+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121880576 unmapped: 31866880 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:09:06.037315+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121880576 unmapped: 31866880 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:09:07.037513+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121888768 unmapped: 31858688 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:09:08.037688+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121888768 unmapped: 31858688 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225505 data_alloc: 218103808 data_used: 12165120
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:09:09.037901+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121888768 unmapped: 31858688 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:09:10.038076+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121888768 unmapped: 31858688 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:09:11.038307+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121888768 unmapped: 31858688 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:09:12.038473+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121888768 unmapped: 31858688 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:09:13.038716+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121888768 unmapped: 31858688 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225505 data_alloc: 218103808 data_used: 12165120
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:09:14.038886+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121888768 unmapped: 31858688 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:09:15.039095+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121888768 unmapped: 31858688 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:09:16.039268+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121888768 unmapped: 31858688 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:09:17.039494+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121888768 unmapped: 31858688 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:09:18.039696+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121896960 unmapped: 31850496 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:09:19.039946+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225505 data_alloc: 218103808 data_used: 12165120
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121896960 unmapped: 31850496 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:09:20.040111+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121896960 unmapped: 31850496 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:09:21.040319+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121896960 unmapped: 31850496 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:09:22.040551+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121896960 unmapped: 31850496 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:09:23.041007+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121896960 unmapped: 31850496 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:09:24.041201+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225505 data_alloc: 218103808 data_used: 12165120
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121896960 unmapped: 31850496 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:09:25.041446+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121896960 unmapped: 31850496 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:09:26.041683+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121896960 unmapped: 31850496 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:09:27.041857+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121896960 unmapped: 31850496 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:09:28.042069+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121896960 unmapped: 31850496 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:09:29.042852+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225505 data_alloc: 218103808 data_used: 12165120
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121905152 unmapped: 31842304 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:09:30.043469+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121905152 unmapped: 31842304 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:09:31.044194+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121905152 unmapped: 31842304 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:09:32.044571+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121905152 unmapped: 31842304 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:09:33.045192+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121905152 unmapped: 31842304 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:09:34.045703+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225505 data_alloc: 218103808 data_used: 12165120
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121905152 unmapped: 31842304 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:09:35.046190+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121905152 unmapped: 31842304 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:09:36.046664+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121905152 unmapped: 31842304 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:09:37.047052+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121905152 unmapped: 31842304 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:09:38.047468+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121905152 unmapped: 31842304 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:09:39.047889+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225505 data_alloc: 218103808 data_used: 12165120
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121905152 unmapped: 31842304 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:09:40.048096+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121905152 unmapped: 31842304 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:09:41.048465+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121913344 unmapped: 31834112 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:09:42.048719+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121913344 unmapped: 31834112 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:09:43.049003+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121913344 unmapped: 31834112 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:09:44.049353+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225505 data_alloc: 218103808 data_used: 12165120
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121913344 unmapped: 31834112 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:09:45.049593+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121913344 unmapped: 31834112 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:09:46.049815+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121913344 unmapped: 31834112 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:09:47.050070+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121913344 unmapped: 31834112 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:09:48.050241+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121913344 unmapped: 31834112 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:09:49.050506+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225505 data_alloc: 218103808 data_used: 12165120
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:09:50.050740+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121921536 unmapped: 31825920 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:09:51.050924+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121921536 unmapped: 31825920 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:09:52.051167+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121921536 unmapped: 31825920 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:09:53.051501+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121921536 unmapped: 31825920 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:09:54.051715+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121921536 unmapped: 31825920 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225505 data_alloc: 218103808 data_used: 12165120
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:09:55.051988+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121921536 unmapped: 31825920 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:09:56.052243+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121921536 unmapped: 31825920 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:09:57.052494+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121921536 unmapped: 31825920 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:09:58.052745+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121921536 unmapped: 31825920 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:09:59.053003+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121921536 unmapped: 31825920 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225505 data_alloc: 218103808 data_used: 12165120
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:10:00.053228+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121921536 unmapped: 31825920 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:10:01.053452+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121921536 unmapped: 31825920 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:10:02.053653+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121929728 unmapped: 31817728 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:10:03.053869+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121929728 unmapped: 31817728 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:10:04.054051+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121929728 unmapped: 31817728 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225505 data_alloc: 218103808 data_used: 12165120
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:10:05.054274+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121929728 unmapped: 31817728 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:10:06.054500+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121929728 unmapped: 31817728 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:10:07.054711+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121929728 unmapped: 31817728 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:10:08.054861+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121929728 unmapped: 31817728 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:10:09.055025+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121929728 unmapped: 31817728 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225505 data_alloc: 218103808 data_used: 12165120
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:10:10.055168+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121929728 unmapped: 31817728 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:10:11.055333+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121929728 unmapped: 31817728 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:10:12.055524+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121937920 unmapped: 31809536 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:10:13.055703+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121937920 unmapped: 31809536 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:10:14.055918+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121937920 unmapped: 31809536 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225505 data_alloc: 218103808 data_used: 12165120
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:10:15.056127+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121937920 unmapped: 31809536 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:10:16.056297+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121937920 unmapped: 31809536 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:10:17.056575+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121937920 unmapped: 31809536 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:10:18.056729+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121937920 unmapped: 31809536 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:10:19.056953+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121937920 unmapped: 31809536 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225505 data_alloc: 218103808 data_used: 12165120
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:10:20.057165+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121937920 unmapped: 31809536 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:10:21.057372+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121937920 unmapped: 31809536 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:10:22.057515+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121937920 unmapped: 31809536 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:10:23.057669+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121937920 unmapped: 31809536 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:10:24.057878+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121937920 unmapped: 31809536 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225505 data_alloc: 218103808 data_used: 12165120
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:10:25.058114+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121937920 unmapped: 31809536 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:10:26.058307+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121946112 unmapped: 31801344 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:10:27.058509+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121946112 unmapped: 31801344 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:10:28.058677+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121946112 unmapped: 31801344 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:10:29.058931+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121946112 unmapped: 31801344 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225505 data_alloc: 218103808 data_used: 12165120
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:10:30.059107+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121946112 unmapped: 31801344 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:10:31.059292+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121946112 unmapped: 31801344 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:10:32.059456+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121946112 unmapped: 31801344 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:10:33.059584+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121946112 unmapped: 31801344 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:10:34.059688+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121946112 unmapped: 31801344 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225505 data_alloc: 218103808 data_used: 12165120
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:10:35.059884+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121946112 unmapped: 31801344 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:10:36.060026+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121946112 unmapped: 31801344 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:10:37.060186+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121946112 unmapped: 31801344 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:10:38.060366+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121954304 unmapped: 31793152 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:10:39.060634+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121954304 unmapped: 31793152 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225505 data_alloc: 218103808 data_used: 12165120
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:10:40.060820+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121954304 unmapped: 31793152 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:10:41.060968+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121954304 unmapped: 31793152 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:10:42.061095+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121954304 unmapped: 31793152 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:10:43.061268+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121954304 unmapped: 31793152 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:10:44.061444+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121954304 unmapped: 31793152 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225505 data_alloc: 218103808 data_used: 12165120
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:10:45.061689+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121954304 unmapped: 31793152 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:10:46.061824+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121954304 unmapped: 31793152 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:10:47.061950+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121954304 unmapped: 31793152 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:10:48.062091+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121954304 unmapped: 31793152 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:10:49.062273+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121962496 unmapped: 31784960 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225505 data_alloc: 218103808 data_used: 12165120
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:10:50.062509+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121970688 unmapped: 31776768 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:10:51.062656+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121970688 unmapped: 31776768 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:10:52.062845+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121970688 unmapped: 31776768 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:10:53.063043+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121970688 unmapped: 31776768 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:10:54.063231+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121970688 unmapped: 31776768 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225505 data_alloc: 218103808 data_used: 12165120
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:10:55.063467+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121970688 unmapped: 31776768 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:10:56.063732+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121970688 unmapped: 31776768 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:10:57.063911+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121970688 unmapped: 31776768 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:10:58.064083+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121970688 unmapped: 31776768 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:10:59.064305+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121970688 unmapped: 31776768 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225505 data_alloc: 218103808 data_used: 12165120
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:11:00.064468+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121970688 unmapped: 31776768 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:11:01.064616+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121970688 unmapped: 31776768 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:11:02.064760+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121970688 unmapped: 31776768 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:11:03.064923+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121970688 unmapped: 31776768 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets getting new tickets!
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:11:04.065173+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _finish_auth 0
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:11:04.066340+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121970688 unmapped: 31776768 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225505 data_alloc: 218103808 data_used: 12165120
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:11:05.065327+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121970688 unmapped: 31776768 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:11:06.065453+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121978880 unmapped: 31768576 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:11:07.065641+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121978880 unmapped: 31768576 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:11:08.065817+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121978880 unmapped: 31768576 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:11:09.066026+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121978880 unmapped: 31768576 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225505 data_alloc: 218103808 data_used: 12165120
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:11:10.066171+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121978880 unmapped: 31768576 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:11:11.066324+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121978880 unmapped: 31768576 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:11:12.066472+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121978880 unmapped: 31768576 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:11:13.066593+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121978880 unmapped: 31768576 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:11:14.066765+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121978880 unmapped: 31768576 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225505 data_alloc: 218103808 data_used: 12165120
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:11:15.066961+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121978880 unmapped: 31768576 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:11:16.067115+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121978880 unmapped: 31768576 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:11:17.067250+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121978880 unmapped: 31768576 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:11:18.067367+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121987072 unmapped: 31760384 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:11:19.067565+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121987072 unmapped: 31760384 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225505 data_alloc: 218103808 data_used: 12165120
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:11:20.067703+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121987072 unmapped: 31760384 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:11:21.067892+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121987072 unmapped: 31760384 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:11:22.068095+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121987072 unmapped: 31760384 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:11:23.068263+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121987072 unmapped: 31760384 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:11:24.068508+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121987072 unmapped: 31760384 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225505 data_alloc: 218103808 data_used: 12165120
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:11:25.068653+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121987072 unmapped: 31760384 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:11:26.068815+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121987072 unmapped: 31760384 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:11:27.068978+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121987072 unmapped: 31760384 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:11:28.069168+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121987072 unmapped: 31760384 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:11:29.069363+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121987072 unmapped: 31760384 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225505 data_alloc: 218103808 data_used: 12165120
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:11:30.069579+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122003456 unmapped: 31744000 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:11:31.069737+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122003456 unmapped: 31744000 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:11:32.069917+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122003456 unmapped: 31744000 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:11:33.070072+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122003456 unmapped: 31744000 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:11:34.070231+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122003456 unmapped: 31744000 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225505 data_alloc: 218103808 data_used: 12165120
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:11:35.070384+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122003456 unmapped: 31744000 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:11:36.070752+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122003456 unmapped: 31744000 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:11:37.071029+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122003456 unmapped: 31744000 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:11:38.071334+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122003456 unmapped: 31744000 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:11:39.072339+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122003456 unmapped: 31744000 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225505 data_alloc: 218103808 data_used: 12165120
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:11:40.072605+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122003456 unmapped: 31744000 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:11:41.072783+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122003456 unmapped: 31744000 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:11:42.073028+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122003456 unmapped: 31744000 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:11:43.073254+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122003456 unmapped: 31744000 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:11:44.073463+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122011648 unmapped: 31735808 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225505 data_alloc: 218103808 data_used: 12165120
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:11:45.073665+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122011648 unmapped: 31735808 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:11:46.073921+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122011648 unmapped: 31735808 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:11:47.074148+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122011648 unmapped: 31735808 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:11:48.074325+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122011648 unmapped: 31735808 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:11:49.074536+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122011648 unmapped: 31735808 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225505 data_alloc: 218103808 data_used: 12165120
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:11:50.074780+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122011648 unmapped: 31735808 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:11:51.075010+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122011648 unmapped: 31735808 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:11:52.075211+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122011648 unmapped: 31735808 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:11:53.075434+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122028032 unmapped: 31719424 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:11:54.075646+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122028032 unmapped: 31719424 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225505 data_alloc: 218103808 data_used: 12165120
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:11:55.075860+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122028032 unmapped: 31719424 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:11:56.076071+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122028032 unmapped: 31719424 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:11:57.076297+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122028032 unmapped: 31719424 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:11:58.076469+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122028032 unmapped: 31719424 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:11:59.076708+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122028032 unmapped: 31719424 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225505 data_alloc: 218103808 data_used: 12165120
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:12:00.076859+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122028032 unmapped: 31719424 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:12:01.077035+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122028032 unmapped: 31719424 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:12:02.077193+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122028032 unmapped: 31719424 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:12:03.077382+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122036224 unmapped: 31711232 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:12:04.077601+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122036224 unmapped: 31711232 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225505 data_alloc: 218103808 data_used: 12165120
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:12:05.077805+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122036224 unmapped: 31711232 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:12:06.077955+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122036224 unmapped: 31711232 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:12:07.078175+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122036224 unmapped: 31711232 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:12:08.078507+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122036224 unmapped: 31711232 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:12:09.078869+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122036224 unmapped: 31711232 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225505 data_alloc: 218103808 data_used: 12165120
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:12:10.079094+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122036224 unmapped: 31711232 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:12:11.079366+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122036224 unmapped: 31711232 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:12:12.079575+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122036224 unmapped: 31711232 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:12:13.079763+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122036224 unmapped: 31711232 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:12:14.080001+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122036224 unmapped: 31711232 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225505 data_alloc: 218103808 data_used: 12165120
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:12:15.080197+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122036224 unmapped: 31711232 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:12:16.080381+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122036224 unmapped: 31711232 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:12:17.080604+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122036224 unmapped: 31711232 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:12:18.080840+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122036224 unmapped: 31711232 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:12:19.081064+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122044416 unmapped: 31703040 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225505 data_alloc: 218103808 data_used: 12165120
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:12:20.081283+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122044416 unmapped: 31703040 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:12:21.081478+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122044416 unmapped: 31703040 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:12:22.081667+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122044416 unmapped: 31703040 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:12:23.081826+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122044416 unmapped: 31703040 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:12:24.081988+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122044416 unmapped: 31703040 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225505 data_alloc: 218103808 data_used: 12165120
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:12:25.082139+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122044416 unmapped: 31703040 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:12:26.082281+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122044416 unmapped: 31703040 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:12:27.082442+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122044416 unmapped: 31703040 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:12:28.082600+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122044416 unmapped: 31703040 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:12:29.082772+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122044416 unmapped: 31703040 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225505 data_alloc: 218103808 data_used: 12165120
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:12:30.082927+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122044416 unmapped: 31703040 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:12:31.083094+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122052608 unmapped: 31694848 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:12:32.083246+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122052608 unmapped: 31694848 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:12:33.083429+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122052608 unmapped: 31694848 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:12:34.083593+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122052608 unmapped: 31694848 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225505 data_alloc: 218103808 data_used: 12165120
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:12:35.083789+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122052608 unmapped: 31694848 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:12:36.083954+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122052608 unmapped: 31694848 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:12:37.084102+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122052608 unmapped: 31694848 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:12:38.084262+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122052608 unmapped: 31694848 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:12:39.084484+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122060800 unmapped: 31686656 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225505 data_alloc: 218103808 data_used: 12165120
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:12:40.084655+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122060800 unmapped: 31686656 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:12:41.084892+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122060800 unmapped: 31686656 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:12:42.085148+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122060800 unmapped: 31686656 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:12:43.085357+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122060800 unmapped: 31686656 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:12:44.085481+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122060800 unmapped: 31686656 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:12:45.085661+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225505 data_alloc: 218103808 data_used: 12165120
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122060800 unmapped: 31686656 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:12:46.086392+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122060800 unmapped: 31686656 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:12:47.086839+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122060800 unmapped: 31686656 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:12:48.087538+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122060800 unmapped: 31686656 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:12:49.088218+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122060800 unmapped: 31686656 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:12:50.088349+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225505 data_alloc: 218103808 data_used: 12165120
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122060800 unmapped: 31686656 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:12:51.088561+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122060800 unmapped: 31686656 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:12:52.088912+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122060800 unmapped: 31686656 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:12:53.089344+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122060800 unmapped: 31686656 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:12:54.089587+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122060800 unmapped: 31686656 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:12:55.089755+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225505 data_alloc: 218103808 data_used: 12165120
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122060800 unmapped: 31686656 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:12:56.090028+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122068992 unmapped: 31678464 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:12:57.090227+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122068992 unmapped: 31678464 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:12:58.090459+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122068992 unmapped: 31678464 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:12:59.090757+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634c0689000 session 0x5634c055b4a0
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634bf032800
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122068992 unmapped: 31678464 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:13:00.090940+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225505 data_alloc: 218103808 data_used: 12165120
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122068992 unmapped: 31678464 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:13:01.091070+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:13:02.091284+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122068992 unmapped: 31678464 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:13:03.091505+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122068992 unmapped: 31678464 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:13:04.091705+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122068992 unmapped: 31678464 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:13:05.091930+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122068992 unmapped: 31678464 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225505 data_alloc: 218103808 data_used: 12165120
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:13:06.092074+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122077184 unmapped: 31670272 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:13:07.092261+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122077184 unmapped: 31670272 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:13:08.092514+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122077184 unmapped: 31670272 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:13:09.092749+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122077184 unmapped: 31670272 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:13:10.092973+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122077184 unmapped: 31670272 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225505 data_alloc: 218103808 data_used: 12165120
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:13:11.093158+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122077184 unmapped: 31670272 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:13:12.093470+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122077184 unmapped: 31670272 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:13:13.093672+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122077184 unmapped: 31670272 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:13:14.093887+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122077184 unmapped: 31670272 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:13:15.094076+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122085376 unmapped: 31662080 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225505 data_alloc: 218103808 data_used: 12165120
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:13:16.094231+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122085376 unmapped: 31662080 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:13:17.094392+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122085376 unmapped: 31662080 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:13:18.094630+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122085376 unmapped: 31662080 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:13:19.094873+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122085376 unmapped: 31662080 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:13:20.095084+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122085376 unmapped: 31662080 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225505 data_alloc: 218103808 data_used: 12165120
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:13:21.095284+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122085376 unmapped: 31662080 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:13:22.095454+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122085376 unmapped: 31662080 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:13:23.095672+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122085376 unmapped: 31662080 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:13:24.095843+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122085376 unmapped: 31662080 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:13:25.096067+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122085376 unmapped: 31662080 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225505 data_alloc: 218103808 data_used: 12165120
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:13:26.096257+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122085376 unmapped: 31662080 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:13:27.096441+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122093568 unmapped: 31653888 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:13:28.096667+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122093568 unmapped: 31653888 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:13:29.096930+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122093568 unmapped: 31653888 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:13:30.097176+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122101760 unmapped: 31645696 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225505 data_alloc: 218103808 data_used: 12165120
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:13:31.097532+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122101760 unmapped: 31645696 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:13:32.097725+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122101760 unmapped: 31645696 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:13:33.097888+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122101760 unmapped: 31645696 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:13:34.098105+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122101760 unmapped: 31645696 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:13:35.098373+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122101760 unmapped: 31645696 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225505 data_alloc: 218103808 data_used: 12165120
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:13:36.098656+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122101760 unmapped: 31645696 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:13:37.098830+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122101760 unmapped: 31645696 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:13:38.099021+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122101760 unmapped: 31645696 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:13:39.099329+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122101760 unmapped: 31645696 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:13:40.099524+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122101760 unmapped: 31645696 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225505 data_alloc: 218103808 data_used: 12165120
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:13:41.099694+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122101760 unmapped: 31645696 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:13:42.099898+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122109952 unmapped: 31637504 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:13:43.100074+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122109952 unmapped: 31637504 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:13:44.100333+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122109952 unmapped: 31637504 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:13:45.100516+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122109952 unmapped: 31637504 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225505 data_alloc: 218103808 data_used: 12165120
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:13:46.100692+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122109952 unmapped: 31637504 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:13:47.100840+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122109952 unmapped: 31637504 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:13:48.101024+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122109952 unmapped: 31637504 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:13:49.101239+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122109952 unmapped: 31637504 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:13:50.101434+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122109952 unmapped: 31637504 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225505 data_alloc: 218103808 data_used: 12165120
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:13:51.101592+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122109952 unmapped: 31637504 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:13:52.101777+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122109952 unmapped: 31637504 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:13:53.102014+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122109952 unmapped: 31637504 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:13:54.102203+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122109952 unmapped: 31637504 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:13:55.102462+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122118144 unmapped: 31629312 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225505 data_alloc: 218103808 data_used: 12165120
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:13:56.102685+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122118144 unmapped: 31629312 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:13:57.102964+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122118144 unmapped: 31629312 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:13:58.103192+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122118144 unmapped: 31629312 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:13:59.103639+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122118144 unmapped: 31629312 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:14:00.103816+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122118144 unmapped: 31629312 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225505 data_alloc: 218103808 data_used: 12165120
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:14:01.104320+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122118144 unmapped: 31629312 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:14:02.104540+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122134528 unmapped: 31612928 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:14:03.104740+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122134528 unmapped: 31612928 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:14:04.104975+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122134528 unmapped: 31612928 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:14:05.105204+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122134528 unmapped: 31612928 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225505 data_alloc: 218103808 data_used: 12165120
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:14:06.105391+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122134528 unmapped: 31612928 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:14:07.105658+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122134528 unmapped: 31612928 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:14:08.105881+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122134528 unmapped: 31612928 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:14:09.106138+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122134528 unmapped: 31612928 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:14:10.106301+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122134528 unmapped: 31612928 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225505 data_alloc: 218103808 data_used: 12165120
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:14:11.106513+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122134528 unmapped: 31612928 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:14:12.106656+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122134528 unmapped: 31612928 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:14:13.106815+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122134528 unmapped: 31612928 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:14:14.106996+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122134528 unmapped: 31612928 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:14:15.107232+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122134528 unmapped: 31612928 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225505 data_alloc: 218103808 data_used: 12165120
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:14:16.107461+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122134528 unmapped: 31612928 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:14:17.107671+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bf538400 session 0x5634bd20ad20
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634beabcc00
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bd1e9000 session 0x5634bcfe03c0
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: handle_auth_request added challenge on 0x5634bf538400
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122142720 unmapped: 31604736 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:14:18.107851+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122150912 unmapped: 31596544 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:14:19.108129+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122150912 unmapped: 31596544 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:14:20.108335+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122159104 unmapped: 31588352 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225505 data_alloc: 218103808 data_used: 12165120
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:14:21.108540+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122159104 unmapped: 31588352 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:14:22.108680+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122159104 unmapped: 31588352 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:14:23.108811+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122159104 unmapped: 31588352 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:14:24.108937+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122159104 unmapped: 31588352 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:14:25.109063+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122159104 unmapped: 31588352 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225505 data_alloc: 218103808 data_used: 12165120
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:14:26.109328+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122159104 unmapped: 31588352 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:14:27.109504+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122159104 unmapped: 31588352 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:14:28.109657+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122159104 unmapped: 31588352 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:14:29.109828+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122159104 unmapped: 31588352 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:14:30.109973+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122159104 unmapped: 31588352 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225505 data_alloc: 218103808 data_used: 12165120
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:14:31.110154+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122159104 unmapped: 31588352 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:14:32.110284+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122159104 unmapped: 31588352 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:14:33.110509+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122167296 unmapped: 31580160 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:14:34.110661+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122167296 unmapped: 31580160 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:14:35.110822+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122167296 unmapped: 31580160 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225505 data_alloc: 218103808 data_used: 12165120
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:14:36.111003+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122167296 unmapped: 31580160 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:14:37.111147+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122167296 unmapped: 31580160 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:14:38.111346+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122167296 unmapped: 31580160 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:14:39.111566+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122167296 unmapped: 31580160 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:14:40.111741+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122167296 unmapped: 31580160 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225505 data_alloc: 218103808 data_used: 12165120
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:14:41.112032+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122167296 unmapped: 31580160 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:14:42.112183+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122167296 unmapped: 31580160 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:14:43.112371+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122167296 unmapped: 31580160 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:14:44.112516+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122167296 unmapped: 31580160 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:14:45.112724+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122167296 unmapped: 31580160 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225505 data_alloc: 218103808 data_used: 12165120
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:14:46.112855+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122175488 unmapped: 31571968 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:14:47.113037+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122175488 unmapped: 31571968 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:14:48.113217+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122183680 unmapped: 31563776 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:14:49.113461+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122183680 unmapped: 31563776 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:14:50.113621+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122183680 unmapped: 31563776 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225505 data_alloc: 218103808 data_used: 12165120
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:14:51.113840+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122183680 unmapped: 31563776 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:14:52.114170+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122191872 unmapped: 31555584 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:14:53.115833+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122191872 unmapped: 31555584 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:14:54.116470+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122191872 unmapped: 31555584 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:14:55.117568+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122191872 unmapped: 31555584 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225505 data_alloc: 218103808 data_used: 12165120
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:14:56.118329+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122191872 unmapped: 31555584 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:14:57.118526+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122191872 unmapped: 31555584 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:14:58.118630+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122191872 unmapped: 31555584 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:14:59.118806+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122191872 unmapped: 31555584 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:15:00.118959+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122200064 unmapped: 31547392 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-1 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-1 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225505 data_alloc: 218103808 data_used: 12165120
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:15:01.119112+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122216448 unmapped: 31531008 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: do_command 'config diff' '{prefix=config diff}'
Nov 24 10:15:34 compute-1 ceph-osd[77497]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Nov 24 10:15:34 compute-1 ceph-osd[77497]: do_command 'config show' '{prefix=config show}'
Nov 24 10:15:34 compute-1 ceph-osd[77497]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Nov 24 10:15:34 compute-1 ceph-osd[77497]: do_command 'counter dump' '{prefix=counter dump}'
Nov 24 10:15:34 compute-1 ceph-osd[77497]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Nov 24 10:15:34 compute-1 ceph-osd[77497]: do_command 'counter schema' '{prefix=counter schema}'
Nov 24 10:15:34 compute-1 ceph-osd[77497]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:15:02.119269+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122183680 unmapped: 31563776 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:15:03.119432+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122544128 unmapped: 31203328 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 10:15:34 compute-1 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: tick
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-1 ceph-osd[77497]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:15:04.119580+0000)
Nov 24 10:15:34 compute-1 ceph-osd[77497]: do_command 'log dump' '{prefix=log dump}'
Nov 24 10:15:34 compute-1 sshd-session[257388]: Invalid user user from 164.92.213.168 port 44410
Nov 24 10:15:34 compute-1 sshd-session[257388]: Connection closed by invalid user user 164.92.213.168 port 44410 [preauth]
Nov 24 10:15:34 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr module ls"} v 0)
Nov 24 10:15:34 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1519524842' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Nov 24 10:15:35 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:15:35 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:15:35 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:15:35.039 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:15:35 compute-1 ceph-mon[80009]: from='client.19056 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:15:35 compute-1 ceph-mon[80009]: from='client.26827 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:15:35 compute-1 ceph-mon[80009]: from='client.28334 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:15:35 compute-1 ceph-mon[80009]: from='client.? 192.168.122.100:0/2776402292' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Nov 24 10:15:35 compute-1 ceph-mon[80009]: from='client.? 192.168.122.102:0/535281277' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Nov 24 10:15:35 compute-1 ceph-mon[80009]: from='client.19074 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:15:35 compute-1 ceph-mon[80009]: from='client.? 192.168.122.101:0/1486480020' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Nov 24 10:15:35 compute-1 ceph-mon[80009]: from='client.26848 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 10:15:35 compute-1 ceph-mon[80009]: from='client.19077 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:15:35 compute-1 ceph-mon[80009]: from='client.? 192.168.122.100:0/2884202917' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 24 10:15:35 compute-1 ceph-mon[80009]: from='client.? 192.168.122.102:0/225897342' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Nov 24 10:15:35 compute-1 ceph-mon[80009]: from='client.? 192.168.122.101:0/1519524842' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Nov 24 10:15:35 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr services"} v 0)
Nov 24 10:15:35 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2192234632' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 24 10:15:35 compute-1 nova_compute[230010]: 2025-11-24 10:15:35.765 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 10:15:35 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:15:35 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:15:35 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:15:35.841 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:15:36 compute-1 crontab[257616]: (root) LIST (root)
Nov 24 10:15:36 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr versions"} v 0)
Nov 24 10:15:36 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4072984199' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Nov 24 10:15:36 compute-1 ceph-mon[80009]: from='client.19089 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:15:36 compute-1 ceph-mon[80009]: from='client.26866 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 10:15:36 compute-1 ceph-mon[80009]: from='client.28364 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:15:36 compute-1 ceph-mon[80009]: from='client.? 192.168.122.100:0/358214863' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Nov 24 10:15:36 compute-1 ceph-mon[80009]: from='client.19116 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:15:36 compute-1 ceph-mon[80009]: from='client.? 192.168.122.101:0/2192234632' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 24 10:15:36 compute-1 ceph-mon[80009]: from='client.26881 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 10:15:36 compute-1 ceph-mon[80009]: from='client.28382 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:15:36 compute-1 ceph-mon[80009]: from='client.? 192.168.122.102:0/939475987' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Nov 24 10:15:36 compute-1 ceph-mon[80009]: from='client.? 192.168.122.100:0/998320868' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Nov 24 10:15:36 compute-1 ceph-mon[80009]: from='client.19143 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 10:15:36 compute-1 ceph-mon[80009]: pgmap v1471: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:15:36 compute-1 ceph-mon[80009]: from='client.? 192.168.122.102:0/1095355665' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Nov 24 10:15:36 compute-1 ceph-mon[80009]: from='client.? 192.168.122.101:0/4072984199' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Nov 24 10:15:36 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon stat"} v 0)
Nov 24 10:15:36 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2470601515' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Nov 24 10:15:37 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:15:37 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:15:37 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:15:37.042 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:15:37 compute-1 nova_compute[230010]: 2025-11-24 10:15:37.117 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:15:37 compute-1 ceph-mon[80009]: from='client.28400 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 10:15:37 compute-1 ceph-mon[80009]: from='client.26893 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 10:15:37 compute-1 ceph-mon[80009]: from='client.19161 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 10:15:37 compute-1 ceph-mon[80009]: from='client.28412 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 10:15:37 compute-1 ceph-mon[80009]: from='client.? 192.168.122.102:0/2225375590' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Nov 24 10:15:37 compute-1 ceph-mon[80009]: from='client.? 192.168.122.102:0/1163184553' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Nov 24 10:15:37 compute-1 ceph-mon[80009]: from='client.19176 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 10:15:37 compute-1 ceph-mon[80009]: from='client.? 192.168.122.101:0/2470601515' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Nov 24 10:15:37 compute-1 ceph-mon[80009]: from='client.? 192.168.122.100:0/3911979888' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Nov 24 10:15:37 compute-1 ceph-mon[80009]: from='client.? 192.168.122.102:0/2154776243' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Nov 24 10:15:37 compute-1 ceph-mon[80009]: from='client.? 192.168.122.102:0/4022284349' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Nov 24 10:15:37 compute-1 ceph-mon[80009]: from='client.? 192.168.122.100:0/1903975746' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Nov 24 10:15:37 compute-1 ceph-mon[80009]: from='client.? 192.168.122.102:0/4159580487' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 10:15:37 compute-1 ceph-mon[80009]: from='client.? 192.168.122.102:0/510893352' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Nov 24 10:15:37 compute-1 nova_compute[230010]: 2025-11-24 10:15:37.613 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:15:37 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:15:37 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "log last", "channel": "cephadm", "format": "json-pretty"} v 0)
Nov 24 10:15:37 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3419392445' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Nov 24 10:15:37 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:15:37 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:15:37 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:15:37.844 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:15:37 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd crush class ls"} v 0)
Nov 24 10:15:37 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3340040457' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Nov 24 10:15:37 compute-1 systemd[1]: Starting Hostname Service...
Nov 24 10:15:38 compute-1 systemd[1]: Started Hostname Service.
Nov 24 10:15:38 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr dump", "format": "json-pretty"} v 0)
Nov 24 10:15:38 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/946831100' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Nov 24 10:15:38 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd crush dump"} v 0)
Nov 24 10:15:38 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1982131563' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Nov 24 10:15:38 compute-1 ceph-mon[80009]: from='client.28433 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 10:15:38 compute-1 ceph-mon[80009]: from='client.19185 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 10:15:38 compute-1 ceph-mon[80009]: from='client.28454 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 10:15:38 compute-1 ceph-mon[80009]: from='client.? 192.168.122.102:0/965202489' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Nov 24 10:15:38 compute-1 ceph-mon[80009]: from='client.? 192.168.122.100:0/3451260201' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Nov 24 10:15:38 compute-1 ceph-mon[80009]: from='client.? 192.168.122.100:0/4138227759' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Nov 24 10:15:38 compute-1 ceph-mon[80009]: from='client.? 192.168.122.101:0/2726888286' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Nov 24 10:15:38 compute-1 ceph-mon[80009]: from='client.? 192.168.122.102:0/719084064' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Nov 24 10:15:38 compute-1 ceph-mon[80009]: from='client.? 192.168.122.101:0/3419392445' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Nov 24 10:15:38 compute-1 ceph-mon[80009]: pgmap v1472: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:15:38 compute-1 ceph-mon[80009]: from='client.? 192.168.122.100:0/4128979290' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Nov 24 10:15:38 compute-1 ceph-mon[80009]: from='client.? 192.168.122.102:0/3267547667' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Nov 24 10:15:38 compute-1 ceph-mon[80009]: from='client.? 192.168.122.102:0/3081976069' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 10:15:38 compute-1 ceph-mon[80009]: from='client.? 192.168.122.101:0/3340040457' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Nov 24 10:15:38 compute-1 ceph-mon[80009]: from='client.? 192.168.122.100:0/3259949579' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Nov 24 10:15:38 compute-1 ceph-mon[80009]: from='client.? 192.168.122.102:0/1498752122' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Nov 24 10:15:38 compute-1 ceph-mon[80009]: from='client.? 192.168.122.101:0/946831100' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Nov 24 10:15:38 compute-1 ceph-mon[80009]: from='client.? 192.168.122.100:0/154418130' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Nov 24 10:15:38 compute-1 ceph-mon[80009]: from='client.? 192.168.122.102:0/1200745248' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Nov 24 10:15:38 compute-1 ceph-mon[80009]: from='client.? 192.168.122.100:0/1942279183' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Nov 24 10:15:38 compute-1 ceph-mon[80009]: from='client.? 192.168.122.101:0/1982131563' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Nov 24 10:15:38 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr metadata", "format": "json-pretty"} v 0)
Nov 24 10:15:38 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3599221387' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Nov 24 10:15:38 compute-1 nova_compute[230010]: 2025-11-24 10:15:38.760 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 10:15:38 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd crush rule ls"} v 0)
Nov 24 10:15:38 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3983569009' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Nov 24 10:15:39 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:15:39 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:15:39 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:15:39.045 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:15:39 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr module ls", "format": "json-pretty"} v 0)
Nov 24 10:15:39 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/659856770' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Nov 24 10:15:39 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd crush show-tunables"} v 0)
Nov 24 10:15:39 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3371967134' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Nov 24 10:15:39 compute-1 podman[258127]: 2025-11-24 10:15:39.405017003 +0000 UTC m=+0.125263363 container health_status c4a7fece2a8abbd773f184380ebf0443df5298e1c3e7a580495db5c46749acd2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Nov 24 10:15:39 compute-1 ceph-mon[80009]: from='client.? 192.168.122.102:0/3262591560' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Nov 24 10:15:39 compute-1 ceph-mon[80009]: from='client.? 192.168.122.100:0/3129312953' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Nov 24 10:15:39 compute-1 ceph-mon[80009]: from='client.? 192.168.122.101:0/3599221387' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Nov 24 10:15:39 compute-1 ceph-mon[80009]: from='client.? 192.168.122.100:0/2809744926' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Nov 24 10:15:39 compute-1 ceph-mon[80009]: from='client.? 192.168.122.101:0/3983569009' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Nov 24 10:15:39 compute-1 ceph-mon[80009]: from='client.? 192.168.122.102:0/1586408974' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Nov 24 10:15:39 compute-1 ceph-mon[80009]: from='client.? 192.168.122.102:0/1590080519' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Nov 24 10:15:39 compute-1 ceph-mon[80009]: from='client.? 192.168.122.100:0/2390519076' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Nov 24 10:15:39 compute-1 ceph-mon[80009]: from='client.? 192.168.122.101:0/659856770' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Nov 24 10:15:39 compute-1 ceph-mon[80009]: from='client.? 192.168.122.101:0/3371967134' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Nov 24 10:15:39 compute-1 ceph-mon[80009]: from='client.? 192.168.122.100:0/293933469' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Nov 24 10:15:39 compute-1 ceph-mon[80009]: from='client.? 192.168.122.102:0/2092556208' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Nov 24 10:15:39 compute-1 ceph-mon[80009]: from='client.? 192.168.122.100:0/3796992106' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Nov 24 10:15:39 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr services", "format": "json-pretty"} v 0)
Nov 24 10:15:39 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1915736469' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Nov 24 10:15:39 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd crush tree", "show_shadow": true} v 0)
Nov 24 10:15:39 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/305860648' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Nov 24 10:15:39 compute-1 nova_compute[230010]: 2025-11-24 10:15:39.765 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 10:15:39 compute-1 nova_compute[230010]: 2025-11-24 10:15:39.793 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 10:15:39 compute-1 nova_compute[230010]: 2025-11-24 10:15:39.793 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 10:15:39 compute-1 nova_compute[230010]: 2025-11-24 10:15:39.794 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 10:15:39 compute-1 nova_compute[230010]: 2025-11-24 10:15:39.794 230014 DEBUG nova.compute.resource_tracker [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 24 10:15:39 compute-1 nova_compute[230010]: 2025-11-24 10:15:39.794 230014 DEBUG oslo_concurrency.processutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 10:15:39 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:15:39 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:15:39 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:15:39.847 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:15:39 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd erasure-code-profile ls"} v 0)
Nov 24 10:15:39 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1378070210' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Nov 24 10:15:39 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr stat", "format": "json-pretty"} v 0)
Nov 24 10:15:39 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2716061122' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Nov 24 10:15:40 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 24 10:15:40 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/103572047' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 10:15:40 compute-1 nova_compute[230010]: 2025-11-24 10:15:40.279 230014 DEBUG oslo_concurrency.processutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.485s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 10:15:40 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd metadata"} v 0)
Nov 24 10:15:40 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1315247719' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Nov 24 10:15:40 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr versions", "format": "json-pretty"} v 0)
Nov 24 10:15:40 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4202446663' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Nov 24 10:15:40 compute-1 ceph-mon[80009]: from='client.27025 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:15:40 compute-1 ceph-mon[80009]: from='client.? 192.168.122.101:0/1915736469' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Nov 24 10:15:40 compute-1 ceph-mon[80009]: from='client.? 192.168.122.101:0/305860648' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Nov 24 10:15:40 compute-1 ceph-mon[80009]: from='client.27037 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 10:15:40 compute-1 ceph-mon[80009]: from='client.? 192.168.122.100:0/3704579427' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Nov 24 10:15:40 compute-1 ceph-mon[80009]: from='client.? 192.168.122.100:0/410298879' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Nov 24 10:15:40 compute-1 ceph-mon[80009]: from='client.27043 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:15:40 compute-1 ceph-mon[80009]: pgmap v1473: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:15:40 compute-1 ceph-mon[80009]: from='client.? 192.168.122.101:0/1378070210' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Nov 24 10:15:40 compute-1 ceph-mon[80009]: from='client.? 192.168.122.101:0/2716061122' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Nov 24 10:15:40 compute-1 ceph-mon[80009]: from='client.? 192.168.122.100:0/1993983005' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Nov 24 10:15:40 compute-1 ceph-mon[80009]: from='client.? 192.168.122.101:0/103572047' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 10:15:40 compute-1 ceph-mon[80009]: from='client.? 192.168.122.101:0/1315247719' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Nov 24 10:15:40 compute-1 ceph-mon[80009]: from='client.? 192.168.122.101:0/4202446663' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Nov 24 10:15:40 compute-1 nova_compute[230010]: 2025-11-24 10:15:40.475 230014 WARNING nova.virt.libvirt.driver [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 24 10:15:40 compute-1 nova_compute[230010]: 2025-11-24 10:15:40.476 230014 DEBUG nova.compute.resource_tracker [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4547MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 24 10:15:40 compute-1 nova_compute[230010]: 2025-11-24 10:15:40.476 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 10:15:40 compute-1 nova_compute[230010]: 2025-11-24 10:15:40.477 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 10:15:40 compute-1 nova_compute[230010]: 2025-11-24 10:15:40.546 230014 DEBUG nova.compute.resource_tracker [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 24 10:15:40 compute-1 nova_compute[230010]: 2025-11-24 10:15:40.551 230014 DEBUG nova.compute.resource_tracker [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 24 10:15:40 compute-1 nova_compute[230010]: 2025-11-24 10:15:40.571 230014 DEBUG oslo_concurrency.processutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 10:15:40 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd utilization"} v 0)
Nov 24 10:15:40 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1938538434' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Nov 24 10:15:41 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 24 10:15:41 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1159984311' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 10:15:41 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:15:41 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:15:41 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:15:41.049 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:15:41 compute-1 nova_compute[230010]: 2025-11-24 10:15:41.061 230014 DEBUG oslo_concurrency.processutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.490s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 10:15:41 compute-1 nova_compute[230010]: 2025-11-24 10:15:41.069 230014 DEBUG nova.compute.provider_tree [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Inventory has not changed in ProviderTree for provider: 1b7b0f22-dba8-42a8-9de3-763c9152946e update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 24 10:15:41 compute-1 nova_compute[230010]: 2025-11-24 10:15:41.084 230014 DEBUG nova.scheduler.client.report [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Inventory has not changed for provider 1b7b0f22-dba8-42a8-9de3-763c9152946e based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 24 10:15:41 compute-1 nova_compute[230010]: 2025-11-24 10:15:41.087 230014 DEBUG nova.compute.resource_tracker [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 24 10:15:41 compute-1 nova_compute[230010]: 2025-11-24 10:15:41.088 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.611s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 10:15:41 compute-1 ceph-mon[80009]: from='client.27052 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 10:15:41 compute-1 ceph-mon[80009]: from='client.19332 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:15:41 compute-1 ceph-mon[80009]: from='client.27076 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 10:15:41 compute-1 ceph-mon[80009]: from='client.28601 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 10:15:41 compute-1 ceph-mon[80009]: from='client.19350 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:15:41 compute-1 ceph-mon[80009]: from='client.? 192.168.122.102:0/257434389' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Nov 24 10:15:41 compute-1 ceph-mon[80009]: from='client.? 192.168.122.101:0/1938538434' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Nov 24 10:15:41 compute-1 ceph-mon[80009]: from='client.? 192.168.122.101:0/1159984311' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 10:15:41 compute-1 ceph-mon[80009]: from='client.? 192.168.122.102:0/325384731' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Nov 24 10:15:41 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:15:41 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:15:41 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:15:41.851 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:15:42 compute-1 nova_compute[230010]: 2025-11-24 10:15:42.119 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:15:42 compute-1 ceph-mon[80009]: from='client.27091 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 10:15:42 compute-1 ceph-mon[80009]: from='client.28616 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 10:15:42 compute-1 ceph-mon[80009]: from='client.19356 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 10:15:42 compute-1 ceph-mon[80009]: from='client.19389 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 10:15:42 compute-1 ceph-mon[80009]: from='client.28649 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 10:15:42 compute-1 ceph-mon[80009]: from='client.27109 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 10:15:42 compute-1 ceph-mon[80009]: from='client.28640 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:15:42 compute-1 ceph-mon[80009]: from='client.? 192.168.122.100:0/2137382171' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Nov 24 10:15:42 compute-1 ceph-mon[80009]: from='client.? 192.168.122.102:0/1569002270' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Nov 24 10:15:42 compute-1 ceph-mon[80009]: from='client.28661 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 10:15:42 compute-1 ceph-mon[80009]: from='client.19407 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 10:15:42 compute-1 ceph-mon[80009]: from='client.27127 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 10:15:42 compute-1 ceph-mon[80009]: pgmap v1474: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:15:42 compute-1 ceph-mon[80009]: from='client.? 192.168.122.100:0/1695737134' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Nov 24 10:15:42 compute-1 ceph-mon[80009]: from='client.? 192.168.122.102:0/39772812' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Nov 24 10:15:42 compute-1 ceph-mon[80009]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Nov 24 10:15:42 compute-1 ceph-mon[80009]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Nov 24 10:15:42 compute-1 ceph-mon[80009]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Nov 24 10:15:42 compute-1 ceph-mon[80009]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Nov 24 10:15:42 compute-1 ceph-mon[80009]: from='client.? 192.168.122.100:0/1720225108' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Nov 24 10:15:42 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Nov 24 10:15:42 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Nov 24 10:15:42 compute-1 nova_compute[230010]: 2025-11-24 10:15:42.614 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:15:42 compute-1 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:15:42 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "quorum_status"} v 0)
Nov 24 10:15:42 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2091781462' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Nov 24 10:15:43 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:15:43 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:15:43 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:15:43.052 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:15:43 compute-1 nova_compute[230010]: 2025-11-24 10:15:43.088 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 10:15:43 compute-1 nova_compute[230010]: 2025-11-24 10:15:43.089 230014 DEBUG nova.compute.manager [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 24 10:15:43 compute-1 nova_compute[230010]: 2025-11-24 10:15:43.089 230014 DEBUG nova.compute.manager [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 24 10:15:43 compute-1 nova_compute[230010]: 2025-11-24 10:15:43.105 230014 DEBUG nova.compute.manager [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 24 10:15:43 compute-1 nova_compute[230010]: 2025-11-24 10:15:43.106 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 10:15:43 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "versions"} v 0)
Nov 24 10:15:43 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/100271252' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Nov 24 10:15:43 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Nov 24 10:15:43 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Nov 24 10:15:43 compute-1 ceph-mon[80009]: from='client.28670 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:15:43 compute-1 ceph-mon[80009]: from='client.27139 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 10:15:43 compute-1 ceph-mon[80009]: from='client.19431 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 10:15:43 compute-1 ceph-mon[80009]: from='client.28685 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 10:15:43 compute-1 ceph-mon[80009]: from='client.19461 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 10:15:43 compute-1 ceph-mon[80009]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Nov 24 10:15:43 compute-1 ceph-mon[80009]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Nov 24 10:15:43 compute-1 ceph-mon[80009]: from='client.28700 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 10:15:43 compute-1 ceph-mon[80009]: from='client.? 192.168.122.101:0/2091781462' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Nov 24 10:15:43 compute-1 ceph-mon[80009]: from='client.? 192.168.122.100:0/998003300' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Nov 24 10:15:43 compute-1 ceph-mon[80009]: from='client.? 192.168.122.102:0/3895810545' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Nov 24 10:15:43 compute-1 ceph-mon[80009]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Nov 24 10:15:43 compute-1 ceph-mon[80009]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Nov 24 10:15:43 compute-1 ceph-mon[80009]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Nov 24 10:15:43 compute-1 ceph-mon[80009]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Nov 24 10:15:43 compute-1 ceph-mon[80009]: from='client.? 192.168.122.101:0/100271252' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Nov 24 10:15:43 compute-1 ceph-mon[80009]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Nov 24 10:15:43 compute-1 ceph-mon[80009]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Nov 24 10:15:43 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "health", "detail": "detail", "format": "json-pretty"} v 0)
Nov 24 10:15:43 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2412318721' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Nov 24 10:15:43 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:15:43 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:15:43 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:15:43.854 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:15:44 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd tree", "format": "json-pretty"} v 0)
Nov 24 10:15:44 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1546744390' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Nov 24 10:15:44 compute-1 ceph-mon[80009]: from='client.19476 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 10:15:44 compute-1 ceph-mon[80009]: from='client.28724 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 10:15:44 compute-1 ceph-mon[80009]: from='client.27178 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:15:44 compute-1 ceph-mon[80009]: from='client.28739 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 10:15:44 compute-1 ceph-mon[80009]: from='client.? 192.168.122.100:0/2221911114' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Nov 24 10:15:44 compute-1 ceph-mon[80009]: pgmap v1475: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1022 B/s rd, 0 op/s
Nov 24 10:15:44 compute-1 ceph-mon[80009]: from='client.? 192.168.122.101:0/2412318721' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Nov 24 10:15:44 compute-1 ceph-mon[80009]: from='client.? 192.168.122.102:0/558850248' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Nov 24 10:15:44 compute-1 ceph-mon[80009]: from='client.? 192.168.122.101:0/1546744390' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Nov 24 10:15:44 compute-1 ceph-mon[80009]: from='client.? 192.168.122.102:0/3438556666' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Nov 24 10:15:44 compute-1 ceph-mon[80009]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Nov 24 10:15:44 compute-1 ceph-mon[80009]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Nov 24 10:15:44 compute-1 ceph-mon[80009]: from='client.? 192.168.122.100:0/868985174' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Nov 24 10:15:44 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Nov 24 10:15:44 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Nov 24 10:15:45 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:15:45 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:15:45 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:15:45.055 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:15:45 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config dump"} v 0)
Nov 24 10:15:45 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1739736414' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Nov 24 10:15:45 compute-1 podman[258986]: 2025-11-24 10:15:45.439597699 +0000 UTC m=+0.058041114 container health_status 6794d9d2f16f865643977633ea2bfec0506af6c4f15262c278b71a4c0754ea0b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Nov 24 10:15:45 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 10:15:45 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 10:15:45 compute-1 ceph-mon[80009]: from='client.19524 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:15:45 compute-1 ceph-mon[80009]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Nov 24 10:15:45 compute-1 ceph-mon[80009]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Nov 24 10:15:45 compute-1 ceph-mon[80009]: from='client.? 192.168.122.102:0/1765316069' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
Nov 24 10:15:45 compute-1 ceph-mon[80009]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Nov 24 10:15:45 compute-1 ceph-mon[80009]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Nov 24 10:15:45 compute-1 ceph-mon[80009]: from='client.? 192.168.122.100:0/2045062225' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Nov 24 10:15:45 compute-1 ceph-mon[80009]: from='client.? 192.168.122.101:0/1739736414' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Nov 24 10:15:45 compute-1 ceph-mon[80009]: from='client.? 192.168.122.102:0/664005702' entity='client.admin' cmd=[{"prefix": "fs ls"}]: dispatch
Nov 24 10:15:45 compute-1 ceph-mon[80009]: from='client.? 192.168.122.100:0/1269085449' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
Nov 24 10:15:45 compute-1 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 10:15:45 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:15:45 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:15:45 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:15:45.856 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:15:46 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "detail": "detail"} v 0)
Nov 24 10:15:46 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3294680015' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Nov 24 10:15:46 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df"} v 0)
Nov 24 10:15:46 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3434273546' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Nov 24 10:15:46 compute-1 ceph-mon[80009]: from='client.27229 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:15:46 compute-1 ceph-mon[80009]: from='client.28805 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:15:46 compute-1 ceph-mon[80009]: pgmap v1476: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:15:46 compute-1 ceph-mon[80009]: from='client.? 192.168.122.100:0/4035074411' entity='client.admin' cmd=[{"prefix": "fs ls"}]: dispatch
Nov 24 10:15:46 compute-1 ceph-mon[80009]: from='client.? 192.168.122.102:0/2445363141' entity='client.admin' cmd=[{"prefix": "mds stat"}]: dispatch
Nov 24 10:15:46 compute-1 ceph-mon[80009]: from='client.? 192.168.122.101:0/3294680015' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Nov 24 10:15:46 compute-1 ceph-mon[80009]: from='client.? 192.168.122.102:0/1552895444' entity='client.admin' cmd=[{"prefix": "mon dump"}]: dispatch
Nov 24 10:15:46 compute-1 ceph-mon[80009]: from='client.? 192.168.122.101:0/3434273546' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Nov 24 10:15:47 compute-1 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 10:15:47 compute-1 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:15:47 compute-1 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:15:47.060 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:15:47 compute-1 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "fs dump"} v 0)
Nov 24 10:15:47 compute-1 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1101154622' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
Nov 24 10:15:47 compute-1 nova_compute[230010]: 2025-11-24 10:15:47.121 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
